Search Results

Search found 1941 results on 78 pages for 'id3 metadata'.

  • 2012 R2 services will not start after promotion to Domain Controller

    - by Cybersylum
    Having a peculiar issue promoting a Windows 2012 R2 server in a domain at the 2003 domain/forest functional level. I built a new 2012 R2 server and added the following software (LabTech, AppAssure, ESET A/V, and TeamViewer). It activated and appeared to be working fine. I added the Active Directory Domain Services role and completed the configuration (domain/forest prep and DC promotion). All appeared to go well. I rebooted the server, and that's where the peculiar stuff began. I noticed the server indicated it needed to be activated again but would not accept the key; I verified the key was good. That's when I noticed the Software Protection service (as well as many other core services: Base Filtering Engine, DHCP Client, firewall, etc.) would not start. The error message for all of them was "Access Denied". I called MS, and they wanted to troubleshoot at a service level. Their fix was to use procmon, identify the resource that needed permissions (registry key, file, or folder), and add "everyone" with full control. That got the services to start, but the problem re-appeared after a reboot. Thinking the issue might have been with the anti-virus package during the promotion process, I rebuilt the DCs from scratch and removed the metadata from AD (as I could not demote the machines: "RPC server unavailable"). I tried to promote the newly built machines again, the only change to the brand-new machines being critical updates. Again the promotion appeared to work fine, but upon reboot (and a long wait to allow replication to occur) similar problems began to re-appear. I have verified that the schema updates are correct (schema version is 69, for Windows 2012 R2). I am not finding much about this issue through my own searches, so I thought I would post this to see if anyone else has seen anything similar...

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host: $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/ This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
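
    A minimal sketch of what such a run-vm wrapper could look like, assuming a QEMU/KVM host, a shared read-only base image booted with -snapshot (so no per-instance disk copies are needed), and a 9p/virtfs share for moving the task and its results; every path, the image name, and the assumption that the guest mounts the shares and runs /task.sh at boot are hypothetical:

        #!/bin/bash
        # run-vm.sh -- boot a throwaway VM, run one task, collect results (sketch)
        RAM="$1"          # memory in MB, e.g. 2048
        TASK="$2"         # host path to the script the guest should run
        RESULTS="$3"      # host directory the guest writes results into
        BASE_IMG=/images/ubuntu-base.qcow2   # one image shared by all instances

        WORKDIR=$(mktemp -d)
        cp "$TASK" "$WORKDIR/task.sh"        # the guest is assumed to pick this up

        # -snapshot discards all writes to the base image, so any number of
        # instances can boot the same file concurrently without copying it.
        qemu-system-x86_64 \
          -m "$RAM" \
          -snapshot \
          -drive file="$BASE_IMG",if=virtio \
          -virtfs local,path="$WORKDIR",mount_tag=task,security_model=none \
          -virtfs local,path="$RESULTS",mount_tag=results,security_model=none \
          -nographic

        rm -rf "$WORKDIR"

    Results persist only through the shared directories, which matches the behaviour described above; without root you may have to run without KVM acceleration, which is slower but still works.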

    Read the article

  • How to diagnose issue between mobo, RAID, and SSD cache drive? [migrated]

    - by goober
    Background: This issue is happening on my custom-built desktop. Relevant specs:
    - Motherboard: ASUS P8Z68-V PRO, utilizing Intel RST technology (an application that uses an otherwise unused SSD as a cache)
    - Processor: Intel Core i7-2600K (not overclocked)
    - HDDs: RAID1 of 2x Seagate Barracuda 1TB (ST31000524AS), RAID performed via the Z68 chipset
    The machine has run fine for ~1 year with no issues and has been well maintained (dust, etc.).
    What happened: Random, intermittent freezing issues. I looked at the RST application screen and saw the acceleration cache listed as "unavailable"; it recommended that I power down and reconnect the drive. Reconnecting the drive did not help, and after I attempted to move the drive to another SATA port the acceleration option disappeared from the RST software. Now the freeze happens whenever something particularly data-driven is loaded (a video, a game, etc.).
    Steps attempted:
    - Reconnected the drive, to no avail.
    - Updated the Intel RST software to v11.6.0.1030 to see if that made a difference.
    - Attempted to move the drive to another SATA port; the acceleration option disappeared from the RST software.
    - Connected the drive as its own volume, formatted it, and ran a disk check for errors -- all seems fine.
    - Reconnected the drive and selected it again as the cache drive.
    What happens when there is a freeze: the machine freezes and I am unable to perform any command; the screen then goes black, and I hit the reset button. During boot, all drives show as "Disabled" and I am told no volume can be found. I then hit the reset button (or power off/on) again; either the next time, or sometimes after repeating this once more, the metadata cache is reconstructed and the system boots fine, showing the SSD as a cache.
    Question: I believe this is an issue with the SSD itself, but how can I be sure, since connecting it separately appeared to show no problems? I want to make sure it's not an issue with the motherboard, SATA ports, etc.
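
    One way to gather more direct evidence about the SSD itself, assuming you can boot the machine from a Linux live USB (smartmontools also ships a Windows build), is to read its SMART data and run a self-test; the device name below is an assumption:

        # Assumes the SSD shows up as /dev/sdc -- adjust for your system.
        smartctl -a /dev/sdc            # dump SMART attributes and the error log;
                                        # look at reallocated sectors and wear or
                                        # media-wearout indicators
        smartctl -t long /dev/sdc       # start an extended self-test
        # ...wait for the duration smartctl reports, then:
        smartctl -l selftest /dev/sdc   # show the self-test result

    A clean SMART log and self-test would shift suspicion back toward the RST caching layer or the SATA port/cable rather than the drive itself.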

    Read the article

  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that is refreshed each day with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple, straightforward way to do this. Both databases run on the same server, so apparently that rules out replication? For clarification, here is what I would like to happen:
    1. The test database is created with production data.
    2. I create some test records that I want to keep on the test server (basically so I can have example records that I can play with).
    3. The next day, the database is completely refreshed, but the records I created that day are retained. Records that were untouched that day are replaced with records from the production database.
    The complication is that if a record in the production database is deleted, I want it to be deleted on the test database too, so I do want to get rid of records in the test database that no longer exist in the production database, unless those records were created within the test database. It seems like the only way to do this would be to have some sort of table storing metadata about the records being created? So for example, something like this:
    CREATE TABLE MetaDataRecords (
        id integer not null primary key auto_increment,
        tablename varchar(100),
        action char(1),
        pk varchar(100)
    );
    DELETE FROM testdb.users
    WHERE NOT EXISTS (SELECT * FROM proddb.users
                      WHERE proddb.users.id=testdb.users.id)
      AND NOT EXISTS (SELECT * FROM testdb.MetaDataRecords
                      WHERE testdb.MetaDataRecords.pk=testdb.users.pk
                        AND testdb.MetaDataRecords.action='C'
                        AND testdb.MetaDataRecords.tablename='users');
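
    A rough sketch of how the nightly refresh could be wired up from cron with plain mysqldump/mysql, assuming MySQL, a cleanup script containing statements like the DELETE above, and database/table names that are purely illustrative:

        #!/bin/bash
        # refresh-testdb.sh -- hypothetical nightly refresh, e.g. run from cron
        PROD=proddb
        TEST=testdb
        TABLES="users orders"          # tables mirrored from production

        for t in $TABLES; do
            # --replace emits REPLACE statements, so existing rows are overwritten
            # in place and new production rows are added (schema assumed to exist).
            mysqldump --no-create-info --replace "$PROD" "$t" | mysql "$TEST"
        done

        # Remove rows that were deleted in production, except those recorded as
        # test-created in MetaDataRecords (statements like the DELETE above).
        mysql "$TEST" < cleanup-deleted-rows.sql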

    Read the article

  • Active Directory server down, recovering without reinstalling

    - by whatever
    My Windows 2003 server suddenly ceased to function as a DC (this server is the only DC of the domain). All AD-related services are down. The only way I can log in to AD is physically at the machine. Every time I access an AD-related service (e.g. "AD Users and Computers") I get the error below:
    Naming information cannot be located because: The specified directory service attribute or value does not exist. Contact your system administrator to verify that your domain is properly configured and is currently online.
    I found the system event below, which matches the time when the issue started; it re-occurs every time I reboot the server.
    NTDS General | Global Catalog | Active Directory was unable to establish a connection with the global catalog. Additional Data: Error value: 1355 The specified domain either does not exist or could not be contacted. Internal ID: 3200d33
    I started troubleshooting with DNS. Netdiag throws the error below, although I think this is simply a consequence of not being able to access the Global Catalog:
    The procedure entry point DnsGetPrimaryDomainName_UTF8 could not be located in the dynamic link library DNSAPI.dll.
    Anyway, DNS seems OK because I can ping the DC FQDN from the DC itself. I found the solution below, which is supposed to help by doing some cleanup of the metadata: http://support.microsoft.com/kb/216498 If I follow procedure 1, here is what I get at step 9:
    no current site
    Domain - DC=<mydomain>,DC=<com>
    no current server
    no current naming context
    I can continue the procedure until step 14. I haven't tested step 15, as my understanding is that I would then have to reinstall the whole AD again. Is there any way I can recover my AD from there without having to reinstall the whole thing? Update: Yes, the server was powered off/on because a reboot would take forever (not because I thought power cycling the unit would fix it any better than a reboot would).

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    What I've tried: I have two email storage architectures, old and new.
    Old:
    - courier-imapd on several (18+) 1TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers don't have replicas; no backups either
    New:
    - dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    - there is an identical server which has a max-15-minute-old rsync backup from the primary server
    - higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
    - the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO
    - scaling out was expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, email accounts would be moved manually
    Concerns/doubts: I'm not convinced that the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
    Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to re-write (parts of) these to work with an object store of some sort, such as OpenStack object storage?
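
    For what it's worth, a common scale-out pattern with Dovecot is to shard accounts across independent server pairs and move individual accounts with dsync rather than relying on a shared or replicated filesystem. A rough sketch of moving one mailbox between servers (hostnames and the account are assumptions, and the exact dsync invocation differs between Dovecot versions):

        # Hypothetical: copy one mailbox from the current server to new01 over SSH,
        # using Dovecot's own dsync protocol.
        doveadm backup -u alice@example.com \
            ssh mailadmin@new01 doveadm dsync-server -u alice@example.com

        # Afterwards, repoint the account's MAILHOME / proxy destination at new01
        # in whatever user database (SQL/LDAP) the delivery and IMAP proxies consult.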

    Read the article

  • How do I get yum to see updates to a local repo without cleaning cache?

    - by Matt
    I have set up a local yum repository which I use to install test builds. For testing purposes, my packages are versioned by <svn version number>.<date>.<time> (e.g. 12345.20110908.150404). The trouble is, once I make a new RPM, copy it to the repository directory, and run createrepo $REPO_DIR, yum does not see the new RPM as being available.
    $ cd $REPO_DIR
    $ ls -1
    repodata
    package-12345.20110908.150404-1.x86_64.rpm
    package-12345.20110908.174329-1.x86_64.rpm
    $ createrepo .
    # ...snip...
    $ rpm -q package
    package-12345.20110908.150404-1.x86_64
    $ yum list --showduplicates package
    Installed Packages
    package.x86_64    12345.20110908.150404-1    @repo
    Available Packages
    package.x86_64    12345.20110908.150404-1    repo
    I can see the updates and grab them if I run yum clean all and then re-fetch the metadata, but I think this just means I need to be doing something else on the repository side, as I don't have to do that for other yum repos. How do I need to set up my local repository so that I only need to run yum update from the client, without having to clean my yum cache?
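
    One frequent culprit here is yum's client-side metadata cache rather than the repository itself: for a local repo it can be told to revalidate on every run. A sketch, assuming the client-side repo definition lives in /etc/yum.repos.d/local-test.repo (the file name and paths are assumptions):

        # Client side: metadata_expire=0 makes yum re-check this repo's metadata on
        # every run instead of trusting its cache until the default expiry.
        cat > /etc/yum.repos.d/local-test.repo <<'EOF'
        [local-test]
        name=Local test builds
        baseurl=file:///srv/repo
        enabled=1
        gpgcheck=0
        metadata_expire=0
        EOF

        # Repository side: regenerate only what changed after dropping in a new RPM.
        createrepo --update /srv/repo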

    Read the article

  • Modifying value of "Rating" column within Explorer for arbitrary file types

    - by Fake Name
    Basically, I have a large body of assorted media (text, images, flash files, archives, folders, etc.) and I'm attempting to organize it. Windows Explorer has a Rating column, but there seems to be no way to modify the rating of a file short of opening it in its type-specific software (e.g. Media Player or Photo Viewer). However, this does not work when the file is of an unsupported type (.rar, .swf, ...) or is a directory. I'd be more than willing to consider a file-manager replacement (I've already looked at quite a few: Directory Opus, Total Commander, etc.), or even a solution that stores the rating metadata in a hidden file in each folder or in a separate database. The one truly critical requirement is the ability to sort by rating while remaining filetype-agnostic. Basically, is there any way to categorize a large collection of assorted files by rating that will work with any file type, including directories? Ideally, there would be an easy way to add arbitrary columns to Windows Explorer and edit them directly, but there seems to be no way to do this; the Rating column is the next best thing.

    Read the article

  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works. I wonder if anyone can suggest what would be involved in customizing such root filesystem images such that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are:
    1. If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
    2. I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?
    Many thanks for any suggestions...
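
    On the first concern, converting between disk image formats is largely a solved problem; a short sketch using qemu-img, assuming the raw image is called lenny-root.img (the output file names are illustrative):

        # Raw loopback image -> VMware VMDK
        qemu-img convert -f raw -O vmdk lenny-root.img lenny-root.vmdk

        # Raw loopback image -> qcow2 for KVM (Xen can also boot raw images directly)
        qemu-img convert -f raw -O qcow2 lenny-root.img lenny-root.qcow2

        # Inspect the result
        qemu-img info lenny-root.vmdk

    EC2 is the odd one out: an image has to be bundled and registered with Amazon's own tooling rather than simply converted, and its networking is DHCP-based, so the static TUN/TAP setup would need to be swapped out there.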

    Read the article

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me, as we need to take the site live very soon. I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush). I always set the optimizations to their maximum setting. After optimization, my PNGs come out visibly corrupted (the screenshot is not reproduced here). What's weird is, if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt like this is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image, and PNGs, depending on the image, may show either a portion of the image or a "progressive"-style corruption. The browser itself makes no difference; the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. Even when I disable all PNG optimizations except PNGCrush and the removal of the PNG metadata, the image is served corrupted. I'm optimizing JPEG images with JPEGmini. The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell, the error lies somewhere within the VM, either with Apache or (perhaps) with the network stack. It seems like the tools that optimize the compression of the PNGs and JPEGs are what trigger this error. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
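
    One cause that produces exactly this symptom (static files truncated or stale only when served out of a VirtualBox/Vagrant shared folder) is Apache using sendfile/mmap on the shared filesystem. If that is what is happening here, disabling both inside the VM is a low-risk test; the config path below is an assumption for CentOS 5:

        # Inside the Vagrant VM: turn off sendfile and mmap for static file delivery,
        # since both are known to misbehave on VirtualBox shared folders.
        cat > /etc/httpd/conf.d/no-sendfile.conf <<'EOF'
        EnableSendfile Off
        EnableMMAP Off
        EOF
        service httpd restart

    If the optimized images then load correctly, the optimizers were only the trigger: the smaller rewritten files changed on disk without the VM's cached view of their size and contents being refreshed.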

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold:
    1. Reduce storage utilization
    2. Reduce the bandwidth needed to sync the library with cloud storage
    Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same, and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (making exact duplicates of the three), the dedup ratio climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job, is appreciated.
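
    Two low-risk checks before writing ZFS dedup off for this dataset, sketched below with an assumed pool name of tank:

        # Simulate deduplication over the existing pool: prints a histogram of the
        # would-be dedup table and the overall ratio without changing anything.
        zdb -S tank

        # What the pool currently reports.
        zpool get dedupratio tank

        # Keep in mind that ZFS dedup compares whole, aligned records (recordsize,
        # 128K by default): if two FLAC files differ by even a few bytes of header
        # length, every following record is shifted and nothing matches, which is
        # consistent with the 1.00 ratio observed here.

    In other words, unless the variable-length tag headers can be stripped or padded to identical sizes, fixed-block dedup (ZFS included) has little to work with on this kind of data.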

    Read the article

  • How to make a redundant desktop system with daily snapshots? (Is btrfs ready for use?)

    - by TestUser16418
    I want to configure a desktop system in which the home filesystem would be redundant (e.g. RAID-1) and would have weekly snapshots taken. I've already done this with ZFS; the snapshot system is wonderful, and with send/recv you can easily create backups on external media. Unfortunately, this time I want GNU+Linux and not FreeBSD or Solaris, so I'm looking for suggestions for good alternatives. I reckon that my alternatives are:
    - btrfs: it seems to be exactly what I need; it has snapshots and commands that let you easily replicate zfs send. Yet all documentation mentions that it's still experimental. I can't seem to find any actual reports on its reliability or usability issues. Can you point me to any information on that issue that could clarify whether it would be a possible choice? I have a large preference for this option, mostly because I don't want to reformat the drives when btrfs becomes ready, but there's no information on whether it's usable at all, whether it's a silly idea to use it, etc. The question that I cannot get an answer to is what "experimental" means.
    - lvm snapshots and ext4: preferably not, since it can consume an awful amount of space when new files are created. Creating 200 GB of files requires 200 GB of free space and 200 GB additionally for snapshots. I have also found it unreliable -- a failed metadata rewrite results in an unreadable PV.
    - A single filesystem (ext4) on a RAID-1 array with custom COW snapshots using hardlinks (like cp -al). That's my current preference if I can't use btrfs.
    So how experimental is btrfs, which should I choose, and do I have any other options? What if I don't keep external incremental backups -- would that affect my choice?
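
    For reference, the btrfs version of that ZFS workflow looks roughly like the sketch below (device names, mount points and the snapshot layout are assumptions, and the usual "experimental" caveat applies):

        # Two-disk filesystem with both data and metadata mirrored (RAID-1).
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
        mount /dev/sdb /home

        # Weekly read-only snapshot, e.g. from cron.
        btrfs subvolume snapshot -r /home /home/.snapshots/$(date +%Y-%m-%d)

        # Replicate a snapshot to external media (which must also be btrfs),
        # similar in spirit to zfs send/recv.
        btrfs send /home/.snapshots/2013-01-06 | btrfs receive /mnt/backup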

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries: I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or store its data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, and then query and process that XML on demand? Thanks in advance.
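
    On the last point, if the bugs end up as plain XML files, multi-tag queries don't need a dedicated application at all; a sketch using xmllint's XPath support, where the file layout and tag vocabulary are invented purely for illustration:

        # bugs.xml is assumed to look like:
        #   <bugs>
        #     <bug id="17" status="open">
        #       <tag>ui</tag><tag>crash</tag>
        #       <summary>Crash when the options dialog is resized</summary>
        #     </bug>
        #   </bugs>

        # All open bugs carrying BOTH the 'ui' and 'crash' tags:
        xmllint --xpath \
          '//bug[@status="open"][tag="ui"][tag="crash"]/summary/text()' bugs.xml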

    Read the article

  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow startup times for Tomcat 7. I have done some testing by tweaking configuration parameters both on Linux CentOS (kernel version 2.6.18) and on Windows 7, using this link as my primary guide: http://wiki.apache.org/tomcat/HowTo/FasterStartUp and managed only a modest improvement. The improvements seemed to result when I added the metadata-complete="true" attribute to the <web-app> element of my WEB-INF/web.xml file, and when I added the names of almost all the jars we use for our application to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in the conf/catalina.properties file. I've also used this JAVA_OPTS in the setenv.sh file:
    JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true "
    but actually saw my startup times increase slightly. Our QA and production environments are on Linux CentOS, so I'm hoping to get more information on improving Tomcat 7 startup times in that environment. My primary role is Java developer and I don't have much system administration experience, so I appreciate any input. Thank you for your time and suggestions.
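
    One Linux-specific cause worth ruling out, since it would not show up on Windows: Tomcat's session ID generator can block for a long time waiting for entropy from /dev/random on headless CentOS machines. A hedged sketch of checking the entropy pool and pointing the JVM at the non-blocking source in setenv.sh:

        # How much entropy does the box have? Values in the low hundreds or below
        # make SecureRandom seeding a plausible suspect for slow startups.
        cat /proc/sys/kernel/random/entropy_avail

        # In $CATALINA_BASE/bin/setenv.sh: use the non-blocking urandom source.
        # The extra "/./" works around the JVM silently remapping file:/dev/urandom.
        JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"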

    Read the article

  • Windows OS level file open event trigger

    - by john
    Colleagues, I need to run a script/program on certain basic OS-level events, in particular when a file in Windows is opened. The open may be read-only or for editing, and may be initiated by a number of means: from Windows Explorer (open or ), by being selected in a viewing or editing application from the native file chooser, or by drag-and-drop into an editing or viewing application. Further, I need the trigger to "hold" the event, preventing the action from completing until the handler program has finished running. The event handler program may return a pass state or a fail state; if a fail state is returned, then the event must disallow the initially requested action. Lastly, I need to add to the file in question a property or attribute that will contain metadata used by the above event-trigger handler program to make the pass/fail determination that ultimately decides whether the user is permitted to open the file. Please note that this is NOT a Windows event log situation, but one at the level of the OS file-open event. Thanks very much for your help. Regards, j

    Read the article

  • Spring Security Configuration Leads to Perpetual Authentication Request

    - by Sammy
    Hello, I have configured my web application with the following config file: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:security="http://www.springframework.org/schema/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> <security:global-method-security secured-annotations="enabled" pre-post-annotations="enabled" /> <!-- Filter chain; this is referred to from the web.xml file. Each filter is defined and configured as a bean later on. --> <!-- Note: anonumousProcessingFilter removed. --> <bean id="filterChainProxy" class="org.springframework.security.web.FilterChainProxy"> <security:filter-chain-map path-type="ant"> <security:filter-chain pattern="/**" filters="securityContextPersistenceFilter, basicAuthenticationFilter, exceptionTranslationFilter, filterSecurityInterceptor" /> </security:filter-chain-map> </bean> <!-- This filter is responsible for session management, or rather the lack thereof. --> <bean id="securityContextPersistenceFilter" class="org.springframework.security.web.context.SecurityContextPersistenceFilter"> <property name="securityContextRepository"> <bean class="org.springframework.security.web.context.HttpSessionSecurityContextRepository"> <property name="allowSessionCreation" value="false" /> </bean> </property> </bean> <!-- Basic authentication filter. --> <bean id="basicAuthenticationFilter" class="org.springframework.security.web.authentication.www.BasicAuthenticationFilter"> <property name="authenticationManager" ref="authenticationManager" /> <property name="authenticationEntryPoint" ref="authenticationEntryPoint" /> </bean> <!-- Basic authentication entry point. --> <bean id="authenticationEntryPoint" class="org.springframework.security.web.authentication.www.BasicAuthenticationEntryPoint"> <property name="realmName" value="Ayudo Web Service" /> </bean> <!-- An anonymous authentication filter, which is chained after the normal authentication mechanisms and automatically adds an AnonymousAuthenticationToken to the SecurityContextHolder if there is no existing Authentication held there. --> <!-- <bean id="anonymousProcessingFilter" class="org.springframework.security.web.authentication.AnonymousProcessingFilter"> <property name="key" value="ayudo" /> <property name="userAttribute" value="anonymousUser, ROLE_ANONYMOUS" /> </bean> --> <!-- Authentication manager that chains our main authentication provider and anonymous authentication provider. --> <bean id="authenticationManager" class="org.springframework.security.authentication.ProviderManager"> <property name="providers"> <list> <ref local="daoAuthenticationProvider" /> <ref local="inMemoryAuthenticationProvider" /> <!-- <ref local="anonymousAuthenticationProvider" /> --> </list> </property> </bean> <!-- Main authentication provider; in this case, memory implementation. --> <bean id="inMemoryAuthenticationProvider" class="org.springframework.security.authentication.dao.DaoAuthenticationProvider"> <property name="userDetailsService" ref="propertiesUserDetails" /> </bean> <security:user-service id="propertiesUserDetails" properties="classpath:operators.properties" /> <!-- Main authentication provider. 
--> <bean id="daoAuthenticationProvider" class="org.springframework.security.authentication.dao.DaoAuthenticationProvider"> <property name="userDetailsService" ref="userDetailsService" /> </bean> <!-- An anonymous authentication provider which is chained into the ProviderManager so that AnonymousAuthenticationTokens are accepted. --> <!-- <bean id="anonymousAuthenticationProvider" class="org.springframework.security.authentication.AnonymousAuthenticationProvider"> <property name="key" value="ayudo" /> </bean> --> <bean id="userDetailsService" class="org.springframework.security.core.userdetails.jdbc.JdbcDaoImpl"> <property name="dataSource" ref="dataSource" /> </bean> <bean id="exceptionTranslationFilter" class="org.springframework.security.web.access.ExceptionTranslationFilter"> <property name="authenticationEntryPoint" ref="authenticationEntryPoint" /> <property name="accessDeniedHandler"> <bean class="org.springframework.security.web.access.AccessDeniedHandlerImpl" /> </property> </bean> <bean id="filterSecurityInterceptor" class="org.springframework.security.web.access.intercept.FilterSecurityInterceptor"> <property name="securityMetadataSource"> <security:filter-security-metadata-source use-expressions="true"> <security:intercept-url pattern="/*.html" access="permitAll" /> <security:intercept-url pattern="/version" access="permitAll" /> <security:intercept-url pattern="/users/activate" access="permitAll" /> <security:intercept-url pattern="/**" access="isAuthenticated()" /> </security:filter-security-metadata-source> </property> <property name="authenticationManager" ref="authenticationManager" /> <property name="accessDecisionManager" ref="accessDecisionManager" /> </bean> <bean id="accessDecisionManager" class="org.springframework.security.access.vote.AffirmativeBased"> <property name="decisionVoters"> <list> <bean class="org.springframework.security.access.vote.RoleVoter" /> <bean class="org.springframework.security.web.access.expression.WebExpressionVoter" /> </list> </property> </bean> As soon as I run my application on tomcat, I get a request for username/password basic authentication dialog. Even when I try to access: localhost:8080/myapp/version, which is explicitly set to permitAll, I get the authentication request dialog. Help! Thank, Sammy

    Read the article

  • Bogus InvalidOperationException (in a DataServiceRequestException)

    - by Andrei Rinea
    I am having a hard time with ADO.NET Data Services (formerly code-named Astoria) as it gives me a bogus exception when I try to insert a new entity from the silverlight client and trying in a clean project (the same code) doesn't. In both cases, however, data is correctly inserted into the database. Using Fiddler (an HTTP debugger I could see that there is no problem in the HTTP communication as I will show later in this question. The code : var ctx = new MyProject123Entities(new Uri("http://andreiri/MyProject.Data/Data.svc")); var i = new Zone() { Data = DateTime.Now, IdElement = 1 }; ctx.AddToZone(i); i.StareZone = new StareZone() { IdStareZone = 1 }; ctx.AttachTo("StareZone", i.StareZone); ctx.SetLink(i, "StareZone", i.StareZone); i.TipZone = new TipZone() { IdTipZone = 1 }; ctx.AttachTo("TipZone", i.TipZone); ctx.SetLink(i, "TipZone", i.TipZone); i.User = new User() { IdUser = 2 }; ctx.AttachTo("User", i.User); ctx.SetLink(i, "User", i.User); ctx.BeginSaveChanges(r =] ctx.EndSaveChanges(r), null); when run the last line (ctx.EndSaveChanges(r)) will throw the following exception : System.Data.Services.Client.DataServiceRequestException was unhandled by user code Message="An error occurred while processing this request." StackTrace: at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleBatchResponse() at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.EndRequest() at System.Data.Services.Client.DataServiceContext.EndSaveChanges(IAsyncResult asyncResult) at MyProject.MainPage.[]c__DisplayClassd6.[]c__DisplayClassd8.[dashboard_PostZoneCurent]b__d5(IAsyncResult r) at System.Data.Services.Client.BaseAsyncResult.HandleCompleted() at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleCompleted(PerRequest pereq) at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndRead(IAsyncResult asyncResult) at System.IO.Stream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state) at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult) InnerException: System.InvalidOperationException Message="The context is already tracking a different entity with the same resource Uri." StackTrace: at System.Data.Services.Client.DataServiceContext.AttachTo(Uri identity, Uri editLink, String etag, Object entity, Boolean fail) at System.Data.Services.Client.MaterializeAtom.MoveNext() at System.Data.Services.Client.DataServiceContext.HandleResponsePost(ResourceBox entry, MaterializeAtom materializer, Uri editLink, String etag) at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.[HandleBatchResponse]d__1d.MoveNext() InnerException: (there is no further information regarding the exception although the ADo.NET Data Service is configured to return detailed informations) However the row is inserted correctly and completely in the database. 
Using fiddler I can see that the request : <?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="MyProject123Model.Zone" /> <title /> <updated>2009-09-11T13:36:46.917157Z</updated> <author> <name /> </author> <id /> <link href="http://andreiri/MyProject.Data/Data.svc/StareZone(1)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" /> <link href="http://andreiri/MyProject.Data/Data.svc/TipZone(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" /> <link href="http://andreiri/MyProject.Data/Data.svc/User(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" /> <content type="application/xml"> <m:properties> <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data> <d:Detalii>aslkdfjasldkfj</d:Detalii> <d:IdElement m:type="Edm.Int32">1</d:IdElement> <d:IdZone m:type="Edm.Int32">0</d:IdZone> <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post> <d:X_Repost m:type="Edm.Decimal" m:null="true" /> <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post> <d:Y_Repost m:type="Edm.Decimal" m:null="true" /> </m:properties> </content> </entry> is well accepted and a successful response is returned : HTTP/1.1 201 Created Date: Fri, 11 Sep 2009 13:36:47 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 DataServiceVersion: 1.0; Location: http://andreiri/MyProject.Data/Data.svc/Zone(75) Cache-Control: no-cache Content-Type: application/atom+xml;charset=utf-8 Content-Length: 2213 <?xml version="1.0" encoding="utf-8" standalone="yes"?> <entry xml:base="http://andreiri/MyProject.Data/Data.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <id>http://andreiri/MyProject.Data/Data.svc/Zone(75)</id> <title type="text"></title> <updated>2009-09-11T13:36:47Z</updated> <author> <name /> </author> <link rel="edit" title="Zone" href="Zone(75)" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CenterZone" type="application/atom+xml;type=feed" title="CenterZone" href="Zone(75)/CenterZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/ZoneMobil" type="application/atom+xml;type=feed" title="ZoneMobil" href="Zone(75)/ZoneMobil" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" title="StareZone" href="Zone(75)/StareZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" title="TipZone" href="Zone(75)/TipZone" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" title="User" href="Zone(75)/User" /> <category term="MyProject123Model.Zone" scheme="http://schemas.microsoft.com ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:IdZone m:type="Edm.Int32">75</d:IdZone> <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post> <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post> <d:X_Repost 
m:type="Edm.Decimal" m:null="true" /> <d:Y_Repost m:type="Edm.Decimal" m:null="true" /> <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data> <d:Detalii>aslkdfjasldkfj</d:Detalii> <d:IdElement m:type="Edm.Int32">1</d:IdElement> </m:properties> </content> </entry> Why do I get an exception? And, using this in a clean project does not throw the exception..

    Read the article

  • Linq to LLBLGen query problem

    - by Jeroen Breuer
    Hello, I've got a Stored Procedure and i'm trying to convert it to a Linq to LLBLGen query. The query in Linq to LLBGen works, but when I trace the query which is send to sql server it is far from perfect. This is the Stored Procedure: ALTER PROCEDURE [dbo].[spDIGI_GetAllUmbracoProducts] -- Add the parameters for the stored procedure. @searchText nvarchar(255), @startRowIndex int, @maximumRows int, @sortExpression nvarchar(255) AS BEGIN SET @startRowIndex = @startRowIndex + 1 SET @searchText = '%' + @searchText + '%' -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- This is the query which will fetch all the UmbracoProducts. -- This query also supports paging and sorting. WITH UmbracoOverview As ( SELECT ROW_NUMBER() OVER( ORDER BY CASE WHEN @sortExpression = 'productName' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode' THEN umbracoProduct.productCode END ASC, CASE WHEN @sortExpression = 'productName DESC' THEN umbracoProduct.productName WHEN @sortExpression = 'productCode DESC' THEN umbracoProduct.productCode END DESC ) AS row_num, umbracoProduct.umbracoProductId, umbracoProduct.productName, umbracoProduct.productCode FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) ) SELECT UmbracoOverview.UmbracoProductId, UmbracoOverview.productName, UmbracoOverview.productCode FROM UmbracoOverview WHERE (row_num >= @startRowIndex AND row_num < (@startRowIndex + @maximumRows)) -- This query will count all the UmbracoProducts. -- This query is used for paging inside ASP.NET. SELECT COUNT (umbracoProduct.umbracoProductId) AS CountNumber FROM umbracoProduct INNER JOIN product ON umbracoProduct.umbracoProductId = product.umbracoProductId WHERE (umbracoProduct.productName LIKE @searchText OR umbracoProduct.productCode LIKE @searchText OR product.code LIKE @searchText OR product.description LIKE @searchText OR product.descriptionLong LIKE @searchText OR product.unitCode LIKE @searchText) END This is my Linq to LLBLGen query: using System.Linq.Dynamic; var q = ( from up in MetaData.UmbracoProduct join p in MetaData.Product on up.UmbracoProductId equals p.UmbracoProductId where up.ProductCode.Contains(searchText) || up.ProductName.Contains(searchText) || p.Code.Contains(searchText) || p.Description.Contains(searchText) || p.DescriptionLong.Contains(searchText) || p.UnitCode.Contains(searchText) select new UmbracoProductOverview { UmbracoProductId = up.UmbracoProductId, ProductName = up.ProductName, ProductCode = up.ProductCode } ).OrderBy(sortExpression); //Save the count in HttpContext.Current.Items. This value will only be saved during 1 single HTTP request. HttpContext.Current.Items["AllProductsCount"] = q.Count(); //Returns the results paged. 
return q.Skip(startRowIndex).Take(maximumRows).ToList<UmbracoProductOverview>(); This is my Initial expression to process: value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.UmbracoProductEntity]).Join(value(SD.LLBLGen.Pro.LinqSupportClasses.DataSource`1[Eurofysica.DB.EntityClasses.ProductEntity]), up => up.UmbracoProductId, p => p.UmbracoProductId, (up, p) => new <>f__AnonymousType0`2(up = up, p = p)).Where(<>h__TransparentIdentifier0 => (((((<>h__TransparentIdentifier0.up.ProductCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText) || <>h__TransparentIdentifier0.up.ProductName.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Code.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.Description.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.DescriptionLong.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText)) || <>h__TransparentIdentifier0.p.UnitCode.Contains(value(Eurofysica.BusinessLogic.BLL.Controllers.UmbracoProductController+<>c__DisplayClass1).searchText))).Select(<>h__TransparentIdentifier0 => new UmbracoProductOverview() {UmbracoProductId = <>h__TransparentIdentifier0.up.UmbracoProductId, ProductName = <>h__TransparentIdentifier0.up.ProductName, ProductCode = <>h__TransparentIdentifier0.up.ProductCode}).OrderBy( => .ProductName).Count() Now this is how the queries look like that are send to sql server: Select query: Query: SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6)))) Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". 
Count query: Query: SELECT TOP 1 COUNT(*) AS [LPAV_] FROM (SELECT [LPA_L2].[umbracoProductId] AS [UmbracoProductId], [LPA_L2].[productName] AS [ProductName], [LPA_L2].[productCode] AS [ProductCode] FROM ( [eurofysica].[dbo].[umbracoProduct] [LPA_L2] INNER JOIN [eurofysica].[dbo].[product] [LPA_L3] ON [LPA_L2].[umbracoProductId] = [LPA_L3].[umbracoProductId]) WHERE ( ( ( ( ( ( ( ( [LPA_L2].[productCode] LIKE @ProductCode1) OR ( [LPA_L2].[productName] LIKE @ProductName2)) OR ( [LPA_L3].[code] LIKE @Code3)) OR ( [LPA_L3].[description] LIKE @Description4)) OR ( [LPA_L3].[descriptionLong] LIKE @DescriptionLong5)) OR ( [LPA_L3].[unitCode] LIKE @UnitCode6))))) [LPA_L1] Parameter: @ProductCode1 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @ProductName2 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Code3 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @Description4 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @DescriptionLong5 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". Parameter: @UnitCode6 : String. Length: 2. Precision: 0. Scale: 0. Direction: Input. Value: "%%". As you can see no sorting or paging is done (like in my Stored Procedure). This is probably done inside the code after all the results are fetched. This costs a lot of performance! Does anybody know how I can convert my Stored Procedure to Linq to LLBLGen the proper way?

    Read the article

  • Unable to get HTTPS MEX endpoint to work

    - by Rahul
    I have been trying to configure WCF to work with Azure ACS. This WCF configuration has 2 bugs: It does not publish MEX end point. It does not invoke custom behaviour extension. (It just stopped doing that after I made some changes which I can't remember) What could be possibly wrong here? <configuration> <configSections> <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </configSections> <location path="FederationMetadata"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> </system.web> <system.serviceModel> <services> <service name="production" behaviorConfiguration="AccessServiceBehavior"> <endpoint contract="IMetadataExchange" binding="mexHttpsBinding" address="mex" /> <endpoint address="" binding="customBinding" contract="Samples.RoleBasedAccessControl.Service.IService1" bindingConfiguration="serviceBinding" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="AccessServiceBehavior"> <federatedServiceHostConfiguration /> <sessionExtension/> <useRequestHeadersForMetadataAddress> <defaultPorts> <add scheme="http" port="8000" /> <add scheme="https" port="8443" /> </defaultPorts> </useRequestHeadersForMetadataAddress> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpsGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> <serviceCredentials> <!--Certificate added by FedUtil. 
Subject='CN=DefaultApplicationCertificate', Issuer='CN=DefaultApplicationCertificate'.--> <serviceCertificate findValue="XXXXXXXXXXXXXXX" storeLocation="LocalMachine" storeName="My" x509FindType="FindByThumbprint" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> <extensions> <behaviorExtensions> <add name="sessionExtension" type="Samples.RoleBasedAccessControl.Service.RsaSessionServiceBehaviorExtension, Samples.RoleBasedAccessControl.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <add name="federatedServiceHostConfiguration" type="Microsoft.IdentityModel.Configuration.ConfigureServiceHostBehaviorExtensionElement, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </behaviorExtensions> </extensions> <protocolMapping> <add scheme="http" binding="customBinding" bindingConfiguration="serviceBinding" /> <add scheme="https" binding="customBinding" bindingConfiguration="serviceBinding"/> </protocolMapping> <bindings> <customBinding> <binding name="serviceBinding"> <security authenticationMode="SecureConversation" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10" requireSecurityContextCancellation="false"> <secureConversationBootstrap authenticationMode="IssuedTokenOverTransport" messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10"> <issuedTokenParameters> <additionalRequestParameters> <AppliesTo xmlns="http://schemas.xmlsoap.org/ws/2004/09/policy"> <EndpointReference xmlns="http://www.w3.org/2005/08/addressing"> <Address>https://127.0.0.1:81/</Address> </EndpointReference> </AppliesTo> </additionalRequestParameters> <claimTypeRequirements> <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" isOptional="true" /> <add claimType="http://schemas.microsoft.com/ws/2008/06/identity/claims/role" isOptional="true" /> <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" isOptional="true" /> <add claimType="http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider" isOptional="true" /> </claimTypeRequirements> <issuerMetadata address="https://XXXXYYYY.accesscontrol.windows.net/v2/wstrust/mex" /> </issuedTokenParameters> </secureConversationBootstrap> </security> <httpsTransport /> </binding> </customBinding> </bindings> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> </system.webServer> <microsoft.identityModel> <service> <audienceUris> <add value="http://127.0.0.1:81/" /> </audienceUris> <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"> <trustedIssuers> <add thumbprint="THUMBPRINT HERE" name="https://XXXYYYY.accesscontrol.windows.net/" /> </trustedIssuers> </issuerNameRegistry> <certificateValidation certificateValidationMode="None" /> </service> </microsoft.identityModel> <appSettings> <add key="FederationMetadataLocation" value="https://XXXYYYY.accesscontrol.windows.net/FederationMetadata/2007-06/FederationMetadata.xml " /> </appSettings> </configuration> Edit: Further implementation details I have the following Behaviour Extension Element (which is not getting invoked currently) public class RsaSessionServiceBehaviorExtension : BehaviorExtensionElement { public override Type 
BehaviorType { get { return typeof(RsaSessionServiceBehavior); } } protected override object CreateBehavior() { return new RsaSessionServiceBehavior(); } } The namespaces and assemblies are correct in the config. There is more code involved for checking token validation, but in my opinion at least MEX should get published and CreateBehavior() should get invoked in order for me to proceed further.

    Read the article

  • WCF Service returning 400 error: The body of the message cannot be read because it is empty

    - by Josh
    I have a WCF service that is causing a bit of a headache. I have tracing enabled, I have an object with a data contract being built and passed in, but I am seeing this error in the log: <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error"> <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier> <Description>Throwing an exception.</Description> <AppDomain>efb0d0d7-1-129315381593520544</AppDomain> <Exception> <ExceptionType>System.ServiceModel.ProtocolException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>There is a problem with the XML that was received from the network. See inner exception for more details.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString> System.ServiceModel.ProtocolException: There is a problem with the XML that was received from the network. See inner exception for more details. ---&amp;gt; System.Xml.XmlException: The body of the message cannot be read because it is empty. 
--- End of inner exception stack trace --- </ExceptionString> <InnerException> <ExceptionType>System.Xml.XmlException, System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>The body of the message cannot be read because it is empty.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.Xml.XmlException: The body of the message cannot be read because it is empty.</ExceptionString> </InnerException> </Exception> </TraceRecord> </DataItem> </TraceData> So, here is my service interface: [ServiceContract] public interface IRDCService { [OperationContract] Response<Customer> GetCustomer(CustomerRequest request); [OperationContract] Response<Customer> GetSiteCustomers(CustomerRequest request); } And here is my service instance public class RDCService : IRDCService { ICustomerService customerService; public RDCService() { //We have to locate the instance from structuremap manually because web services *REQUIRE* a default constructor customerService = ServiceLocator.Locate<ICustomerService>(); } public Response<Customer> GetCustomer(CustomerRequest request) { return customerService.GetCustomer(request); } public Response<Customer> GetSiteCustomers(CustomerRequest request) { return customerService.GetSiteCustomers(request); } } The configuration for the web service (server side) looks like this: <system.serviceModel> <diagnostics> <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <services> <service behaviorConfiguration="MySite.Web.Services.RDCServiceBehavior" name="MySite.Web.Services.RDCService"> <endpoint address="http://localhost:27433" binding="wsHttpBinding" contract="MySite.Common.Services.Web.IRDCService"> <identity> <dns value="localhost:27433" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MySite.Web.Services.RDCServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true"/> <dataContractSerializer maxItemsInObjectGraph="6553600" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Here is what my request object looks like [DataContract] public class CustomerRequest : RequestBase { [DataMember] public int Id { get; set; } [DataMember] public int SiteId { get; set; } } And the RequestBase: [DataContract] public abstract class RequestBase : IRequest { #region IRequest Members [DataMember] public int PageSize { get; set; } [DataMember] public int PageIndex { get; set; } #endregion } And my IRequest interface public interface IRequest { int PageSize { get; set; } int PageIndex { get; set; } } And I have a wrapper class around my service calls. Here is the class. public class MyService : IMyService { IRDCService service; public MyService() { //service = new MySite.RDCService.RDCServiceClient(); EndpointAddress address = new EndpointAddress(APISettings.Default.ServiceUrl); BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None); binding.TransferMode = TransferMode.Streamed; binding.MaxBufferSize = 65536; binding.MaxReceivedMessageSize = 4194304; ChannelFactory<IRDCService> factory = new ChannelFactory<IRDCService>(binding, address); service = factory.CreateChannel(); } public Response<Customer> GetCustomer(CustomerRequest request) { return service.GetCustomer(request); } public Response<Customer> GetSiteCustomers(CustomerRequest request) { return service.GetSiteCustomers(request); } } and finally, the response object. [DataContract] public class Response<T> { [DataMember] public IEnumerable<T> Results { get; set; } [DataMember] public int TotalResults { get; set; } [DataMember] public int PageIndex { get; set; } [DataMember] public int PageSize { get; set; } [DataMember] public RulesException Exception { get; set; } } So, when I build my CustomerRequest object and pass it in, for some reason it's hitting the server as an empty request. Any ideas why? I've tried upping the object graph and the message size. When I debug it stops in the wrapper class with the 400 error. I'm not sure if there is a serialization error, but considering the object contract is 4 integer properties I can't imagine it causing an issue.
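One detail worth noting from the snippets above: the service endpoint is configured with wsHttpBinding, while the wrapper class builds its channel with BasicHttpBinding and a streamed transfer mode, and a binding mismatch like that can surface on the server as an empty or unreadable message body. Purely as a hedged sketch (not a confirmed fix), a client channel that mirrors the server configuration shown above would look something like this:

using System.ServiceModel;
using MySite.Common.Services.Web;

public static class RdcClientFactory // illustrative helper, not part of the original code
{
    public static IRDCService Create()
    {
        // Match the server side: binding="wsHttpBinding", address="http://localhost:27433"
        var binding = new WSHttpBinding();
        var address = new EndpointAddress("http://localhost:27433");
        var factory = new ChannelFactory<IRDCService>(binding, address);
        return factory.CreateChannel();
    }
}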

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, which means when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK.  But the problem is: how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user’s machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I had done to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, let’s say a 6GB HD movie, after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limit on request length, and the maximum request length is defined in the web.config file. It’s a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of both ASP.NET and the Windows Azure load balancer. It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called “Blob”, which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunk upload. 
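As a side note on the ASP.NET request length limit mentioned above: those limits live in web.config, and even at their maximum values they cannot cover a multi-GB upload, which is one reason chunking is the better route. The settings below are illustrative values only, not taken from the original post:

<!-- classic ASP.NET limit, value in KB (here roughly 100MB) -->
<system.web>
  <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
</system.web>
<!-- IIS 7+ request filtering limit, value in bytes, capped just under 4GB -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>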
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • CodePlex Daily Summary for Sunday, November 11, 2012

    CodePlex Daily Summary for Sunday, November 11, 2012Popular ReleasesZXMAK2: Version 2.7.2.0: show extended rzx error info fix reset lag for PROFI ULA 5.xx fix reset behavior fix PROFI ULA timings (thanks to solegstar) fix #FF port for PROFI ULA add ATM710 memory module add new predefined machine configs: ATM Turbo 2, PROFI 3.XX???????: Monitor 2012-11-11: This is the first releaseVidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: Increase loading speed Thumbnail image with ratio Support personal customization: mouse up, mouse down, mouse hover, selected item... Scroll to show all items Image viewer Zoom by scroll, or selected rectangle Speed up loading Zoom to cursor point New background design and customization and others... v1.5.4430.483 Thumbnail bar: Auto move scroll bar to selected image Show / Hi...Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: A fix for the Netflix example from Chapter 6 that was missing a service reference A fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and savingmyCollections: Version 2.3.2.0: New in this version : Added TheGamesDB.net API for Games and NDS Added Support for Windows Media Center Added Support for myMovies Added Support for XBMC Added Support for Dune HD Added Support for Mede8er Added Support for WD HDTV Added Fast search options Added order by Artist/Album for music You can now create covers and background for games You can now update your ID3 tag with the info of myCollections Fixed several provider Performance improvement New Splash ...Draw: Draw 1.0: Drawing PadPlayer Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: List of breaking changes from preview 7 Ability to move control panel or individual elements outside media player. more info... New Entertainment app theme for out of the box support for Windows 8 Entertainment app guidelines. more info... VSIX reference names shortened. Allows seeing plugin name from "Add Reference" dialog without resizing. FreeWheel SmartXML now supports new "Standard" event callback type. Other minor misc fixes and improvements ADDITIONAL DOWNLOADSSmo...WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible for .Net 2 framework, from C# (recommended), VB.Net to C++ and Java. 
Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...Umbraco CMS: Umbraco 4.10.0: NugetNuGet BlogRead the release blog post for 4.10.0. Whats newMVC support New request pipeline Many, many bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. Breaking changesWe have done all we can not to break backwards compatibility, but we had to do some minor breaking changes: Removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days) U4-690 DynamicNode ChildrenAsList was fixed, altering it'...SQL Server Partitioned Table Framework: Partitioned Table Framework Release 1.0: SQL Server 2012 ReleaseSharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.D3D9Client: D3D9Client R7: New release for Orbiter 2010-P1 - Added horizon/sun angle for night-lights into the configuration file (default 10deg) - Some runway lights related bugs are fixed - Added more configuration options for runway lightsFiskalizacija za developere: FiskalizacijaDev 1.2: Verzija 1.2. je, prije svega, odgovor na novu verziju Tehnicke specifikacije (v1.1.) koja je objavljena prije nekoliko dana. Pored novosti vezanih uz (sitne) izmjene u spomenutoj novoj verziji Tehnicke dokumentacije, projekt smo prošili sa nekim dodatnim feature-ima od kojih je vecina proizašla iz vaših prijedloga - hvala :) Novosti u v1.2. su: - Neusuglašenost zahtjeva (http://fiskalizacija.codeplex.com/workitem/645) - Sample projekt - iznosi se množe sa 100 (http://fiskalizacija.codeplex.c...MFCMAPI: October 2012 Release: Build: 15.0.0.1036 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeDictationTool: DictationCool-WPF: • Open a media file to start a new dication. • Open a dct file to continue a dictation. • Compare your dictation with original text if exists. • Save your dictation to dct file, and restore it to continue later. • Save the compared result to html file.MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6. 
Added support for long description atom in iTunesDyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is a Initial Release.HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.Window Manager: Window Manager 1.0: First releaseNew Projectsarteytex: este es una prueba blockworld: An implementation of a goal stack planner.Customer Note: customer note is windows store applicationDraw: ?????????????:??????、CAD??、????。Football Management: Football Management System is web management system for football (soccer) leagues, teams and players. Hijri Converter API: This project is aimed to create a simple RESTful API using VB and ASP.NET to do Hijri-to-Gregorian and Gregorian-to-Hijri conversion.httpclient?????????: httpclient?????????(1)??????????(2)?????????(3)??2012-11-06??,???????。 Imagine Cup 2013: Develop project to Imagine Cup 2013MyAppReji: MyAppN2F Request: The N2F Request object is used to handle interactions between N2F and the global $_REQUEST variable, sanitizing any results which are returned.Orchard Metro Theme: Orchard Metro Theme is a clean and flexible multi-zone theme.Poker Clock And Goodies: poker w8ProjectASPReviewer: Review website for notebooks, tablets and smartphones.Prototype: Its about making an proto type for the final project.Prototype - 7COM0207: 7COM0207 web scripting module, Assignment 2QuickToAD: QuickToAD is a foundational development project for the purpose of jump-starting data-driven application projects.Release Manager: Release Manager is a project to design and develop Windows based Release Management Software.ResW File Code Generator: A Visual Studio 2012 Custom Tool for generating a strongly typed helper class for accessing localized resources from a .ResW file.SEO Tools: This is a website containing some commonly used SEO tools. I have only added a blog ping utility at this time but there is more to come. Thales communicator: A C# library that helps communicate with Thales HSMTrivial: A trivia framework: Trivial is a C# framework that helps you creating custom trivia-like applications.

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles using Windows Communication Foundation (WCF) to develop client and server applications and this is the first part of that series. What is WCF As Juwal puts it in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows, provides a runtime environment to expose CLR types as services and consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security etc., such that it greatly increases productivity. Scenario: Let's consider a typical bank customer trying to create an account, deposit an amount and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, data access layer, unit tests to verify end to end communication etc.; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes. In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide functionality to execute various pieces of business logic on the server, and clients provide interaction to the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code. Solution Services Profile: For creating a new bank customer, getting details about an existing customer ProfileContract ProfileService Checking Account: To get checking account balance, deposit or withdraw an amount CheckingAccountContract CheckingAccountService Savings Account: To get savings account balance, deposit or withdraw an amount SavingsAccountContract SavingsAccountService ServiceHost: To host services, i.e. running the services at a particular address, binding and contract where clients can connect Client: Helps the end user use services like creating an account and transferring amounts between the accounts BankDAL: Data access layer to work with the database     BankDAL It's a no-brainer to use an ORM, as many mature products are currently available in the market including Linq2Sql, Entity Framework (EF), LLblGenPro etc. For this exercise I am going to use Entity Framework 4.0, CTP 5 with the code first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create tables and table constraints. In previous versions the entity classes had to derive from EF specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: the Customer entity, which contains customer details; and CheckingAccount and SavingsAccount to hold the respective account balances. 
We could have introduced an Account base class for CheckingAccount and SavingsAccount which is certainly possible with EF mappings but to keep it simple we are just going to follow 1 –1 mapping between entity and table mappings. Lets start out by defining a class called Customer which will be mapped to Customer table, observe that the class is simply a plain old clr object (POCO) and has no reference to EF at all. using System;   namespace BankDAL.Model { public class Customer { public int Id { get; set; } public string FullName { get; set; } public string Address { get; set; } public DateTime DateOfBirth { get; set; } } }   In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in database. EF uses convention over configuration to generate the metadata resulting in much less configuration. using System.Data.Entity;   namespace BankDAL.Model { public class BankDbContext: DbContext { public DbSet<Customer> Customers { get; set; } } }   Entity constrains can be defined through attributes on Customer class or using fluent syntax (no need to muscle with xml files), CustomerConfiguration class. By defining constrains in a separate class we can maintain clean POCOs without corrupting entity classes with database specific information.   using System; using System.Data.Entity.ModelConfiguration;   namespace BankDAL.Model { public class CustomerConfiguration: EntityTypeConfiguration<Customer> { public CustomerConfiguration() { Initialize(); }   private void Initialize() { //Setting the Primary Key this.HasKey(e => e.Id);   //Setting required fields this.HasRequired(e => e.FullName); this.HasRequired(e => e.Address); //Todo: Can't create required constraint as DateOfBirth is not reference type, research it //this.HasRequired(e => e.DateOfBirth); } } }   Any queries executed against Customers property in BankDbContext are executed against Cusomers table. By convention EF looks for connection string with key of BankDbContext when working with the context.   We are going to define a helper class to work with Customer entity with methods for querying, adding new entity etc and these are known as repository classes, i.e., CustomerRepository   using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CustomerRepository { private readonly IDbSet<Customer> _customers;   public CustomerRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _customers = bankDbContext.Customers; }   public IQueryable<Customer> Query() { return _customers; }   public void Add(Customer customer) { _customers.Add(customer); } } }   From the above code it is observable that the Query methods returns customers as IQueryable i.e. customers are retrieved only when actually used i.e. iterated. Returning as IQueryable also allows to execute filtering and joining statements from business logic using lamba expressions without cluttering the data access layer with tens of methods.   
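The account entities themselves are not listed in the post. Judging from how the repositories and services below use them (a CustomerId and a Balance) and from EF code-first key conventions, a minimal sketch might look like the following; the property names here are inferred, not copied from the original, and BankDbContext would also need matching DbSet properties (CheckingAccounts, SavingsAccounts):

using System;

namespace BankDAL.Model
{
    public class CheckingAccount
    {
        public int Id { get; set; }          // primary key by EF convention (assumed)
        public int CustomerId { get; set; }  // links the account to a Customer
        public decimal Balance { get; set; }
    }

    public class SavingsAccount
    {
        public int Id { get; set; }
        public int CustomerId { get; set; }
        public decimal Balance { get; set; }
    }
}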
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
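One small gap before the service implementations: the post says the ISavingsAccount contract looks the same as the checking account contract but does not list it. Based on the client calls shown later (GetSavingsAccountBalance, DepositAmount, WithdrawAmount), a sketch of it would presumably be:

using System.ServiceModel;

namespace SavingsAccountContract
{
    [ServiceContract]
    public interface ISavingsAccount
    {
        [OperationContract]
        decimal? GetSavingsAccountBalance(int customerId);

        [OperationContract]
        void DepositAmount(int customerId, decimal amount);

        [OperationContract]
        void WithdrawAmount(int customerId, decimal amount);
    }
}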
ProfileService implements the IProfile contract for creating customer and getting customer detail using CustomerRepository. 
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined the base address for each of the contracts. <system.serviceModel> <services> <service name="ProfileService.Profile"> <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Profile"/> </baseAddresses> </host> </service> <service name="CheckingAccountService.Checking"> <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Checking"/> </baseAddresses> </host> </service> <service name="SavingsAccountService.Savings"> <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1000/Savings"/> </baseAddresses> </host> </service> </services> </system.serviceModel> We have to open the services by creating a service host which will handle the incoming requests from clients.   using System;   namespace ServiceHost { class Program { static void Main(string[] args) { CreateHosts(); Console.ReadLine(); }   private static void CreateHosts() { CreateHost(typeof(ProfileService.Profile),"Profile Service"); CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service"); CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service"); }   private static void CreateHost(Type type, string hostDescription) { System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type); host.Open();   if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0 && host.ChannelDispatchers[0].Listener != null) Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri); else Console.WriteLine("Failed to start:" + hostDescription); } } } BankClient    The client has no knowledge about service business logic other than the functionality it exposes through the contract, endpoints and a proxy to work against. The endpoint data and server proxy can be generated by right-clicking on the project reference and choosing ‘Add Service Reference’ and entering the service endpoint address. Or if you have access to the source, you can manually reference contract dlls and update the client's configuration file to point to the service endpoint, if the server and client happen to be built using the .NET Framework. One of the pros with the manual approach is you don't have to work against messy code-generated files.   
BankClient

The client has no knowledge of the service's business logic other than the functionality exposed through the contract, the endpoints and a proxy to work against. The endpoint data and the service proxy can be generated by right-clicking the project's References node, choosing 'Add Service Reference' and entering the service endpoint address. Alternatively, if you have access to the source and both server and client are built on the .NET Framework, you can reference the contract dlls manually and update the client's configuration file to point to the service endpoints. One of the advantages of the manual approach is that you don't have to work with messy generated code files.

<system.serviceModel>
  <client>
    <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile"
              binding="netTcpBinding" contract="ProfileContract.IProfile"/>
    <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking"
              binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
    <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings"
              binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
  </client>
</system.serviceModel>

The client uses a façade to connect to the services.

using System.ServiceModel;
using CheckingAccountContract;
using ProfileContract;
using SavingsAccountContract;

namespace Client
{
    public class ProxyFacade
    {
        public static IProfile ProfileProxy()
        {
            return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel();
        }

        public static ICheckingAccount CheckingAccountProxy()
        {
            return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount"))
                .CreateChannel();
        }

        public static ISavingsAccount SavingsAccountProxy()
        {
            return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount"))
                .CreateChannel();
        }
    }
}
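One thing to be aware of: the façade hands out raw channels and never closes them (we return to instance management later). If a caller wants to release the underlying net.tcp connection eagerly, the proxy can be cast to ICommunicationObject and closed after use. The small helper below is only an illustration under that assumption and is not part of the sample solution.

using System;
using System.ServiceModel;

public static class ChannelUsageSample
{
    // Illustrative helper: run one operation against a channel, then close (or abort) it.
    public static void Use<TChannel>(TChannel channel, Action<TChannel> operation)
    {
        var comm = (ICommunicationObject)channel;
        try
        {
            operation(channel);
            comm.Close();          // graceful shutdown of the net.tcp session
        }
        catch
        {
            comm.Abort();          // tear the channel down if the call or Close failed
            throw;
        }
    }
}

// Usage (assuming the ProxyFacade above and a customerId variable):
// ChannelUsageSample.Use(ProxyFacade.SavingsAccountProxy(),
//     p => p.DepositAmount(customerId, 100));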
With that in place, let's get our unit tests going.

using System;
using System.Diagnostics;
using BankDAL.Model;
using NUnit.Framework;
using ProfileContract;

namespace Client
{
    [TestFixture]
    public class Tests
    {
        private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount)
        {
            ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);
        }

        private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount)
        {
            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount);
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount);
        }

        [Test]
        public void CreateAndGetProfileTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            const string customerName = "Tom";
            int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id;
            Customer customer = profile.GetCustomer(customerId);
            Assert.AreEqual(customerName, customer.FullName);
        }

        [Test]
        public void DepositWithDrawAndTransferAmountTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss");
            var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1));

            // Deposit to Savings
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100);
            ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25);
            Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));

            // Deposit to Checking
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60);
            ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40);
            Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
            // Withdraw
            ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30);
            Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Savings to Checking
            TransferFundsFromSavingsToCheckingAccount(customer.Id, 10);
            Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));

            // Transfer from Checking to Savings
            TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
            Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
            Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
        }

        [Test]
        public void FundTransfersWithOverDraftTest()
        {
            IProfile profile = ProxyFacade.ProfileProxy();
            string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

            var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

            ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
            TransferFundsFromSavingsToCheckingAccount(customerId, 80);
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
            Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

            try
            {
                TransferFundsFromSavingsToCheckingAccount(customerId, 30);
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.Message);
            }

            Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
            Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        }
    }
}

We are creating a new instance of the channel for every operation; we will look into instance management, and into how creating a new channel instance for each call affects it, in subsequent articles. The first two test cases deal with creating a customer and with depositing, withdrawing and transferring money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 into the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking; this raises an overdraft exception on the savings account, but the $30 has already been deposited into checking. Because the two requests are not run inside a transaction, the customer ends up with $130 across both accounts, more than the $100 he started with. In subsequent posts we will look into transaction handling; a rough sketch of a transactional version of the transfer helper follows below. Make sure the ServiceHost project is set as the startup project and start the solution, then run the test cases from the NUnit client or from TestDriven.Net/Resharper, whichever is your favourite tool. Also make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
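To give a rough idea of that transaction handling, the transfer helper in the test fixture could wrap both proxy calls in a System.Transactions.TransactionScope so that the deposit is rolled back when the withdrawal overdrafts. Note that this only works once transaction flow is enabled on the binding and the service operations (for example transactionFlow on netTcpBinding plus [TransactionFlow] and [OperationBehavior(TransactionScopeRequired = true)] on the service side), none of which the sample above has configured yet; treat the snippet as a sketch of the idea rather than part of the sample.

using System.Transactions;

// Sketch only: a transactional variant of the helper inside the Tests fixture.
// Requires transaction flow to be enabled on the binding and the operations;
// without that, the scope will not protect the two calls.
private void TransferFundsFromSavingsToCheckingAccountTransactional(int customerId, decimal amount)
{
    using (var scope = new TransactionScope())
    {
        ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount);
        ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount);

        // Both calls succeeded; commit. If WithdrawAmount throws (overdraft),
        // the scope is disposed without Complete() and the deposit is rolled back.
        scope.Complete();
    }
}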


  • CodePlex Daily Summary for Monday, November 12, 2012

CodePlex Daily Summary for Monday, November 12, 2012

Popular Releases
- AX 2012 Custom Operating Units: AXPOM (beta): This is a beta version of the tool. There are some known issues that will be fixed in the next upcoming release.
- ????: ???? 1.0: ????
- Unicode IVS Add-in for Microsoft Office: Unicode IVS Add-in for Microsoft Office ??? ?????、Unicode IVS?????????????????Unicode IVS???????????????。??、??????????????、?????????????????????????????。
- EXCEL??、??、????????: DataPie(??MSSQL 2008、ORACLE、ACCESS 2007): DatePie3.4.2: DatePie3.4.2
- Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.74: Fix for issue #18836 - sometimes throws null-reference errors in ActivationObject.AnalyzeScope method. Add back the Context object's 8-parameter constructor, since someone has code that's using it. Throw a low-pri warning if an expression statement is == or ===; warn that the developer may have meant an assignment (=). If window.XXXX or window"XXXX" is encountered, add XXXX (as long as it's a valid JavaScript identifier) to the known globals so subsequent references to XXXX won't throw ...
- Home Access Plus+: v8.3: Changes: Fixed an odd scroll bar showing up for no reason. Added some code to hopefully sort out the details view when there is a small number of files. Fixed the details toolbar showing in the wrong place until you scroll. Fixed the Help Desk live tile showing all open tiles instead of user-specific tiles (admins still see all). Added: Powerpoint Files Filter. Added: Print style for Booking System. Added: Silent check for the logon tracker. Updated: Logon Tracker I...
- Silverlight 4 & 5 Persian DatePicker: Silverlight 4 and 5 Persian DatePicker 1.5: Improved DateTime Parser.
- ???????: Monitor 2012-11-11: This is the first release.
- VidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...
- ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: increased loading speed, thumbnail image with ratio, personal customization (mouse up, mouse down, mouse hover, selected item...), scroll to show all items. Image viewer: zoom by scroll or selected rectangle, faster loading, zoom to cursor point, new background design and customization and others... v1.5.4430.483 Thumbnail bar: auto move scroll bar to selected image, show / hi...
- Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes a fix for the Netflix example from Chapter 6 that was missing a service reference, and a fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and saving.
- myCollections: Version 2.3.2.0: New in this version: Added TheGamesDB.net API for Games and NDS. Added support for Windows Media Center, myMovies, XBMC, Dune HD, Mede8er and WD HDTV. Added fast search options. Added order by Artist/Album for music. You can now create covers and backgrounds for games. You can now update your ID3 tag with the info of myCollections. Fixed several providers. Performance improvements. New Splash ...
- Draw: Draw 1.0: Drawing Pad
- Player Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: list of breaking changes from preview 7. Ability to move the control panel or individual elements outside the media player. New Entertainment app theme for out-of-the-box support for Windows 8 Entertainment app guidelines. VSIX reference names shortened, allowing the plugin name to be seen from the "Add Reference" dialog without resizing. FreeWheel SmartXML now supports the new "Standard" event callback type. Other minor misc fixes and improvements. ADDITIONAL DOWNLOADS: Smo...
- WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible with the .NET 2 framework, from C# (recommended) and VB.Net to C++ and Java. Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...
- Umbraco CMS: Umbraco 4.10.0: NuGet, NuGet Blog. Read the release blog post for 4.10.0. What's new: MVC support, new request pipeline, many, many bugfixes (see the issue tracker for a complete list); read the documentation for the MVC bits. Breaking changes: we have done all we can not to break backwards compatibility, but we had to make some minor breaking changes: removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days); U4-690 DynamicNode ChildrenAsList was fixed, altering it'...
- SQL Server Partitioned Table Framework: Partitioned Table Framework Release 1.0: SQL Server 2012 Release
- SharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.
- Fiskalizacija za developere: FiskalizacijaDev 1.2: Version 1.2 is, above all, a response to the new version of the Technical Specification (v1.1) that was published a few days ago. Besides the changes tied to the (minor) updates in that new version of the Technical Documentation, we have extended the project with some additional features, most of which came from your suggestions - thanks :) New in v1.2: requirements mismatch (http://fiskalizacija.codeplex.com/workitem/645); sample project - amounts are multiplied by 100 (http://fiskalizacija.codeplex.c...
- MFCMAPI: October 2012 Release: Build: 15.0.0.1036. Full release notes at SGriffin's blog. If you just want to run MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64-bit builds will only work on a machine with Outlook 2010 64-bit installed. All other machines should use the 32-bit builds, regardless of the operating system.

New Projects
- .NET C# Wrapper for IQFeed API by Sentinel: An API for DTN IQFeed written in C#. Supports: Historical Data Requests, Lookup Tables, Level 1, Level 2, News.
- BaiduMap: ????????API?????
- Bananagrams: An experimental Java project to find the most efficient strategy of playing Bananagrams, the popular game that I take absolutely no credit for inventing.
- Cloud Wallet: Don't try to remember every credit card, email, forum and account password of yours. Store them with Cloud Wallet in SkyDrive and get them when needed.
- Customer Contact System: System for Local Authorities/Government Bodies to manage enquiries from members of the public in their administrative area.
- daniel's little electronical bear: Hey, this is Daniel's idea to write his girlfriend a vivid little e-pet. So Daniel is just about to carry on this plan and hopefully fulfill it sooner or later. If you wanna give any advice, please don't hesitate to call me at any time, even ideas are OK! danielzhang0212@gmail.com
- facebook page vote and like sample: facebook page vote and like sample
- Friendly URL: This application is designed to help large organizations assign simple and memorable URLs to otherwise complicated or non-memorable URLs.
- GameSDK - Simple Game SDK with events: This game SDK uses events to let each player know, through a game board, when an event occurs. This is a simple SDK, easily adaptable to your project.
- GIF o Magic Prototype: GIF o Magic Prototype
- HFS+ Driver Installer for Windows: Simple utility to install a read-only HFS+ driver to read Mac partitions from Windows.
- invrt: Simplest-that-could-possibly-work inversion of control framework. Written in C#.
- Kwd.Summary: Orchard module to provide alternate summary text for content items.
- LogJam: LogJam will provide a modern, efficient, and productive framework for trace logging and metrics logging for all projects. LogJam is currently pre-alpha.
- Mi-DevEnv: Mi-DevEnv is an integrated set of PowerShell scripts to create virtual machines and populate them with software and tools, e.g. SharePoint, CRM, SQL, Office, K2 and VS.
- Mltools: Mini tools for browser games.
- More Space Invaders: More than another Space Invaders.
- MP3 Art Embedder: Embed cover art into MP3 files.
- Multiple GeoCoder: Server-side geocoding wrapper for various geocoding services. Supports a failover in the event you get throttled.
- My list: MVC3 project to test some frameworks.
- MyFinalProject_WeamBinjumah: A website based on Web 2.0 about sea cruise experiences, for people who love amazing sea cruises. The website will offer most Web 2.0 features.
- netcached - memcached client for .NET: netcached is a lightweight, high performance .NET client for the memcached protocol.
- paipai: paipai
- Projeto Zeus: Project carried out as academic coursework.
- Roy's Tools: Roy's Tools
- SuperCotka: SuperCotka
- Szoftvertechnológia beadandó: Project management page for Balázs Árva's assignment for the Software Technology course at ELTE.
- Twi.Fly: Twi.Fly is designed to change the way you write most of your code. Coding is like flying, at more than twice the speed.
- Unicode IVS Add-in for Microsoft Office: "Unicode IVS Add-in for Microsoft Office" makes Microsoft Office 2007 and 2010 capable of loading, saving and editing documents that contain Unicode IVS.
- Uyghur Named Date: Generate Uyghur named date strings. ???????? ??? ?????? ????? ????? ?????
- Windows 8 Camp BH: Project containing help content for the Windows 8 Camp offered by the Microsoft Innovation Center of Belo Horizonte.
- ???????: ??wpf??????? ?? ??

