Search Results

Search found 587 results on 24 pages for 'seek'.


  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog – but particularly his article on dynamic search conditions in T-SQL. I’ve linked above to his SQL 2005 article but his 2008 version is also a must-read. I seem to regularly come across uses of the SQL in the title above… Erland’s article explains in detail why this is inefficient, but I came across a nice example recently… A stored procedure contained the following code: WHERE @Name IS NULL OR [Name] LIKE @Name As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO: Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Note the high number of logical reads… After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the @x IS NULL condition was spurious… So, we changed the call to WHERE [Name] LIKE @Name Now, how much more efficient is this code? Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0 A nice easy win in this case… a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks – here’s a quick query to demonstrate: USE AdventureWorks SET STATISTICS IO ON DECLARE @id INT = 51721 SELECT * FROM Sales.SalesOrderDetail WHERE @id IS NULL OR salesorderid = @id SELECT * FROM Sales.SalesOrderDetail WHERE salesorderid = @id Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland’s advice – here’s how you could get similar performance if it were possible that @id could actually sometimes contain NULL: DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000) DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1' IF @id IS NOT NULL SET @sql = @sql + ' AND salesorderid = @id' IF @id IS NULL SET @sql = @sql + ' AND salesorderid IS NULL' SET @parameterlist = '@id INT' EXEC sp_executesql @sql, @parameterlist, @id Sometimes I think we focus too much on hardware and SQL Server configuration – when really the answer is to focus on writing efficient SQL.

    Read the article

  • Ripping a home video VCD on Linux or Windows with VLC or otherwise

    - by user259774
    I have a VCD with 22 minutes of video on it. I would like to retain this footage and throw away the VCD. I can play the whole thing with VLC ("open disc - vcd - /dev/sr0 - play"): all 22 minutes of the main track. I don't believe there's any other content aside from the main track. I can seek to anywhere I want within the 22-minute track. If I mount /dev/sr0 /media/vcd and then try to copy the only file from the MPEGAV folder, I get an I/O error, with an empty destination file. VLC has a "convert" option in addition to "play". When I use this I actually get a good OGG file back, after it runs through the video in painful real-time. I guess it dubs it frame-by-frame. But the file is only 10 minutes long, leaving 12 minutes off of the track. Handbrake doesn't detect its track titles, unfortunately. I don't know if I should start getting involved with GNU ddrescue or if it's because VCDs somehow encode their data sectors differently. Anyway, I'm in way over my head, and if anyone knows how I could get that video track off the thing, feel free to share! Edit: I should note that I also have access to a Windows computer.
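
    A hint for anyone hitting the same wall: VCD data tracks are recorded in CD-ROM Mode 2 sectors, which is why copying the .DAT file through the mounted ISO9660 view tends to fail with I/O errors even on a healthy disc, and why rippers that read the track raw do better. A rough sketch of two approaches, driven from Python (the vcdxrip option shown is an assumption, so check --help on your build):

        import subprocess

        # Option 1: vcdxrip (from the vcdimager package) understands the VCD track
        # layout and writes the MPEG stream(s) out directly.
        # NOTE: the device flag is an assumption; verify with `vcdxrip --help`.
        subprocess.run(["vcdxrip", "--cdrom-device=/dev/sr0"], check=True)

        # Option 2: image the whole device with GNU ddrescue, which retries and maps
        # unreadable regions instead of aborting, then open the image in VLC.
        subprocess.run(["ddrescue", "/dev/sr0", "vcd.img", "vcd.map"], check=True)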

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than 1 LV on a single tape. I guess I could seek around the tape to concatenate the data, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE So I've tried this: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0 Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
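
    One way to get several LVs plus their metadata onto a single cartridge without scratch space is to write each item as its own tape file on the non-rewinding device /dev/nst0: the drive writes a filemark every time the device is closed, and mt fsf skips between files later. A rough sketch under those assumptions (LV names are placeholders; note the read side must use the same 64k block size, e.g. dd if=/dev/nst0 bs=64k, which may also explain the stalled read-back above):

        import subprocess

        TAPE = "/dev/nst0"                    # non-rewinding: each close() drops a filemark
        LVS = ["/dev/VG/vm1", "/dev/VG/vm2"]  # placeholder LV paths

        subprocess.run(["mt", "-f", TAPE, "rewind"], check=True)
        for lv in LVS:
            name = lv.rsplit("/", 1)[-1]
            # tape file N: a tiny tar holding the XML description (assumed to exist)
            subprocess.run(["tar", "-cf", TAPE, f"{name}.xml"], check=True)
            # tape file N+1: the raw LV, buffered so the drive streams steadily
            dd = subprocess.Popen(["dd", f"if={lv}", "bs=64k"], stdout=subprocess.PIPE)
            subprocess.run(["mbuffer", "-m", "512M", "-s", "64k", "-o", TAPE],
                           stdin=dd.stdout, check=True)
            dd.wait()
        # restore later with: mt -f /dev/nst0 rewind; mt -f /dev/nst0 fsf <n>; dd bs=64k ...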

    Read the article

  • With Ubuntu 9.10, my DVD keeps spinning up

    - by Ken
    I have Ubuntu 9.10 on my Intel Mac Mini. When there's a DVD in the drive, and even if there's no program open at all (just looking at the desktop), every minute or so the disc spins up with a loud whirring noise, and I can hear it cranking the motor to seek across the disc. How do I find out what's causing this? And how can I make it stop? Thanks! EDIT: I straced nautilus, and saw nothing it's doing directly, even when the disc spins up. It does poll inotify regularly, but I don't know how to trace what it's watching, or if that's even how it receives disc-inserted notifications. It doesn't call inotify_add_watch when I insert a disc or mount it (or eject or umount), but it could be watching all of /dev already or something like that. Of course, a DVD is mounted read-only, so whether it's inotify or something else, it should never need to poll anything on that. And if it is inotify, it's happening in the kernel, and the kernel should really never need to poll a device it's mounted to check for notifications.

    Read the article

  • Formatting an external HDD stuck at 70%

    - by mahmood
    My external HDD, a 250GB WD (powered by USB), seems to have a problem! Whenever I try to copy some files, it gets stuck while copying. I decided to format it. So I used the Windows tool and performed the format (not quick); however, at nearly 70% it got stuck. Then I decided to perform a low-level format with lowlevel. Again it got stuck at 70%. I ended up concluding that the HDD has bad sectors. So is there any tool that marks the bad sectors and bypasses them? It is not very reasonable to throw away 250GB because of some bad sectors! P.S.: I saw a similar topic, but there was no conclusion there either. The SMART data is (attribute, raw value, value, threshold, status):
        Read Error Rate, 50, 200, 51, OK
        Spin-Up Time, 3275, 154, 21, OK
        Start/Stop Count, 2729, 98, 0, OK
        Reallocated Sectors Count, 0, 200, 140, OK
        Seek Error Rate, 0, 100, 51, OK
        Power-On Hours (POH), 1057, 99, 0, OK
        Spin Retry Count, 0, 100, 51, OK
        Recalibration Retries, 0, 100, 51, OK
        Power Cycle Count, 1385, 99, 0, OK
        Power-off Retract Count, 425, 200, 0, OK
        Load/Unload Cycle Count, 12974, 196, 0, OK
        Temperature, 43, 43, 0, OK
        Reallocation Event Count, 0, 200, 0, OK
        Current Pending Sector Count, 23, 200, 0, Degradation
        Uncorrectable Sector Count, 0, 100, 0, OK
        UltraDMA CRC Error Count, 6, 200, 0, OK
        Write Error Rate/Multi-Zone Error Rate, 0, 100, 51, OK
    It seems that the most important line is: Current Pending Sector Count, 23, 200, 0, Degradation. Any idea on that?

    Read the article

  • Concatenating ogg video files from the command line

    - by Noufal Ibrahim
    Okay. I've got a few Ogg files I've created using a desktop recording tool. I've transcoded them using ffmpeg once (mainly to clip out the beginnings and the ends). Now, I have 3 such files which I want to concatenate into a single .ogv file. I tried using oggCat; it crashed with some kind of error (I tried concatenating a file to itself using oggCat and that failed too, leading me to believe that my distro is shipping a broken version of the package). Simply cat-ing the files works, but then I can't seek, which is not cool. I ran mencoder like this: mencoder -ovc lavc -oac lavc file1.ogv file2.ogv file3.ogv -o complete.ogv It transcodes the files into an AVI and clips off a little of the 3 videos. So, how do I do this? Update 1: My current workaround is to transcode the 3 files into .mpg using ffmpeg, then cat them together and transcode them back into ogv. Update 2: PiTiVi works for this kind of thing, but I need something from the command line that I can automate and script.
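
    For what it's worth, the Update 1 workaround is easy to script. A minimal sketch (file names assumed; no quality flags, so expect some generation loss from the double transcode):

        import subprocess

        parts = ["file1.ogv", "file2.ogv", "file3.ogv"]

        # 1. Transcode each Ogg file to MPEG-PS, which tolerates naive concatenation.
        for p in parts:
            subprocess.run(["ffmpeg", "-i", p, p.replace(".ogv", ".mpg")], check=True)

        # 2. Concatenate the MPEG files byte-wise (the cat step).
        with open("all.mpg", "wb") as out:
            for p in parts:
                with open(p.replace(".ogv", ".mpg"), "rb") as f:
                    out.write(f.read())

        # 3. Transcode back to Ogg; this rebuilds the headers so seeking works again.
        subprocess.run(["ffmpeg", "-i", "all.mpg", "complete.ogv"], check=True)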

    Read the article

  • About to go live: virtual dedicated server or cloud?

    - by morpheous
    I am about to launch my startup company, and we will be going live in a few weeks' time. We have really tight budgetary constraints, since we are bootstrapping, and would prefer not to raise external capital. I can't use shared hosting because I need more control of the server machine (for technical reasons - e.g. using proprietary extensions to PHP, Apache and in the database layer as well) - but I want to control costs and don't want to go the fully private server route until we have determined the market size etc. So the only real choice AFAIK is between a virtual server and the cloud. At the moment, cloud services seem a bit "vague" to me. My understanding is that they allow an entity to outsource its IT infrastructure, which in my mind (at least) is indistinguishable from what a hosting provider provides (at least from a functional point of view) - I would like to seek some clarification on exactly what the difference between the two is. Back to my original question, my requirements are:
        - IT infrastructure that can scale with growth
        - Ability to have control of the machine (e.g. to install our internally developed libraries etc)
        - Backup software that is flexible and comprehensive enough (yet simple to use) that a (secured) backup strategy can be implemented
    On this issue, I have always wondered where the actual backed-up data is stored (since the physical machines are remote, and one can't get access to any actual tapes etc. that the data is backed onto). I would also like some advice and recommendations in this area. Regarding data size, I am expecting the dataset to be increasing by a few megabytes of data (originally, say, 10MB; in about a year's time, possibly 50MB) every day. As an aside, I have decided to deploy on a Debian server (most of my additional libraries etc were compiled and built on a Debian machine). Mindful of all of the above, I would like some advice (and reasons) as to which route to take. I would also like some advice on which backup software to use, from people who have walked a similar path.

    Read the article

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately: Most large databases require a team of DBAs in order to scale. They constantly struggle with limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.) The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :) Finally, we've got several companies like Intel, Samsung, FusionIO, etc. that have just started selling extremely fast yet affordable SSD hard drives based on SLC Flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB). So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.

    Read the article

  • Fast extraction of a time range from syslog logfile?

    - by mike
    I've got a logfile in the standard syslog format. It looks like this, except with hundreds of lines per second: Jan 11 07:48:46 blahblahblah... Jan 11 07:49:00 blahblahblah... Jan 11 07:50:13 blahblahblah... Jan 11 07:51:22 blahblahblah... Jan 11 07:58:04 blahblahblah... It doesn't roll at exactly midnight, but it'll never have more than two days in it. I often have to extract a timeslice from this file. I'd like to write a general-purpose script for this, that I can call like: $ timegrep 22:30-02:00 /logs/something.log ...and have it pull out the lines from 22:30, onward across the midnight boundary, until 2am the next day. There are a few caveats: I don't want to have to bother typing the date(s) on the command line, just the times. The program should be smart enough to figure them out. The log date format doesn't include the year, so it should guess based on the current year, but nonetheless do the right thing around New Year's Day. I want it to be fast -- it should use the fact that the lines are in order to seek around in the file and use a binary search. Before I spend a bunch of time writing this, does it already exist?
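
    The binary-search idea is small enough to sketch out. A rough cut in Python, with the simplifications called out: dates are anchored to the file's first line (so the New Year's corner case is not handled), newlines are single bytes, and the log is assumed time-sorted:

        #!/usr/bin/env python3
        # Usage: timegrep.py 22:30-02:00 /logs/something.log
        import os, sys, time

        def stamp(line, year):
            # "Jan 11 07:48:46 ..." -> epoch seconds (syslog lines carry no year)
            return time.mktime(time.strptime("%d %s" % (year, line[:15].decode()),
                                             "%Y %b %d %H:%M:%S"))

        def line_from(f, pos):
            # Offset and content of the first complete line at or after pos.
            f.seek(pos)
            if pos:
                f.readline()              # discard the partial line we landed in
            return f.tell(), f.readline()

        def bisect_time(f, size, target, year):
            # Smallest offset whose line has stamp >= target (EOF counts as a hit).
            lo, hi = 0, size
            while lo < hi:
                mid = (lo + hi) // 2
                _, line = line_from(f, mid)
                if not line or stamp(line, year) >= target:
                    hi = mid
                else:
                    lo = mid + 1
            return line_from(f, lo)[0] if lo else 0

        def main():
            start_s, end_s = sys.argv[1].split("-")
            year = time.localtime().tm_year
            with open(sys.argv[2], "rb") as f:
                size = os.fstat(f.fileno()).st_size
                _, first = line_from(f, 0)
                t_first = stamp(first, year)
                base = time.localtime(t_first)
                h0, m0 = map(int, start_s.split(":"))
                h1, m1 = map(int, end_s.split(":"))
                t0 = time.mktime((year, base.tm_mon, base.tm_mday, h0, m0, 0, 0, 0, -1))
                if t0 < t_first:
                    t0 += 86400           # naive: assume the start lies after the first entry
                t1 = time.mktime((year, base.tm_mon, base.tm_mday, h1, m1, 0, 0, 0, -1))
                while t1 <= t0:
                    t1 += 86400           # roll the end time across midnight
                a = bisect_time(f, size, t0, year)
                b = bisect_time(f, size, t1, year)
                f.seek(a)
                sys.stdout.buffer.write(f.read(b - a))

        if __name__ == "__main__":
            main()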

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of a HDD you need to know the internal transfer rate of the drive (time it takes data to move from the platters to internal disk's cache). I went on newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a seagate HDD for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD"- is that the same thing? Just so you know where I'm getting this info (Book: "Information Storage and Management Storing, Managing..."): Consider an example with the following specifications provided for a disk: The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms. Disk rotation speed of 15,000 revolutions per minute or 250 revolutions per second — from which rotational latency (L) can be determined, which is one-half of the time taken for a full rotation or L = (0.5/250 rps expressed in ms). 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O — for example, an I/O with a block size of 32 KB; therefore X = 32 KB/40 MB. Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is (TS) = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second or IOPS is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
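
    The excerpt's arithmetic is easy to sanity-check in a few lines (numbers taken straight from the quoted example):

        # Service time of one random I/O = seek + rotational latency + transfer time
        seek_ms = 5.0                            # average seek time
        rpm = 15_000
        latency_ms = 0.5 / (rpm / 60) * 1000     # half a rotation: 0.5 / 250 rps = 2 ms
        block_kb, rate_mb_s = 32, 40
        transfer_ms = block_kb / rate_mb_s       # KB over MB/s comes out in ms (decimal units)

        ts_ms = seek_ms + latency_ms + transfer_ms                # 5 + 2 + 0.8 = 7.8 ms
        print(f"TS = {ts_ms:.1f} ms -> {1000 / ts_ms:.0f} IOPS")  # ~128 IOPS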

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting that the current pending sector count is "45". I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I attempt writing data directly to the bad sectors, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector. # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152 dd: error writing ‘/dev/sdb’: Input/output error Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:
        Model Family: Western Digital Caviar Green (AF)
        Device Model: WDC WD15EARS-00Z5B1
        Serial Number: WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity: 1,500,301,910,016 bytes [1.50 TB]
        Sector Size: 512 bytes logical/physical
        Device is: In smartctl database [for details use: -P show]
        ATA Version is: ATA8-ACS (minor revision not indicated)
        SATA Version is: SATA 2.6, 3.0 Gb/s
        Local Time is: Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled
    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions: Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sector and couldn't before. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
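
    One plausible reading of the shred-vs-dd difference: despite the 512/512 line above, the EARS family is Advanced Format (4K physical sectors behind 512-byte emulation), so a lone 512-byte write forces a read-modify-write of the physical sector, and it is the read of the damaged sector that fails; shred's large sequential writes cover whole physical sectors, so no read is needed, the data is simply rewritten in place, and the pending count can clear without any reallocation being recorded. For anyone repeating the dd dance, here is a sketch that walks a badblocks list and force-writes each LBA individually (destructive; run badblocks with -b 512 first so the numbers are sectors, and the device path is a placeholder):

        import os

        SECTOR = 512                          # logical sector size per smartctl above
        dev = os.open("/dev/sdb", os.O_WRONLY | os.O_SYNC)  # O_SYNC: flush each write

        with open("badblocks.txt") as f:      # one number per line, as badblocks -o writes
            for line in f:
                lba = int(line.split()[0])
                os.lseek(dev, lba * SECTOR, os.SEEK_SET)
                try:
                    os.write(dev, b"\x00" * SECTOR)
                except OSError as e:
                    print(f"LBA {lba}: still unwritable ({e})")
        os.close(dev)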

    Read the article

  • Ubuntu network card problem.

    - by Steve Greene
    Hello folks, Several days ago, I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity; it works fine under Windows Vista. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP (manufactured by CXT, Conexant Systems Inc.; file version 4.0.13.0, driver version 7.58.0.0). I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch? My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, with 1.50 GB RAM, and a 32-bit operating system.

    Read the article

  • Mac, VNC and multiple monitors

    - by MarqueIV
    I asked a similar question here before, but apparently I wasn't as clear as I had expected, judging by the responses. That said, I'll try again. I have a Mac Pro with quad monitors which I would like to access remotely. I've been using VNC for this (either via screen sharing or a dedicated VNC client), which works, but the VNC protocol matches the physical layout/resolutions of the attached monitors. One of the things I like about Microsoft's Remote Desktop (Terminal Server) client is that when you connect, it blanks out the local screens and sets the resolution to a client-specified setting. In other words, when natively running Windows, even though I'm running a physical 30" monitor flanked by two 24" monitors as well as a 21" Cintiq monitor, I can set the Remote Desktop resolution to match my notebook's screen, giving me a native, single-monitor configuration. As soon as I disconnect (and log back in locally), the desktop un-blanks and the resolution resets back to the four physically attached monitors. Again, VNC works, and yes, I know I can use 5901, 5902...n to attach VNC to a specific monitor as opposed to the entire desktop, but I'm still at the mercy of trying to look at a 2560x1600 resolution on a 1280x800 screen. I'm left with either scaling (everything's too small) or panning/scrolling (it's like playing hide-and-seek with your documents!) SO... does anyone know of any Mac-based remote software (client and server) that will let me connect to my Mac Pro and set the resolution from the client, just like you can in Windows, or am I SOL?

    Read the article

  • Looking for a software / something to automate some simple audio processing

    - by Daniel Magliola
    I'm looking for a way to take a 1-hour podcast MP3 file and split it into several 2-minute MP3s. Along the way, I'd like to also do a few things like amplify the volume. The problem I'm solving is that I have a crappy MP3 player that won't let me seek forward or backward, nor will it remember where I left off when I turn it off; plus, I listen to these in a seriously high-noise situation. Thus, I need to be able to skip forward in large chunks (2-5 minutes) to the point where I left off. Is there any decent way to do this? Audacity doesn't seem to have command-line capabilities. I'm willing to write some code, for example, to call something over the command line and get how long the MP3 file is, to later know how many pieces I'll have, and then say "create an MP3 with 0:00 to 2:00", "create an MP3 with 2:00 to 4:00", etc. I'm also willing to pay for the right tools if necessary. I also don't care how slow this runs, as long as I can automate it :-) I'm doing this on Windows. Any pointers / ideas? Thanks!
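
    This is scriptable with ffmpeg alone, which runs fine on Windows. A minimal sketch (the file name and the 2x gain are placeholders; ffprobe ships alongside ffmpeg):

        import math, subprocess

        SRC, CHUNK = "podcast.mp3", 120       # split into 2-minute pieces

        # Ask ffprobe for the total duration in seconds.
        probe = subprocess.run(
            ["ffprobe", "-v", "error", "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1", SRC],
            capture_output=True, text=True, check=True)
        duration = float(probe.stdout)

        # Cut chunk i = [i*CHUNK, i*CHUNK + CHUNK) and amplify it on the way out.
        for i in range(math.ceil(duration / CHUNK)):
            subprocess.run(
                ["ffmpeg", "-ss", str(i * CHUNK), "-t", str(CHUNK), "-i", SRC,
                 "-af", "volume=2.0",          # gain factor; tweak to taste
                 f"part{i:03d}.mp3"],
                check=True)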

    Read the article

  • Upgrading memory in a laptop

    - by ulidtko
    I'm a bit confused about all the memory types and various bus frequencies of modern consumer PCs. Requesting expert help on the subject. So far I'm confident that: I have an Asus X51L laptop with an unknown set of configuration options. The CPU in there supports PAE, so I still have a chance to extend the memory beyond 3GiB; and the upper limit of the system is 8GiB. (?) The laptop has two SODIMM slots, one of which is occupied by a 2GiB bank, and the other one is empty. The dmidecode and lshw tools consistently state a 533 MHz frequency for the bank. The last one confuses me the most. I failed to find out the characteristics of the northbridge in this laptop, and still can't figure out what DDR2 to seek. Is it DDR2-1066? Or, rather, PC2-8500/PC2-8600? Wouldn't a DDR2-800 bank harm the system's performance? Which kind of modules should I look for in stores? Update: I have bought a 2 GiB DDR2-800 SODIMM, and it seems that the system can't handle 4 GiB of memory. When installed by itself in either slot, both the new and the old bank (which btw happens to be marked GDDR2-677) work just perfectly; i.e. any configuration resulting in 2 GiB works. When both banks are installed though (totalling 4 GiB), the memtest86 tool produces horrible artifacts and crashes, and the system reboots; an Ubuntu system can be started and even logged into a Unity session, but the system reboots in this case too, even from a minor RAM load. So it's pretty obvious to me now that this laptop doesn't support 4 GiB of RAM or more.

    Read the article

  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company, and they want it to run on the nginx web server. Their requirements are that it should be capable of the following: hotlink protection, MP4 & FLV pseudo-streaming, and secure streaming. nginx fulfills these requirements, and I have been configuring their server for the past 2 days; as I am new to this field, hotlink prevention is all I have achieved in those 2 days. The problem I am stuck on is FLV pseudo-streaming: getting MP4 pseudo-streaming to work was a piece of cake, but FLV pseudo-streaming has me completely stuck. I have converted my FLV videos with the flvmdi tool to insert many keyframes, but the problem is, when I try to seek the video from the keyframes generated by flvmdi, i.e. test.flv?start=2681223, the video does not load, but the audio pseudo-stream works fine. So it seems there is no problem with my FLV configuration in the nginx.conf file. The guide I used to compile my nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, adding the additional module --with-http_flv_module. This forum is really active, so I hope I will resolve my problem as soon as you guys provide some guidance.
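
    For reference, the stock flv module only needs a location block like the sketch below. One thing worth knowing: its start= argument is a byte offset into the file, not a timestamp, so the player has to request offsets that land exactly on keyframe tags (taken from the metadata flvmdi injects); an offset that falls mid-stream typically plays sound with no picture, which may be what is happening above.

        location ~ \.flv$ {
            flv;
        }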

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I seek the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; What I need to know is if there is a script that can be sent out along with a URI or the package proper to facilitate installation on the user-side of the Windows Store app. I am well aware that numerous guides exist from Microsoft on pushing out Line-of-Business (LOB) (AKA Enterprise) applications -- that is, apps that have been developed in-house and are not for the consumption of the Windows store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that appear currently on the Windows Store, and for them to be installed in a silent manner. I've seen this guide; http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines that are administered by a system running Windows Server 2012 R2 with the 'System Center 2012' bundle installed. The systems I target are considerably more decentralised than this, making this guide inappropriate. I have a hunch that Microsoft have deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged

    Read the article

  • Is it a good Idea to switch to a SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320GB) (Datasheet) as the main drive and a Seagate Momentus 5400.3 120GB as the secondary drive. I dual-boot Windows and Linux, but I don't need the Windows partition any longer; a 120GB SSD would be more than sufficient space-wise. Speed is not an issue for me: I make heavy use of tmpfs (ramdrive) within Linux, and transfers of bigger files mainly go through some network filesystem anyway, so a cheaper SSD should do. For the purpose of comparison I chose the OCZ Vertex Plus 120GB. Power consumption is always a big promotional point the industry uses to make me want to buy their SSDs; a sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not so astonishing any longer.
        Hitachi 320GB HDD:
            Startup (W, peak, max.) 4.5
            Seek (W, avg.) 1.7
            Read / Write (W, avg.) 1.4
            Performance idle (W, avg.) 1.3
            Active idle (W, avg.) 0.8
            Low power idle (W, avg.) 0.5
            Standby (W, avg.) 0.2
            Sleep 0.1
        OCZ 120GB SSD:
            1.5W active
            0.3W standby
    I see that there are differences, but they don't actually seem as big as I thought they were. And compared to the power consumption of the rest of my system, I wonder if they make a difference at all. Have I just looked at the whole thing wrong, or might I be better off buying another battery for my laptop?
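
    A back-of-envelope check makes the same point with numbers. Everything below except the drive figures quoted above is an assumption (platform draw and duty cycle especially vary a lot):

        # Rough battery-impact estimate; drive numbers from the datasheets above.
        platform_w = 12.0        # assumed draw of the rest of the laptop while working
        battery_wh = 48.0        # assumed battery capacity

        hdd_w = 0.7 * 0.8 + 0.3 * 1.4   # 70% active idle (0.8 W) + 30% read/write (1.4 W)
        ssd_w = 0.7 * 0.3 + 0.3 * 1.5   # same duty cycle with the OCZ figures

        for name, drive_w in (("HDD", hdd_w), ("SSD", ssd_w)):
            hours = battery_wh / (platform_w + drive_w)
            print(f"{name}: {drive_w:.2f} W -> {hours:.2f} h")
        # ~0.98 W vs ~0.66 W: on a 12 W platform that is only minutes of extra runtime.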

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with glusterfs running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, and longer-lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? I'm just thinking that thousands of small files might not be recorded as efficiently as one or a few large zips. Also, even after the burning program verifies the disc, I also use Beyond Compare to compare the files with those on the disc. They always binary-compare as identical, but I hear the drive stuttering, presumably as the head is being shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that burning individual files to the disc is not as readable, which causes the head to stutter? There aren't any problems; my disc burns are reliable. I'm just thinking more of efficiency and longevity; the discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly. I've also used Nero in the past. I burn whole discs, closed, finalised. I'm not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.
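
    The manual Beyond Compare pass is scriptable, and hashing both trees in sorted order keeps each side reading sequentially rather than seeking back and forth between files. A small sketch (paths are placeholders; files are read whole, so it assumes they fit in memory):

        import hashlib, pathlib

        def tree_hashes(root):
            # Map of relative path -> SHA-256, walking files in sorted order.
            root = pathlib.Path(root)
            return {p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
                    for p in sorted(root.rglob("*")) if p.is_file()}

        src = tree_hashes("D:/to_burn")       # placeholder source folder
        disc = tree_hashes("E:/")             # placeholder burned disc
        for rel, digest in src.items():
            if disc.get(rel) != digest:
                print("MISMATCH:", rel)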

    Read the article

  • GDI+ crashes when loading PNG from IStream

    - by konforce
    I wrote something to load PNG files from a custom C++ IStream via GDI+. It worked great until I ran it on Vista machines. Crashes every time. When compiled on VS 2008, I found that inserting code into the IStream::AddRef method, such as a cout, made the problem go away. When compiling with VS 2010, it still crashes regardless of that. I stripped the program down to its basics. I copied a FileStream straight from Microsoft's documentation. It can load PNGs when using Bitmap::FromFile. It can load JPEGs, GIFs, and BMPs via FromFile or FromStream. So in short: on Vista, PNG files loaded via Bitmap::FromStream crash. #pragma comment(lib, "gdiplus.lib") #include <iostream> #include <objidl.h> #include <gdiplus.h> class FileStream : public IStream { public: FileStream(HANDLE hFile) { _refcount = 1; _hFile = hFile; } ~FileStream() { if (_hFile != INVALID_HANDLE_VALUE) { ::CloseHandle(_hFile); } } public: HRESULT static OpenFile(LPCWSTR pName, IStream ** ppStream, bool fWrite) { HANDLE hFile = ::CreateFileW(pName, fWrite ? GENERIC_WRITE : GENERIC_READ, FILE_SHARE_READ, NULL, fWrite ? CREATE_ALWAYS : OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); if (hFile == INVALID_HANDLE_VALUE) return HRESULT_FROM_WIN32(GetLastError()); *ppStream = new FileStream(hFile); if(*ppStream == NULL) CloseHandle(hFile); return S_OK; } virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, void ** ppvObject) { if (iid == __uuidof(IUnknown) || iid == __uuidof(IStream) || iid == __uuidof(ISequentialStream)) { *ppvObject = static_cast<IStream*>(this); AddRef(); return S_OK; } else return E_NOINTERFACE; } virtual ULONG STDMETHODCALLTYPE AddRef(void) { return (ULONG)InterlockedIncrement(&_refcount); } virtual ULONG STDMETHODCALLTYPE Release(void) { ULONG res = (ULONG) InterlockedDecrement(&_refcount); if (res == 0) delete this; return res; } // ISequentialStream Interface public: virtual HRESULT STDMETHODCALLTYPE Read(void* pv, ULONG cb, ULONG* pcbRead) { ULONG local_pcbRead; BOOL rc = ReadFile(_hFile, pv, cb, &local_pcbRead, NULL); if (pcbRead) *pcbRead = local_pcbRead; return (rc) ? S_OK : HRESULT_FROM_WIN32(GetLastError()); } virtual HRESULT STDMETHODCALLTYPE Write(void const* pv, ULONG cb, ULONG* pcbWritten) { BOOL rc = WriteFile(_hFile, pv, cb, pcbWritten, NULL); return rc ? 
S_OK : HRESULT_FROM_WIN32(GetLastError()); } // IStream Interface public: virtual HRESULT STDMETHODCALLTYPE SetSize(ULARGE_INTEGER) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE CopyTo(IStream*, ULARGE_INTEGER, ULARGE_INTEGER*, ULARGE_INTEGER*) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE Commit(DWORD) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE Revert(void) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE LockRegion(ULARGE_INTEGER, ULARGE_INTEGER, DWORD) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE UnlockRegion(ULARGE_INTEGER, ULARGE_INTEGER, DWORD) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE Clone(IStream **) { return E_NOTIMPL; } virtual HRESULT STDMETHODCALLTYPE Seek(LARGE_INTEGER liDistanceToMove, DWORD dwOrigin, ULARGE_INTEGER* lpNewFilePointer) { DWORD dwMoveMethod; switch(dwOrigin) { case STREAM_SEEK_SET: dwMoveMethod = FILE_BEGIN; break; case STREAM_SEEK_CUR: dwMoveMethod = FILE_CURRENT; break; case STREAM_SEEK_END: dwMoveMethod = FILE_END; break; default: return STG_E_INVALIDFUNCTION; break; } if (SetFilePointerEx(_hFile, liDistanceToMove, (PLARGE_INTEGER) lpNewFilePointer, dwMoveMethod) == 0) return HRESULT_FROM_WIN32(GetLastError()); return S_OK; } virtual HRESULT STDMETHODCALLTYPE Stat(STATSTG* pStatstg, DWORD grfStatFlag) { if (GetFileSizeEx(_hFile, (PLARGE_INTEGER) &pStatstg->cbSize) == 0) return HRESULT_FROM_WIN32(GetLastError()); return S_OK; } private: volatile HANDLE _hFile; volatile LONG _refcount; }; #define USE_STREAM int main() { Gdiplus::GdiplusStartupInput gdiplusStartupInput; ULONG_PTR gdiplusToken; Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL); Gdiplus::Bitmap *bmp; #ifndef USE_STREAM bmp = Gdiplus::Bitmap::FromFile(L"test.png", false); if (!bmp) { std::cerr << " Unable to open image file." << std::endl; return 1; } #else IStream *s; if (FileStream::OpenFile(L"test.png", &s, false) != S_OK) { std::cerr << "Unable to open image file." << std::endl; return 1; } bmp = Gdiplus::Bitmap::FromStream(s, false); #endif std::cout << "Image is " << bmp->GetWidth() << " by " << bmp->GetHeight() << std::endl; Gdiplus::GdiplusShutdown(gdiplusToken); #ifdef USE_STREAM s->Release(); #endif return 0; } Tracing and debugging, shows that it does make some calls to the IStream class. It crashes inside of lastResult = DllExports::GdipCreateBitmapFromStream(stream, &bitmap); from GdiPlusBitmap.h, which is a static inline wrapper over the flat API. Other than the reference counting, the only IStream methods it calls is stat (for file size), read, and seek. Call stack looks like: ntdll.dll!_DbgBreakPoint@0() + 0x1 bytes ntdll.dll!_RtlpBreakPointHeap@4() + 0x28 bytes ntdll.dll!_RtlpValidateHeapEntry@12() + 0x70a3c bytes ntdll.dll!_RtlDebugFreeHeap@12() + 0x9a bytes ntdll.dll!@RtlpFreeHeap@16() + 0x13cdd bytes ntdll.dll!_RtlFreeHeap@12() + 0x2e49 bytes kernel32.dll!_HeapFree@12() + 0x14 bytes ole32.dll!CRetailMalloc_Free() + 0x1c bytes ole32.dll!_CoTaskMemFree@4() + 0x13 bytes GdiPlus.dll!GpPngDecoder::GetImageInfo() + 0x68 bytes GdiPlus.dll!GpDecodedImage::InternalGetImageInfo() + 0x3c bytes GdiPlus.dll!GpDecodedImage::GetImageInfo() + 0x18 bytes GdiPlus.dll!CopyOnWriteBitmap::CopyOnWriteBitmap() + 0x49 bytes GdiPlus.dll!CopyOnWriteBitmap::Create() + 0x1d bytes GdiPlus.dll!GpBitmap::GpBitmap() + 0x2c bytes I was unable to find anybody else with the same problem, so I assume there's something wrong with my implementation...
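
    A likely culprit, given the CoTaskMemFree call inside GpPngDecoder::GetImageInfo in that stack: the Stat() implementation above fills in cbSize but leaves the rest of the STATSTG, including the pwcsName pointer, uninitialized, and the PNG decoder (unlike the JPEG/GIF paths) appears to read the structure and hand that garbage pointer to CoTaskMemFree. This is a hedged guess rather than a confirmed diagnosis, but zeroing the structure costs nothing to try:

        virtual HRESULT STDMETHODCALLTYPE Stat(STATSTG* pStatstg, DWORD grfStatFlag)
        {
            // Zero everything first so pwcsName is NULL rather than stack garbage
            // that the caller might pass to CoTaskMemFree.
            ZeroMemory(pStatstg, sizeof(STATSTG));
            pStatstg->type = STGTY_STREAM;
            if (GetFileSizeEx(_hFile, (PLARGE_INTEGER) &pStatstg->cbSize) == 0)
                return HRESULT_FROM_WIN32(GetLastError());
            return S_OK;
        }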

    Read the article

  • Android App Crashes On Second Run

    - by user1091286
    My app runs fine on first run. On the Menu I added two choices options and quit. options which set up a new intent who goes to a PreferenceActivity and quit which simply call: "android.os.Process.killProcess(android.os.Process.myPid());" On the second time I run my app (after I quit from inside the emulator) it crashes.. Ideas? the menu is called by the foolowing code: @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.menu , menu); return true; } - @Override public boolean onOptionsItemSelected(MenuItem item) { // Set up a new intent between the updater service and the main screen Intent options = new Intent(this, OptionsScreenActivity.class); // Switch case on the options switch (item.getItemId()) { case R.id.options: startActivity(options); return true; case R.id.quit: android.os.Process.killProcess(android.os.Process.myPid()); return true; default: return false; } Code for SeekBarPreference: package com.testapp.logic; import com.testapp.R; import android.content.Context; import android.content.res.TypedArray; import android.preference.Preference; import android.util.AttributeSet; import android.util.Log; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.view.ViewParent; import android.widget.RelativeLayout; import android.widget.SeekBar; import android.widget.SeekBar.OnSeekBarChangeListener; import android.widget.TextView; public class SeekBarPreference extends Preference implements OnSeekBarChangeListener { private final String TAG = getClass().getName(); private static final String ANDROIDNS="http://schemas.android.com/apk/res/android"; private static final String PREFS="com.testapp.logic"; private static final int DEFAULT_VALUE = 5; private int mMaxValue = 100; private int mMinValue = 1; private int mInterval = 1; private int mCurrentValue; private String mUnitsLeft = ""; private String mUnitsRight = ""; private SeekBar mSeekBar; private TextView mStatusText; public SeekBarPreference(Context context, AttributeSet attrs) { super(context, attrs); initPreference(context, attrs); } public SeekBarPreference(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); initPreference(context, attrs); } private void initPreference(Context context, AttributeSet attrs) { setValuesFromXml(attrs); mSeekBar = new SeekBar(context, attrs); mSeekBar.setMax(mMaxValue - mMinValue); mSeekBar.setOnSeekBarChangeListener(this); } private void setValuesFromXml(AttributeSet attrs) { mMaxValue = attrs.getAttributeIntValue(ANDROIDNS, "max", 100); mMinValue = attrs.getAttributeIntValue(PREFS, "min", 0); mUnitsLeft = getAttributeStringValue(attrs, PREFS, "unitsLeft", ""); String units = getAttributeStringValue(attrs, PREFS, "units", ""); mUnitsRight = getAttributeStringValue(attrs, PREFS, "unitsRight", units); try { String newInterval = attrs.getAttributeValue(PREFS, "interval"); if(newInterval != null) mInterval = Integer.parseInt(newInterval); } catch(Exception e) { Log.e(TAG, "Invalid interval value", e); } } private String getAttributeStringValue(AttributeSet attrs, String namespace, String name, String defaultValue) { String value = attrs.getAttributeValue(namespace, name); if(value == null) value = defaultValue; return value; } @Override protected View onCreateView(ViewGroup parent){ RelativeLayout layout = null; try { LayoutInflater mInflater = (LayoutInflater) getContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE); layout = 
(RelativeLayout)mInflater.inflate(R.layout.seek_bar_preference, parent, false); } catch(Exception e) { Log.e(TAG, "Error creating seek bar preference", e); } return layout; } @Override public void onBindView(View view) { super.onBindView(view); try { // move our seekbar to the new view we've been given ViewParent oldContainer = mSeekBar.getParent(); ViewGroup newContainer = (ViewGroup) view.findViewById(R.id.seekBarPrefBarContainer); if (oldContainer != newContainer) { // remove the seekbar from the old view if (oldContainer != null) { ((ViewGroup) oldContainer).removeView(mSeekBar); } // remove the existing seekbar (there may not be one) and add ours newContainer.removeAllViews(); newContainer.addView(mSeekBar, ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.WRAP_CONTENT); } } catch(Exception ex) { Log.e(TAG, "Error binding view: " + ex.toString()); } updateView(view); } /** * Update a SeekBarPreference view with our current state * @param view */ protected void updateView(View view) { try { RelativeLayout layout = (RelativeLayout)view; mStatusText = (TextView)layout.findViewById(R.id.seekBarPrefValue); mStatusText.setText(String.valueOf(mCurrentValue)); mStatusText.setMinimumWidth(30); mSeekBar.setProgress(mCurrentValue - mMinValue); TextView unitsRight = (TextView)layout.findViewById(R.id.seekBarPrefUnitsRight); unitsRight.setText(mUnitsRight); TextView unitsLeft = (TextView)layout.findViewById(R.id.seekBarPrefUnitsLeft); unitsLeft.setText(mUnitsLeft); } catch(Exception e) { Log.e(TAG, "Error updating seek bar preference", e); } } public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) { int newValue = progress + mMinValue; if(newValue > mMaxValue) newValue = mMaxValue; else if(newValue < mMinValue) newValue = mMinValue; else if(mInterval != 1 && newValue % mInterval != 0) newValue = Math.round(((float)newValue)/mInterval)*mInterval; // change rejected, revert to the previous value if(!callChangeListener(newValue)){ seekBar.setProgress(mCurrentValue - mMinValue); return; } // change accepted, store it mCurrentValue = newValue; mStatusText.setText(String.valueOf(newValue)); persistInt(newValue); } public void onStartTrackingTouch(SeekBar seekBar) {} public void onStopTrackingTouch(SeekBar seekBar) { notifyChanged(); } @Override protected Object onGetDefaultValue(TypedArray ta, int index){ int defaultValue = ta.getInt(index, DEFAULT_VALUE); return defaultValue; } @Override protected void onSetInitialValue(boolean restoreValue, Object defaultValue) { if(restoreValue) { mCurrentValue = getPersistedInt(mCurrentValue); } else { int temp = 0; try { temp = (Integer)defaultValue; } catch(Exception ex) { Log.e(TAG, "Invalid default value: " + defaultValue.toString()); } persistInt(temp); mCurrentValue = temp; } } } Logcat: E/AndroidRuntime( 4525): FATAL EXCEPTION: main E/AndroidRuntime( 4525): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{com.ui.testapp/com.logic.testapp.SeekBarPreferen ce}: java.lang.InstantiationException: can't instantiate class com.logic.testapp.SeekBarPreference; no empty constructor E/AndroidRuntime( 4525): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1879) E/AndroidRuntime( 4525): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1980) E/AndroidRuntime( 4525): at android.app.ActivityThread.access$600(ActivityThread.java:122) E/AndroidRuntime( 4525): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1146) E/AndroidRuntime( 4525): at 
android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime( 4525): at android.os.Looper.loop(Looper.java:137) E/AndroidRuntime( 4525): at android.app.ActivityThread.main(ActivityThread.java:4340) E/AndroidRuntime( 4525): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 4525): at java.lang.reflect.Method.invoke(Method.java:511) E/AndroidRuntime( 4525): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784) E/AndroidRuntime( 4525): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551) E/AndroidRuntime( 4525): at dalvik.system.NativeStart.main(Native Method) E/AndroidRuntime( 4525): Caused by: java.lang.InstantiationException: can't instantiate class com.logic.testapp.SeekBarPreference; no empty construc tor E/AndroidRuntime( 4525): at java.lang.Class.newInstanceImpl(Native Method) E/AndroidRuntime( 4525): at java.lang.Class.newInstance(Class.java:1319) E/AndroidRuntime( 4525): at android.app.Instrumentation.newActivity(Instrumentation.java:1023) E/AndroidRuntime( 4525): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1870) E/AndroidRuntime( 4525): ... 11 more W/ActivityManager( 84): Force finishing activity com.ui.testapp/com.logic.testapp.SeekBarPreference W/ActivityManager( 84): Force finishing activity com.ui.testapp/.MainScreen I/WindowManager( 84): createSurface Window{41a90320 paused=false}: DRAW NOW PENDING W/ActivityManager( 84): Activity pause timeout for ActivityRecord{4104a848 com.ui.testapp/com.logic.testapp.SeekBarPreference} W/NetworkManagementSocketTagger( 84): setKernelCountSet(10021, 1) failed with errno -2 I/WindowManager( 84): createSurface Window{412bcc10 com.android.launcher/com.android.launcher2.Launcher paused=false}: DRAW NOW PENDING W/NetworkManagementSocketTagger( 84): setKernelCountSet(10045, 0) failed with errno -2 I/Process ( 4525): Sending signal. PID: 4525 SIG: 9 I/ActivityManager( 84): Process com.ui.testapp (pid 4525) has died. I/WindowManager( 84): WIN DEATH: Window{41a6c9c0 com.ui.testapp/com.ui.testapp.MainScreen paused=true}

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week, one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text, then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated!
/// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "gzip"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don’t directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this, a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:
    - CaptureStream, CaptureString: Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.
    - TransformStream, TransformString: Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and let you modify the output, then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.
    - TransformWrite, TransformWriteString: Allows you to transform the Response data as it is written, in its original chunk size, in the Stream’s Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks.
Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer, 0, buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation, calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream-based class. By the way, the example used here is kind of a cool trick: it transforms “~/” expressions inside of the final generated HTML output – even in plain HTML, not just server controls – into the appropriate application-relative path, in the same way that ResolveUrl would.
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a> and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a> without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter, I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there’s definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance anytime you hook up a Response.Filter, and make sure it doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What’s different from other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached, and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage.
/// </remarks>
/// </summary>
public class ResponseFilterStream : Stream
{
    /// <summary>
    /// The original stream
    /// </summary>
    Stream _stream;

    /// <summary>
    /// Current position in the original stream
    /// </summary>
    long _position;

    /// <summary>
    /// Stream that original content is read into
    /// and then passed to TransformStream function
    /// </summary>
    MemoryStream _cacheStream = new MemoryStream(5000);

    /// <summary>
    /// Internal pointer that keeps track of the size
    /// of the cacheStream
    /// </summary>
    int _cachePointer = 0;

    /// <summary>
    /// Creates the filter around the original response stream
    /// </summary>
    /// <param name="responseStream"></param>
    public ResponseFilterStream(Stream responseStream)
    {
        _stream = responseStream;
    }

    /// <summary>
    /// Determines whether the stream is captured
    /// </summary>
    private bool IsCaptured
    {
        get
        {
            if (CaptureStream != null || CaptureString != null ||
                TransformStream != null || TransformString != null)
                return true;

            return false;
        }
    }

    /// <summary>
    /// Determines whether the Write method is outputting data immediately
    /// or delaying output until Flush() is fired.
    /// </summary>
    private bool IsOutputDelayed
    {
        get
        {
            if (TransformStream != null || TransformString != null)
                return true;

            return false;
        }
    }

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a MemoryStream instance. Output is captured but won't
    /// affect Response output.
    /// </summary>
    public event Action<MemoryStream> CaptureStream;

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a string. Output is captured but won't affect Response output.
    /// </summary>
    public event Action<string> CaptureString;

    /// <summary>
    /// Event that allows you to transform the stream as each chunk of
    /// the output is written in the Write() operation of the stream.
    /// This means that it's possible/likely that the input
    /// buffer will not contain the full response output but only
    /// one of potentially many chunks.
    ///
    /// This event is called as part of the filter stream's Write()
    /// operation.
    /// </summary>
    public event Func<byte[], byte[]> TransformWrite;

    /// <summary>
    /// Event that allows you to transform the response stream as
    /// each chunk of byte[] output is written during the stream's Write
    /// operation. This means it's possible/likely that the string
    /// passed to the handler only contains a portion of the full
    /// output. Typical buffer chunks are around 16k a piece.
    ///
    /// This event is called as part of the stream's Write operation.
    /// </summary>
    public event Func<string, string> TransformWriteString;

    /// <summary>
    /// This event allows capturing and transformation of the entire
    /// output stream by caching all write operations and delaying final
    /// response output until Flush() is called on the stream.
    /// </summary>
    public event Func<MemoryStream, MemoryStream> TransformStream;

    /// <summary>
    /// Event that can be hooked up to handle Response.Filter
    /// Transformation. Passed a string that you can modify and
    /// return back as a return value. The modified content
    /// will become the final output.
    /// </summary>
    public event Func<string, string> TransformString;

    protected virtual void OnCaptureStream(MemoryStream ms)
    {
        if (CaptureStream != null)
            CaptureStream(ms);
    }

    private void OnCaptureStringInternal(MemoryStream ms)
    {
        if (CaptureString != null)
        {
            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            OnCaptureString(content);
        }
    }

    protected virtual void OnCaptureString(string output)
    {
        if (CaptureString != null)
            CaptureString(output);
    }

    protected virtual byte[] OnTransformWrite(byte[] buffer)
    {
        if (TransformWrite != null)
            return TransformWrite(buffer);

        return buffer;
    }

    private byte[] OnTransformWriteStringInternal(byte[] buffer)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = OnTransformWriteString(encoding.GetString(buffer));
        return encoding.GetBytes(output);
    }

    private string OnTransformWriteString(string value)
    {
        if (TransformWriteString != null)
            return TransformWriteString(value);

        return value;
    }

    protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
    {
        if (TransformStream != null)
            return TransformStream(ms);

        return ms;
    }

    /// <summary>
    /// Allows transforming of strings
    ///
    /// Note this handler is internal and not meant to be overridden
    /// as the TransformString event has to be hooked up in order
    /// for this handler to even fire, to avoid the overhead of string
    /// conversion on every pass through.
    /// </summary>
    /// <param name="responseText"></param>
    /// <returns></returns>
    private string OnTransformCompleteString(string responseText)
    {
        if (TransformString != null)
            responseText = TransformString(responseText);  // capture the transformed result

        return responseText;
    }

    /// <summary>
    /// Wrapper method for OnTransformCompleteString that handles
    /// stream to string and vice versa conversions
    /// </summary>
    /// <param name="ms"></param>
    /// <returns></returns>
    internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
    {
        if (TransformString == null)
            return ms;

        string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
        content = TransformString(content);
        byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);

        ms = new MemoryStream();
        ms.Write(buffer, 0, buffer.Length);

        return ms;
    }

    public override bool CanRead
    {
        get { return true; }
    }

    public override bool CanSeek
    {
        get { return true; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override long Length
    {
        get { return 0; }
    }

    public override long Position
    {
        get { return _position; }
        set { _position = value; }
    }

    public override long Seek(long offset, System.IO.SeekOrigin direction)
    {
        return _stream.Seek(offset, direction);
    }

    public override void SetLength(long length)
    {
        _stream.SetLength(length);
    }

    public override void Close()
    {
        _stream.Close();
    }

    /// <summary>
    /// Override flush by writing out the cached stream data
    /// </summary>
    public override void Flush()
    {
        if (IsCaptured && _cacheStream.Length > 0)
        {
            // Check for transform implementations
            _cacheStream = OnTransformCompleteStream(_cacheStream);
            _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

            OnCaptureStream(_cacheStream);
            OnCaptureStringInternal(_cacheStream);

            // write the stream back out if output was delayed
            if (IsOutputDelayed)
                _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

            // Clear the cache once we've written it out
            _cacheStream.SetLength(0);
        }

        // default flush behavior
        _stream.Flush();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return _stream.Read(buffer, offset, count);
    }

    /// <summary>
    /// Overridden to capture output written by ASP.NET into a cached
    /// stream that is written out later when Flush() is called.
    /// </summary>
    /// <param name="buffer"></param>
    /// <param name="offset"></param>
    /// <param name="count"></param>
    public override void Write(byte[] buffer, int offset, int count)
    {
        if (IsCaptured)
        {
            // copy to holding buffer only - we'll write out later
            _cacheStream.Write(buffer, offset, count);
            _cachePointer += count;
        }

        // just transform this buffer
        if (TransformWrite != null)
            buffer = OnTransformWrite(buffer);
        if (TransformWriteString != null)
            buffer = OnTransformWriteStringInternal(buffer);

        if (!IsOutputDelayed)
        {
            // a transform may have changed the buffer's length,
            // so write out the whole (possibly new) buffer
            _stream.Write(buffer, 0, buffer.Length);
        }
    }
}

The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-Jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

A few things to consider

Performance

Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

Response.Filter works everywhere

A few questions came up in comments and discussion as to capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType.
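As a rough illustration, a site-wide module along those lines might look like the sketch below. This is only a sketch: it assumes the ResponseFilterStream class and the FixPaths handler from this article, and the pipeline event and ContentType check shown here are one common approach, not the only workable one.

using System;
using System.Web;

public class ResponseFilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.PostReleaseRequestState += (sender, e) =>
        {
            HttpResponse response = app.Context.Response;

            // IIS 7 integrated mode routes static files through the
            // pipeline too, so only attach the filter to markup responses
            if (response.ContentType == "text/html")
            {
                ResponseFilterStream filter = new ResponseFilterStream(response.Filter);
                filter.TransformString += FixPaths;
                response.Filter = filter;
            }
        };
    }

    // same FixPaths logic as shown earlier in the article
    private static string FixPaths(string output)
    {
        string path = HttpContext.Current.Request.ApplicationPath;
        if (path == "/")
            path = "";

        return output.Replace("\"~/", "\"" + path + "/")
                     .Replace("'~/", "'" + path + "/");
    }

    public void Dispose() { }
}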
Response.Filter Chaining

Originally I thought that filter chaining didn't work at all due to a bug in the stream implementation code, but it's quite possible to assign multiple filters to the Response.Filter property. So the following actually works to both compress the output and apply the transformed content:

WebUtils.GZipEncodePage();

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

However, the following does not work, resulting in invalid content encoding errors:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, and in which order. In this case the GZip/Deflate stream filter apparently relies on the original content length of the output and chokes when the content is modified. But if the compression is attached first it works fine, as unintuitive as that may seem.

Resources

Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction

    When I previously created messages to send to the GovTalk service, I used XmlDocument to build the request. While this worked, it left a number of problems; not least that for every message a special function would need to be created. This is OK for the short term, but the biggest cost in any software project is maintenance, and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce.

    The Sweet Chilli Sauce

    The request to search for a company based on its number is as follows;

<GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd"
    xmlns="http://www.govtalk.gov.uk/CM/envelope"
    xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
    xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <EnvelopeVersion>1.0</EnvelopeVersion>
  <Header>
    <MessageDetails>
      <Class>NumberSearch</Class>
      <Qualifier>request</Qualifier>
      <TransactionID>1</TransactionID>
    </MessageDetails>
    <SenderDetails>
      <IDAuthentication>
        <SenderID>????????????????????????????????</SenderID>
        <Authentication>
          <Method>CHMD5</Method>
          <Value>????????????????????????????????</Value>
        </Authentication>
      </IDAuthentication>
    </SenderDetails>
  </Header>
  <GovTalkDetails>
    <Keys/>
  </GovTalkDetails>
  <Body>
    <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
      <PartialCompanyNumber>99999999</PartialCompanyNumber>
      <DataSet>LIVE</DataSet>
      <SearchRows>1</SearchRows>
    </NumberSearchRequest>
  </Body>
</GovTalkMessage>

    This is the XML that we send to the GovTalk Service, and we get back a list of companies that match the criteria passed in. A message is structured in two parts: the envelope, which identifies the person sending the request along with the name of the request, and the body, which gives the details of the company we are looking for.

    The Chilli

    What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start, we need to create an object which will represent the contents of the message we are sending. However, there is a common set of properties in all the messages that we send to Companies House:

    SenderId – the id of the person sending the message
    SenderPassword – the password associated with the Id
    TransactionId – unique identifier for the message
    AuthenticationValue – authenticates the request

    Because these properties are specific to the Companies House messages, and because they are shared by all of them, they are perfect candidates for a base class.
    The class is as follows;

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Security.Cryptography;
using System.Text;
using System.Text.RegularExpressions;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace CompanyHub.Services
{
    public class GovTalkRequest
    {
        public GovTalkRequest()
        {
            SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId");
            SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword");
            TransactionId = DateTime.Now.Ticks.ToString();
            AuthenticationValue = EncodePassword(
                String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId));
        }

        /// <summary>
        /// Returns the Sender ID to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderID { get; set; }

        /// <summary>
        /// Returns the password to be used when communicating with the GovTalk Service
        /// </summary>
        public String SenderPassword { get; set; }

        /// <summary>
        /// Transaction Id - uses the Time and Date converted to Ticks
        /// </summary>
        public String TransactionId { get; set; }

        /// <summary>
        /// The authentication value that will be used when
        /// communicating with the GovTalk Service
        /// </summary>
        public String AuthenticationValue { get; set; }

        /// <summary>
        /// Encodes password(s) using MD5
        /// </summary>
        /// <param name="clearPassword"></param>
        /// <returns></returns>
        public static String EncodePassword(String clearPassword)
        {
            MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider();
            byte[] hashedBytes = md5Hasher.ComputeHash(
                ASCIIEncoding.Default.GetBytes(clearPassword));

            return Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower();
        }
    }
}

    There is nothing particularly clever here, except for the EncodePassword method, which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search, in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set, so this can be set in the constructor – and the SearchRows. Again, all are set as properties, with the SearchRows and DataSet initialized in the constructor.

public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable
{
    public CompanyNumberSearchRequest() : base()
    {
        DataSet = "LIVE";
        SearchRows = 1;
    }

    /// <summary>
    /// Company Number to search against
    /// </summary>
    public String PartialCompanyNumber { get; set; }

    /// <summary>
    /// What DataSet should be searched for the company
    /// </summary>
    public String DataSet { get; set; }

    /// <summary>
    /// How many rows should be returned
    /// </summary>
    public int SearchRows { get; set; }

    public void Dispose()
    {
        PartialCompanyNumber = String.Empty;
        DataSet = "LIVE";
        SearchRows = 1;
    }
}

    As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is plain good practice to dispose of objects when coding, but because it also gives us more versatility when using the object.
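    Before moving on, here's a quick sketch of what using the request object on its own looks like. The company number is just an example value, and this assumes the SenderId/SenderPassword settings are available via RoleEnvironment as shown above:

using (CompanyNumberSearchRequest request = new CompanyNumberSearchRequest())
{
    request.PartialCompanyNumber = "01234567";

    // the base constructor has already generated the transaction id and
    // the MD5 authentication value from SenderID + SenderPassword + TransactionId
    Console.WriteLine(request.TransactionId);
    Console.WriteLine(request.AuthenticationValue);
}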
    There are four stages in making a request, and this is reflected in the four methods we execute in making a call to the Companies House service:

    1. Create a request
    2. Send a request
    3. Check the status
    4. If OK, then get the results of the request

    I've implemented each of these stages within a static class called Toolbox – which also means I don't need to create an instance of the class to use it. When making a request there are three stages:

    1. Get the template for the message
    2. Serialize the object representing the message
    3. Transform the serialized object using a predefined XSLT file

    Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. To make the code as re-usable as possible, I build the full 'path' within the GetRequest method.

requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);

    So we now have the full path of the file within the assembly. Now all we need to do is retrieve the assembly and get the resource.

asm = Assembly.GetExecutingAssembly();
sr = asm.GetManifestResourceStream(requestFile);

    Once retrieved, the stream can be returned to the calling function, and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message.

// Serialize object containing Request, load into XML Document
t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);

serializer.Serialize(xmlTextWriter, Obj);

ms = (MemoryStream)xmlTextWriter.BaseStream;
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());

    First off, we need the type of the object, so we make a call to the GetType method of the object containing the message properties. Next we need a MemoryStream, an XmlSerializer and an XmlTextWriter, so these are initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted back into a MemoryStream, and that MemoryStream is in turn converted into a string.

    ConvertByteArrayToString

    This is a fairly simple function which uses an ASCIIEncoding object, found within the System.Text namespace, to convert an array of bytes into a string.

public static String ConvertByteArrayToString(byte[] bytes)
{
    System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding();
    return enc.GetString(bytes);
}

    I only put it into a function because I will be using this in various places.

    The Sauce

    When adding support for other messages, outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message.
    For the CompanyNumberSearch the XSLT file is as follows;

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd"
        xmlns="http://www.govtalk.gov.uk/CM/envelope"
        xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
        xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <EnvelopeVersion>1.0</EnvelopeVersion>
      <Header>
        <MessageDetails>
          <Class>NumberSearch</Class>
          <Qualifier>request</Qualifier>
          <TransactionID>
            <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/>
          </TransactionID>
        </MessageDetails>
        <SenderDetails>
          <IDAuthentication>
            <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID>
            <Authentication>
              <Method>CHMD5</Method>
              <Value>
                <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/>
              </Value>
            </Authentication>
          </IDAuthentication>
        </SenderDetails>
      </Header>
      <GovTalkDetails>
        <Keys/>
      </GovTalkDetails>
      <Body>
        <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
          <PartialCompanyNumber>
            <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/>
          </PartialCompanyNumber>
          <DataSet>
            <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/>
          </DataSet>
          <SearchRows>
            <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/>
          </SearchRows>
        </NumberSearchRequest>
      </Body>
    </GovTalkMessage>
  </xsl:template>
</xsl:stylesheet>

    The outer two tags define that this is an XSLT stylesheet and give the root template from which the nodes are matched. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object, which will transform the XSLT template and the serialized object into the request to Companies House.

xslt = new XslCompiledTransform();
resultStream = new MemoryStream();
writer = new XmlTextWriter(resultStream, Encoding.ASCII);
doc = new XmlDocument();

    The transformation requires an XmlTextWriter to write the XML (writer) and a stream to hold the transformed output (resultStream). The serialized XML will be loaded into an XmlDocument object (doc) prior to the transformation.

// create XSLT Template
xslTemplate = Toolbox.GetRequest(Template);
xslTemplate.Seek(0, SeekOrigin.Begin);
templateReader = XmlReader.Create(xslTemplate);
xslt.Load(templateReader);

    I have stored all the templates as a series of embedded resources, and the GetRequest call takes the name of the template and extracts the relevant XSLT file.

/// <summary>
/// Gets the framework XML which makes the request
/// </summary>
/// <param name="RequestFile"></param>
/// <returns></returns>
public static Stream GetRequest(String RequestFile)
{
    String requestFile = String.Empty;
    Stream sr = null;
    Assembly asm = null;

    try
    {
        requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile);

        asm = Assembly.GetExecutingAssembly();
        sr = asm.GetManifestResourceStream(requestFile);
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        asm = null;
    }

    return sr;
} // end public static Stream GetRequest

    We first take the template name and expand it to include the full namespace to the embedded resource. I like to keep all my schemas in the same directory, and so the namespace reflects this.
    Then we get the currently executing assembly (which contains the resources) with the call to GetExecutingAssembly(). Finally, we get a stream which contains the XSLT file. We use this stream to load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object.

    We convert the object containing the message properties into XML by serializing it; that is, calling the Serialize() method of the XmlSerializer object. To set up the object we do the following;

t = Obj.GetType();
ms = new MemoryStream();
serializer = new XmlSerializer(t);
xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII);

    We first determine the type of the object being transferred by calling GetType(). We create an XmlSerializer object by passing it the type of the object being serialized. The serializer writes to a memory stream, and that is linked to an XmlTextWriter. The next job is to serialize the object and load it into an XmlDocument.

serializer.Serialize(xmlTextWriter, Obj);
ms = (MemoryStream)xmlTextWriter.BaseStream;
xmlRequest = new XmlTextReader(ms);
GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray());
doc.LoadXml(GovTalkRequest);

    Time to transform the XML to construct the full request.

xslt.Transform(doc, writer);
resultStream.Seek(0, SeekOrigin.Begin);
request = Toolbox.ConvertByteArrayToString(resultStream.ToArray());

    So that creates the full request to be sent to Companies House.

    Sending the request

    So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House service.

    Configuration within an Azure project

    There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article, but the following is a summary. Configuration is defined in two files within the parent project: *.csdef, which contains the definitions of the configuration settings, and *.cscfg, which contains their values (shown further below).

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="CompanyHub.Host">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
  </WebRole>
  <WebRole name="CompanyHub.Services">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="8080" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="SenderId"/>
      <Setting name="SenderPassword" />
      <Setting name="GovTalkUrl"/>
    </ConfigurationSettings>
  </WebRole>
  <WorkerRole name="CompanyHub.Worker">
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>

    Above is the configuration definition from the project. What we are interested in, however, is the ConfigurationSettings tag of the CompanyHub.Services WebRole.
    There are four configuration settings here, but at the moment we are interested in the second to fourth settings: SenderId, SenderPassword and GovTalkUrl. The values of these settings are defined in the ServiceDefinition.cscfg file;

<?xml version="1.0"?>
<ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="CompanyHub.Host">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Services">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="SenderId" value="UserID"/>
      <Setting name="SenderPassword" value="Password"/>
      <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/>
    </ConfigurationSettings>
  </Role>
  <Role name="CompanyHub.Worker">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

    Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters, we can now transmit the request. This is done by 'POST'ing a stream of XML to the Companies House servers.

govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");
request = WebRequest.Create(govTalkUrl);
request.Method = "POST";
request.ContentType = "text/xml";
writer = new StreamWriter(request.GetRequestStream());
writer.WriteLine(RequestMessage);
writer.Close();

    We use the WebRequest object to send the request: set the method of sending to 'POST' and the type of data to text/xml. Once set up, all we do is write the request to the writer – this sends the request to Companies House.

    Did the Request Work Part I – Getting the response

    Having sent a request, we now need the result of that request.

response = request.GetResponse();
reader = response.GetResponseStream();
result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));

    The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls, the results come in the form of a stream, which we convert into a string.
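    Putting the send and receive steps together, a combined helper along these lines is roughly what a Toolbox.SendGovTalkRequest call might look like. This is only a sketch – it assumes the GovTalkUrl setting shown above and uses StreamReader.ReadToEnd in place of the Toolbox.ReadFully helper; error handling is omitted for brevity:

public static String SendGovTalkRequest(String requestMessage)
{
    String govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl");

    WebRequest request = WebRequest.Create(govTalkUrl);
    request.Method = "POST";
    request.ContentType = "text/xml";

    // POST the XML request to the gateway
    using (StreamWriter writer = new StreamWriter(request.GetRequestStream()))
    {
        writer.WriteLine(requestMessage);
    }

    // read the XML response back as a string
    WebResponse response = request.GetResponse();
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}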
    Did the Request Work Part II – Translating the Response

    Much like XSLT and XML were used to create the original request, so they can be used to extract the response: by deserializing the result we create an object that contains the response.

    Did it work?

    It would be really great if everything worked all the time. Of course, if it did, then I don't suppose people would pay me and others the big bucks so that our programmes do not:

    a) Collapse in a heap (this is an area of memory)
    b) Blow every fuse in the place in a shower of sparks (this will probably not happen, this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting)
    c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/mars mission set up, unlikely to happen)
    d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of exceeding the wildest imaginations of Hollywood writers – note writers; Hollywood executives have no imagination and, judging by the recent output of that town, have turned plagiarism into an art form)
    e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened)

    So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ev="http://www.govtalk.gov.uk/CM/envelope"
    xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <xsl:template match="/">
    <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Status>
        <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/>
      </Status>
      <Text>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/>
      </Text>
      <Location>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/>
      </Location>
      <Number>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/>
      </Number>
      <Type>
        <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/>
      </Type>
    </GovTalkStatus>
  </xsl:template>
</xsl:stylesheet>

    The only thing different from the previous XSL files is the reference to two namespaces, ev and gt. These are defined in the GovTalk response, at the top of the response;

xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd"
xmlns="http://www.govtalk.gov.uk/CM/envelope"
xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core"

    If we do not put these references into the XSLT template, then the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity.

encoder = new ASCIIEncoding();
ms = new MemoryStream(encoder.GetBytes(statusXML));
serializer = new XmlSerializer(typeof(GovTalkStatus));
messageStatus = (GovTalkStatus)serializer.Deserialize(ms);

    We set up a serialization object using the object type containing the error state and pass to it the result of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state and the error message. All we need to do is check the status. If there is an error, then we can flag an error.
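    The GovTalkStatus class itself is not shown in the article, but based on the fields the XSLT template above extracts, a minimal sketch of it might look like this:

namespace CompanyHub.Services
{
    public class GovTalkStatus
    {
        // Qualifier from the message header: e.g. "response" on success, "error" on failure
        public String Status { get; set; }

        // Error details pulled from GovTalkDetails/GovTalkErrors
        public String Text { get; set; }
        public String Location { get; set; }
        public String Number { get; set; }
        public String Type { get; set; }
    }
}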
    If there is no error, then we extract the results and pass them as an object back to the calling function. We do this by – guess what – defining an XSLT template for the result and using that to create an XML stream which can be deserialized into a .NET object. In this instance the XSLT to create the result of a Company Number Search is;

<?xml version="1.0" encoding="us-ascii"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ev="http://www.govtalk.gov.uk/CM/envelope"
    xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema"
    exclude-result-prefixes="ev">
  <xsl:template match="/">
    <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <CompanyNumber>
        <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/>
      </CompanyNumber>
      <CompanyName>
        <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/>
      </CompanyName>
    </CompanySearchResult>
  </xsl:template>
</xsl:stylesheet>

    and the object definition is;

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace CompanyHub.Services
{
    public class CompanySearchResult
    {
        public CompanySearchResult()
        {
            CompanyNumber = String.Empty;
            CompanyName = String.Empty;
        }

        public String CompanyNumber { get; set; }
        public String CompanyName { get; set; }
    }
}

    The entire code to send a request and interpret the results is;

String request = String.Empty;
String response = String.Empty;
GovTalkStatus status = null;
fault = null;

try
{
    using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest())
    {
        requestObj.PartialCompanyNumber = CompanyNumber;

        request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl");
        response = Toolbox.SendGovTalkRequest(request);
        status = Toolbox.GetMessageStatus(response);

        if (status.Status.ToLower() == "error")
        {
            fault = new HubFault() { Message = status.Text };
        }
        else
        {
            Object obj = Toolbox.GetGovTalkResponse(response,
                "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult));
        }
    }
}
catch (FaultException<ArgumentException> ex)
{
    fault = new HubFault()
    {
        FaultType = ex.Detail.GetType().FullName,
        Message = ex.Detail.Message
    };
}
catch (System.Exception ex)
{
    fault = new HubFault()
    {
        FaultType = ex.GetType().FullName,
        Message = ex.Message
    };
}
finally
{
}

    Wrap up

    So there we have it – a reusable set of functions to send and interpret XML results from an internet-based service. The code is reusable, with little change, for any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service, all I need to do is create the various objects for the result and message sent, and the relevant XSLT files. I might need minor changes for other services, but something like 70-90% will be exactly the same.

    Read the article

  • MemoryStream and BitmapSource

    - by Vinjamuri
    I have a MemoryStream of 10K which was created from a bitmap of 2MB and compressed using JPEG. Since a MemoryStream can't directly be placed in System.Windows.Controls.Image for the GUI, I am using the following intermediate code to convert this back to a BitmapImage and eventually a System.Windows.Controls.Image.

System.Windows.Controls.Image image = new System.Windows.Controls.Image();
BitmapImage imageSource = new BitmapImage();

s.SlideInformation.Thumbnail.Seek(0, SeekOrigin.Begin);

imageSource.BeginInit();
imageSource.CacheOption = BitmapCacheOption.OnDemand;
imageSource.CreateOptions = BitmapCreateOptions.DelayCreation;
imageSource.StreamSource = s.SlideInformation.Thumbnail;
imageSource.EndInit();
imageSource.Freeze();

image.Source = imageSource;
ret = image.Source;

    My question is: when I store this in a BitmapImage, the memory allocation is taking around 2MB. Is this expected? Is there any way to reduce the memory? I have around 300 thumbnails, and this conversion takes around 600MB, which is very high. Appreciate your help!
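    One option worth considering (a sketch, not a confirmed fix for this exact scenario) is to have WPF decode the image at thumbnail resolution rather than at the source bitmap's full size by setting DecodePixelWidth before EndInit; the decoded bitmap then only occupies memory proportional to the requested size. The width of 120 pixels here is an arbitrary example value:

BitmapImage imageSource = new BitmapImage();
s.SlideInformation.Thumbnail.Seek(0, SeekOrigin.Begin);

imageSource.BeginInit();
imageSource.CacheOption = BitmapCacheOption.OnLoad;
// decode at thumbnail resolution instead of the full bitmap size
imageSource.DecodePixelWidth = 120;
imageSource.StreamSource = s.SlideInformation.Thumbnail;
imageSource.EndInit();
imageSource.Freeze();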

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >