Search Results

Search found 6510 results on 261 pages for 'ms ftmg'.


  • How to close and open access to SQL Server 2008 in a Windows application?

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything in between). The number of users can be up to 1000, and Windows Authentication is used. The question is: how do I handle modes, so that some users are allowed to work only in read-only mode, and some users have no access to the DB at all for some period of time? My ideas so far:

    1) Use a table with a mode id for every group of users. On Form Load, the application queries that table for the mode id and behaves accordingly.

    2) Use triggers on the tables that must honor the mode. Each trigger queries the mode value and rejects the operation if access is closed or the mode is read-only.

    I know these are not the best solutions; that's why I'm asking for your advice. There's one more point: if the mode is changed to "access-is-closed" for a group of users, that group must not be able to query the DB from that moment on. The first solution won't handle this, because a user can already be inside the application at that moment, so no Form Load event will fire. How can I do this? Is there an optimal solution? Thank you. Any help would be appreciated.
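    A minimal sketch of the first idea, in C# purely for illustration (the original app is Access 97/VBA; the AccessModes table, its columns, and the mode numbering are all made up for this example; needs System.Data.SqlClient):

        // Hypothetical mode lookup, run once at form load.
        // Assumes a table dbo.AccessModes(GroupName, ModeId) where
        // 0 = full access, 1 = read-only, 2 = access closed.
        static int GetAccessMode(string connStr, string groupName)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "SELECT ModeId FROM dbo.AccessModes WHERE GroupName = @group", conn))
            {
                cmd.Parameters.AddWithValue("@group", groupName);
                conn.Open();
                object result = cmd.ExecuteScalar();
                return result == null ? 2 : (int)result; // unknown group: treat as closed
            }
        }

    As the question itself points out, a client-side check like this only runs at form load and cannot cut off users who are already inside the application; that part has to be enforced on the server side (triggers, or revoking the group's permissions).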


  • C# and Excel best practices

    - by rlp
    I am doing a lot of MS Excel interop in C# (Visual Studio 2012) using Microsoft.Office.Interop.Excel. It requires a lot of tiresome manual code to insert Excel formulas, format text and numbers, and make graphs. I would very much appreciate input on how to do this task better. I have been looking at Visual Studio Tools for Office, but I am uncertain about its capabilities: I gather it is required for making Excel add-ins, but does it help with Excel automation? I have desperately been trying to find information on working with Excel in Visual Studio 2012 using C#. I did find some good but short tutorials; however, I would really like a book on the subject, to learn the field more in depth with regard to functionality and best practices. Searching Amazon with my limited knowledge only gives me books on VSTO using older versions of Visual Studio. I would not like to use VBA. My applications use Excel mainly for visualizing data compiled from different sources. I also do data processing where Excel is not required. Furthermore, I can write C# but not VB.
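    For context, a minimal sketch of the kind of interop code being discussed: writing values, a formula, and some formatting through Microsoft.Office.Interop.Excel (the file path and values are made up; requires Office and the interop assemblies):

        using Excel = Microsoft.Office.Interop.Excel;

        class InteropDemo
        {
            static void Main()
            {
                var app = new Excel.Application();
                Excel.Workbook wb = app.Workbooks.Add();
                Excel.Worksheet ws = (Excel.Worksheet)wb.Worksheets[1];

                ws.Cells[1, 1] = "Total";                  // write text
                ws.Cells[2, 1] = 19.95;                    // write a number
                ws.Range["A3"].Formula = "=SUM(A2:A2)";    // insert a formula
                ws.Range["A1"].Font.Bold = true;           // format text
                ws.Range["A2:A3"].NumberFormat = "0.00";   // format numbers

                wb.SaveAs(@"C:\temp\demo.xlsx");
                wb.Close();
                app.Quit();
            }
        }

    As far as the VSTO question goes: VSTO mainly targets add-ins and document-level customizations; plain automation from a standalone app, as above, needs only the interop assemblies.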


  • Batch Inserts And Prepared Query Error

    - by ircmaxell
    Ok, so I need to populate an MS Access database table with results from a MySQL query. That's not hard at all. I've got the program written to the point where it copies a template .mdb file to a temp name and opens it via ODBC. No problem so far. I've noticed that Access does not support batch inserting (VALUES (foo, bar), (second, query), (third, query)). So that means I need to execute one query per row, and there are potentially hundreds of thousands of rows. Initial performance tests show a rate of around 900 inserts/sec into Access. With our largest data sets, that could mean execution times of minutes (which isn't the end of the world, but obviously the faster the better). So I tried testing a prepared statement, but I keep getting an error:

        Warning: odbc_execute() [function.odbc-execute]: SQL error: [Microsoft][ODBC Microsoft Access Driver]
        COUNT field incorrect, SQL state 07001 in SQLExecute in D:\....php on line 30

    Here's the code I'm using (line 30 is the odbc_execute call):

        $sql = 'INSERT INTO table ([field0], [field1], [field2], [field3], [field4], [field5])
                VALUES (?, ?, ?, ?, ?, ?)';
        $stmt = odbc_prepare($conn, $sql);
        for ($i = 200001; $i < 300001; $i++) {
            $a = array($i, "Field1 $i", "Field2 $i", "Field3 $i", "Field4 $i", $i);
            odbc_execute($stmt, $a);
        }

    So my question is twofold. First, any idea why I'm getting that error? (I've checked, and the number of elements in the array matches the field list, which matches the number of ? parameter markers.) And second, should I even bother with this, or just use straight INSERT statements? Like I said, time isn't critical, but if possible I'd like to get it as low as possible (then again, I may be limited by disk throughput, since 900 operations/sec is high already)... Thanks
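    Separately from the error itself, one common way to speed up masses of single-row inserts is to wrap them in a single transaction so there is only one commit. A hedged sketch of that idea in C#/ODBC for comparison (connection string, table and field names are placeholders, and whether the Access driver rewards this is worth measuring):

        // Sketch: batch many single-row INSERTs into one ODBC transaction
        // (needs System.Data.Odbc; DSN/table/field names are made up).
        using System.Data.Odbc;

        static void BulkInsert(string connStr)
        {
            using (var conn = new OdbcConnection(connStr))
            {
                conn.Open();
                using (OdbcTransaction tx = conn.BeginTransaction())
                using (var cmd = new OdbcCommand(
                    "INSERT INTO table1 ([field0], [field1]) VALUES (?, ?)", conn, tx))
                {
                    cmd.Parameters.Add("p0", OdbcType.Int);
                    cmd.Parameters.Add("p1", OdbcType.VarChar);
                    for (int i = 200001; i < 300001; i++)
                    {
                        cmd.Parameters[0].Value = i;
                        cmd.Parameters[1].Value = "Field1 " + i;
                        cmd.ExecuteNonQuery();   // still one statement per row...
                    }
                    tx.Commit();                 // ...but only one commit
                }
            }
        }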



  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base system and /home. It started acting wonky, so I decided to do a reinstall with 12.10, intending only to reinstall the base partition. After several seconds I realized that the installer was repartitioning the whole drive, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it finds around 100 unique partitions when I run it; they mostly tend to be HFS+ or "Solaris /home" (which I think is just ext4; I've never had Solaris on this machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then do data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory; I'd rather not use photorec since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

        Mon Dec 10 06:03:00 2012
        Command line: TestDisk

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
        Compiler: GCC 4.4
        Compilation date: 2012-11-27T22:44:52
        ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
        /dev/sda: LBA, HPA, LBA48, DCO support
        /dev/sda: size 3907029168 sectors
        /dev/sda: user_max 3907029168 sectors
        /dev/sda: native_max 3907029168 sectors
        Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
        /dev/sr0 is not an ATA disk

        Hard disk list
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
        Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
        Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12

        Partition table type (auto): Intel
        Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
        Partition table type: EFI GPT

        Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
        Bad GPT partition, invalid signature.

        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        Results
        P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB
        interface_write()
        1 P MS Data 2048 3900753919 3900751872
        2 P Linux Swap 3900755968 3907028975 6273008

        search_part()
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 487593984
        recover_EXT2: part_size 3900751872
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        block_group_nr 1
        recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
        recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192
        recover_EXT2: s_blocksize=4096
        recover_EXT2: s_blocks_count 477915164
        recover_EXT2: part_size 3823321312
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        block_group_nr 1
        ....snip......

        MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB
        MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB
        MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB
        MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB
        Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB
        Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB
        Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB
        Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB
        Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB
        Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB
        Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB
        Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB
        Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB
        MS Data 174395467 178483851 4088385
        ..... snip (it keeps going on for quite a while)


  • Best practice to query data from MS SQL Server in C#?

    - by Bruno
    What is the best way to query data from MS SQL Server in C#? I know that it is not good practice to have an SQL query in the code. Is the best way to create a stored procedure and call it from C# with parameters?

        using (var conn = new SqlConnection(connStr))
        using (var command = new SqlCommand("StoredProc", conn) { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();
            command.ExecuteNonQuery();
            conn.Close();
        }
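    Since the question is specifically about passing parameters, here is a hedged extension of the same snippet ("StoredProc" is the placeholder name from the question, and the parameter names and types are made up; needs System.Data and System.Data.SqlClient):

        using (var conn = new SqlConnection(connStr))
        using (var command = new SqlCommand("StoredProc", conn) { CommandType = CommandType.StoredProcedure })
        {
            // Typed parameters instead of concatenated SQL
            command.Parameters.Add("@CustomerId", SqlDbType.Int).Value = 42;
            command.Parameters.Add("@Name", SqlDbType.NVarChar, 50).Value = "Bruno";

            conn.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["Name"]); // assumes the procedure returns a result set
            }
        }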


  • Bullet indents in PowerPoint 2007 compatibility mode via .NET interop issue

    - by L. Shaydariv
    Hello. I've got a really difficult bug and I can't see the fix. This has been driving me insane for a long time. Consider the following scenario:

    1) There is a PowerPoint 2003 presentation. It contains a single slide with a single shape, and the shape contains a text frame including a bulleted list with an arbitrary structure.

    2) There is a requirement to get the bullet indents for every bulleted paragraph using PowerPoint 2007.

    I can satisfy the requirement by opening the presentation in compatibility mode and applying the following VBA script:

        With ActivePresentation
            Dim sl As Slide: Set sl = .Slides(1)
            Dim sh As Shape: Set sh = sl.Shapes(1)
            Dim i As Integer
            For i = 1 To sh.TextFrame.TextRange.Paragraphs.Count
                Dim para As TextRange: Set para = sh.TextFrame.TextRange.Paragraphs(i, 1)
                Debug.Print para.Text; para.indentLevel, sh.TextFrame.Ruler.Levels(para.indentLevel).FirstMargin
            Next i
        End With

    which produces the following output:

        A  1  0
        B  1  0
        C  2  24
        D  3  60
        E  5  132

    Obviously everything is perfect: it shows the proper list item text, list item level, and bullet indent. But I can't see a way to reach the same result using C#. Let's add a COM reference to Microsoft.Office.Interop.PowerPoint 2.9.0.0 (taken from MSPPT.OLB, MS Office 12):

        // presentation = ...("presentation.ppt")... // a PowerPoint 2003 presentation
        Slide slide = presentation.Slides[1];
        Shape shape = slide.Shapes[1];
        for (int i = 1; i <= shape.TextFrame.TextRange.Paragraphs(-1, -1).Count; i++)
        {
            TextRange paragraph = shape.TextFrame.TextRange.Paragraphs(i, 1);
            Console.WriteLine("{0} {1} {2}", paragraph.Text, paragraph.IndentLevel,
                shape.TextFrame.Ruler.Levels[paragraph.IndentLevel].FirstMargin);
        }

    Oh, man... I've got problems here. First, the paragraph.Text value is trimmed at the first '\r' character (yet paragraph.Text[0] really does return the first character O_o). That's ok, I can shut my eyes to this. But, second, I can't understand why the first margins are always zero, no matter which level they belong to. They are always zero in compatibility mode... It's hard to believe... :) So is there any way to fix this, or at least a workaround? I'd welcome any help related to this issue; I can't even find any article about it. :( Probably you have been face to face with it... Or is it just a bug with no fix that must be reported to Microsoft? Thank you.


  • Populating data in multiple cascading dropdown boxes in Access 2007

    - by miCRoSCoPiC_eaRthLinG
    Hello all, I've been assigned the task of designing a temporary customer tracking system in MS Access 2007 (sheeeesh!). The tables and relationships have all been set up successfully, but I'm running into a minor problem while designing the data entry form for one table. Here's a bit of explanation first: the screen contains three dropdown boxes (apart from other fields).

    1st dropdown
    The first dropdown (cboMarket) represents the Market and lets users select between two options: Domestic and International. Since it contains only two items, I didn't bother making a table for it; I added them as pre-defined list items.

    2nd dropdown
    Once the user makes a selection in the first one, the second dropdown (cboLeadCategory) loads up a list of Lead Categories, namely Fairs & Exhibitions, Agents, Press Ads, Online Ads etc. Different sets of lead categories are used for the two markets, hence this box is dependent on the first one. The structure of the bound table for the second combo, named Lead_Categories, is:

        ID                  Autonumber
        Lead_Type           TEXT   <- actually a list that takes up Domestic or International
        Lead_Category_Name  TEXT

    3rd dropdown
    Based on the choice of category in the second one, the third dropdown (cboLeadSource) is supposed to display a pre-defined set of lead sources belonging to that category. The table is named Lead_Sources and its structure is:

        ID             Autonumber
        Lead_Category  NUMBER <- related to ID of the Lead_Categories table
        Lead_Source    TEXT

    When I make a selection in the first dropdown, its AfterUpdate event is called, which instructs the second dropdown to load its contents:

        Private Sub cboMarket_AfterUpdate()
            Me![cboLead_Category].Requery
        End Sub

    The Row Source of the second combo contains this query:

        SELECT Lead_Categories.ID, Lead_Categories.Lead_Category_Name
        FROM Lead_Categories
        WHERE Lead_Categories.Lead_Type=[cboMarket]
        ORDER BY Lead_Categories.Lead_Category_Name;

    The AfterUpdate event of the second combo is:

        Private Sub cboLeadCategory_AfterUpdate()
            Me![cboLeadSource].Requery
        End Sub

    The Row Source of the third combo contains:

        SELECT Leads_Sources.ID, Leads_Sources.Lead_Source
        FROM Leads_Sources
        WHERE [Lead_Sources].[Lead_Category]=[Lead_Categories].[ID]
        ORDER BY Leads_Sources.Lead_Source;

    Problem
    When I select a Market type from cboMarket, the second combo (cboLeadCategory) loads the appropriate categories without a hitch. But when I select a particular category from it, instead of the third combo loading the lead source names, a modal dialog is displayed asking me to Enter a Parameter. When I enter anything into this prompt (valid or invalid data), I get yet another parameter prompt. Why is this happening? Why isn't the third box loading the source names as desired? Can anyone please shed some light on where I am going wrong? Thanks, m^e


  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week, one of the attendees asked a question that I didn't have an immediate answer for: basically, he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this content injected into it.

    Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter(), reading its content, modifying the rendered data as text, and then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment, and it's not really generic – you have to create a custom page class in order to handle the output capture.

    [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

    Enter Response.Filter

    However, ASP.NET includes Response.Filter, which can be used – well – to filter output. Basically, Response.Filter is a stream through which the OutputStream is piped back to the web server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

    A common Example: Dynamic GZip Encoding

    A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: you apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used to detect the GZip capability of the client and compress response output with a single line of code:

        WebUtils.GZipEncodePage();

    which is handled with a few lines of reusable code and a couple of static helper methods:

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter.
        /// IMPORTANT: You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;

            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
            }

            // Allow proxy servers to cache encoded and unencoded versions separately
            Response.AppendHeader("Vary", "Content-Encoding");
        }

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

    GZipStream and DeflateStream are streams that are assigned to Response.Filter, and by doing so they apply the appropriate compression to the active Response.

    Response.Filter content is chunked

    So, implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write, transform it, and write out the transformed content in one pass. And that indeed works for small amounts of content. But the problem is that output is written in small buffer chunks (a little less than 16k, it appears) rather than in a single Write() statement into the stream, which makes perfect sense: ASP.NET streams data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately, this also makes it more difficult to implement filtering routines, since you don't directly get access to all of the response content – which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for.

    So, in order to address this, a slightly different approach is required – one that captures all the Write() buffers passed in into a cached stream and then makes the stream available only when it's complete and ready to be flushed.

    As I was thinking about the implementation, I also thought back to the few instances when I've used Response.Filter implementations. Each time, I had to create a new Stream subclass with my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this: a reusable Stream class that can handle the stream transformations that are common to Response.Filter implementations.

    Creating a semi-generic Response Filter Stream Class

    What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
    The class implements a subclass of Stream and overrides Write() and Flush() to handle the capturing and transformation operations. By exposing events, it's easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:

    CaptureStream, CaptureString
    Capture the output only and provide either a MemoryStream or string with the final page output. Capture is hooked to the Flush() operation of the stream.

    TransformStream, TransformString
    Allow you to transform the complete response output with events that receive a MemoryStream or string respectively; you can modify the output and pass it back as the return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

    TransformWrite, TransformWriteString
    Allow you to transform the Response data as it is written, in its original chunk size, in the stream's Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient, as there's no caching involved, but it can cause problems when searched content splits over multiple chunks.

    Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event:

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformStream += filter_TransformStream;
        Response.Filter = filter;

    and the event handler to do the transformation:

        MemoryStream filter_TransformStream(MemoryStream ms)
        {
            Encoding encoding = HttpContext.Current.Response.ContentEncoding;
            string output = encoding.GetString(ms.ToArray());

            output = FixPaths(output);

            ms = new MemoryStream(output.Length);
            byte[] buffer = encoding.GetBytes(output);
            ms.Write(buffer, 0, buffer.Length);

            return ms;
        }

        private string FixPaths(string output)
        {
            string path = HttpContext.Current.Request.ApplicationPath;

            // override root path wonkiness
            if (path == "/")
                path = "";

            output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/");

            return output;
        }

    The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one, modified, or a brand new one – which is then sent back as the final response.

    The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformString += filter_TransformString;
        Response.Filter = filter;

    and the event handler, calling the same FixPaths method shown above:

        string filter_TransformString(string output)
        {
            return FixPaths(output);
        }

    The events for capturing output, and for capturing and transforming chunks, work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component, and we don't have to create a new stream class or subclass an existing Stream-based class.

    By the way, the example used here is kind of a cool trick: it transforms "~/" expressions inside the final generated HTML output – even in plain HTML, not just server controls – into the appropriate application-relative path, in the same way that ResolveUrl would do.
    So you can write plain old HTML like this:

        <a href="~/default.aspx">Home</a>

    and have it turned into:

        <a href="/myVirtual/default.aspx">Home</a>

    without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

        <img src="<%= ResolveUrl("~/images/home.gif") %>" />

    in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose).

    I can't take credit for this idea: while discussing the Response.Filter issues on Twitter, I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun, BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup, and make sure it doesn't end up killing perf on you. You've been warned :-}.

    How does ResponseFilterStream work?

    The big win of this implementation, IMHO, is that it's a reusable component – for implementation there's no new class, no subclassing; you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in.

    The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole, to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response.

    For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass the Write on immediately to the filter's internal stream. The data is cached, and only when the Flush() method is called to finalize the stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk.

    Here's the implementation of ResponseFilterStream:

        /// <summary>
        /// A semi-generic Stream implementation for Response.Filter with
        /// an event interface for handling Content transformations via
        /// Stream or String.
        /// <remarks>
        /// Use with care for large output as this implementation copies
        /// the output into a memory stream and so increases memory usage.
        /// </remarks>
        /// </summary>
        public class ResponseFilterStream : Stream
        {
            /// <summary>
            /// The original stream
            /// </summary>
            Stream _stream;

            /// <summary>
            /// Current position in the original stream
            /// </summary>
            long _position;

            /// <summary>
            /// Stream that original content is read into
            /// and then passed to TransformStream function
            /// </summary>
            MemoryStream _cacheStream = new MemoryStream(5000);

            /// <summary>
            /// Internal pointer that keeps track of the size
            /// of the cacheStream
            /// </summary>
            int _cachePointer = 0;

            public ResponseFilterStream(Stream responseStream)
            {
                _stream = responseStream;
            }

            /// <summary>
            /// Determines whether the stream is captured
            /// </summary>
            private bool IsCaptured
            {
                get
                {
                    if (CaptureStream != null || CaptureString != null ||
                        TransformStream != null || TransformString != null)
                        return true;
                    return false;
                }
            }

            /// <summary>
            /// Determines whether the Write method is outputting data immediately
            /// or delaying output until Flush() is fired.
            /// </summary>
            private bool IsOutputDelayed
            {
                get
                {
                    if (TransformStream != null || TransformString != null)
                        return true;
                    return false;
                }
            }

            /// <summary>
            /// Event that captures Response output and makes it available
            /// as a MemoryStream instance. Output is captured but won't
            /// affect Response output.
            /// </summary>
            public event Action<MemoryStream> CaptureStream;

            /// <summary>
            /// Event that captures Response output and makes it available
            /// as a string. Output is captured but won't affect Response output.
            /// </summary>
            public event Action<string> CaptureString;

            /// <summary>
            /// Event that allows you to transform the stream as each chunk of
            /// the output is written in the Write() operation of the stream.
            /// This means that it's possible/likely that the input
            /// buffer will not contain the full response output but only
            /// one of potentially many chunks.
            ///
            /// This event is called as part of the filter stream's Write()
            /// operation.
            /// </summary>
            public event Func<byte[], byte[]> TransformWrite;

            /// <summary>
            /// Event that allows you to transform the response stream as
            /// each chunk of byte[] output is written during the stream's write
            /// operation. This means it's possible/likely that the string
            /// passed to the handler only contains a portion of the full
            /// output. Typical buffer chunks are around 16k a piece.
            ///
            /// This event is called as part of the stream's Write operation.
            /// </summary>
            public event Func<string, string> TransformWriteString;

            /// <summary>
            /// This event allows capturing and transformation of the entire
            /// output stream by caching all write operations and delaying final
            /// response output until Flush() is called on the stream.
            /// </summary>
            public event Func<MemoryStream, MemoryStream> TransformStream;

            /// <summary>
            /// Event that can be hooked up to handle Response.Filter
            /// Transformation. Passed a string that you can modify and
            /// return back as a return value. The modified content
            /// will become the final output.
            /// </summary>
            public event Func<string, string> TransformString;

            protected virtual void OnCaptureStream(MemoryStream ms)
            {
                if (CaptureStream != null)
                    CaptureStream(ms);
            }

            private void OnCaptureStringInternal(MemoryStream ms)
            {
                if (CaptureString != null)
                {
                    string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                    OnCaptureString(content);
                }
            }

            protected virtual void OnCaptureString(string output)
            {
                if (CaptureString != null)
                    CaptureString(output);
            }

            protected virtual byte[] OnTransformWrite(byte[] buffer)
            {
                if (TransformWrite != null)
                    return TransformWrite(buffer);
                return buffer;
            }

            private byte[] OnTransformWriteStringInternal(byte[] buffer)
            {
                Encoding encoding = HttpContext.Current.Response.ContentEncoding;
                string output = OnTransformWriteString(encoding.GetString(buffer));
                return encoding.GetBytes(output);
            }

            private string OnTransformWriteString(string value)
            {
                if (TransformWriteString != null)
                    return TransformWriteString(value);
                return value;
            }

            protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
            {
                if (TransformStream != null)
                    return TransformStream(ms);
                return ms;
            }

            /// <summary>
            /// Allows transforming of strings.
            ///
            /// Note this handler is internal and not meant to be overridden,
            /// as the TransformString event has to be hooked up in order
            /// for this handler to even fire, to avoid the overhead of string
            /// conversion on every pass through.
            /// </summary>
            private string OnTransformCompleteString(string responseText)
            {
                if (TransformString != null)
                    TransformString(responseText);
                return responseText;
            }

            /// <summary>
            /// Wrapper method for OnTransformString that handles
            /// stream to string and vice versa conversions
            /// </summary>
            internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
            {
                if (TransformString == null)
                    return ms;

                //string content = ms.GetAsString();
                string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                content = TransformString(content);
                byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);
                ms = new MemoryStream();
                ms.Write(buffer, 0, buffer.Length);
                //ms.WriteString(content);

                return ms;
            }

            public override bool CanRead
            {
                get { return true; }
            }

            public override bool CanSeek
            {
                get { return true; }
            }

            public override bool CanWrite
            {
                get { return true; }
            }

            public override long Length
            {
                get { return 0; }
            }

            public override long Position
            {
                get { return _position; }
                set { _position = value; }
            }

            public override long Seek(long offset, System.IO.SeekOrigin direction)
            {
                return _stream.Seek(offset, direction);
            }

            public override void SetLength(long length)
            {
                _stream.SetLength(length);
            }

            public override void Close()
            {
                _stream.Close();
            }

            /// <summary>
            /// Override flush by writing out the cached stream data
            /// </summary>
            public override void Flush()
            {
                if (IsCaptured && _cacheStream.Length > 0)
                {
                    // Check for transform implementations
                    _cacheStream = OnTransformCompleteStream(_cacheStream);
                    _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

                    OnCaptureStream(_cacheStream);
                    OnCaptureStringInternal(_cacheStream);

                    // write the stream back out if output was delayed
                    if (IsOutputDelayed)
                        _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

                    // Clear the cache once we've written it out
                    _cacheStream.SetLength(0);
                }

                // default flush behavior
                _stream.Flush();
            }

            public override int Read(byte[] buffer, int offset, int count)
            {
                return _stream.Read(buffer, offset, count);
            }

            /// <summary>
            /// Overridden to capture output written by ASP.NET and store it
            /// in a cached stream that is written out later when Flush()
            /// is called.
            /// </summary>
            public override void Write(byte[] buffer, int offset, int count)
            {
                if (IsCaptured)
                {
                    // copy to holding buffer only - we'll write out later
                    _cacheStream.Write(buffer, 0, count);
                    _cachePointer += count;
                }

                // just transform this buffer
                if (TransformWrite != null)
                    buffer = OnTransformWrite(buffer);
                if (TransformWriteString != null)
                    buffer = OnTransformWriteStringInternal(buffer);

                if (!IsOutputDelayed)
                    _stream.Write(buffer, offset, buffer.Length);
            }
        }

    The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane pass-through stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

    A few Things to consider

    Performance

    Response.Filter is not great for performance in general, as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit, since it basically duplicates the response output into the cached memory stream – which is necessary, since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

    Response.Filter works everywhere

    A few questions came up in comments and discussion about capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful to decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for, like the page extension or, maybe more effectively, the Response.ContentType.

    Response.Filter Chaining

    Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
    So the following actually works, to both compress the output and apply the transformed content:

        WebUtils.GZipEncodePage();

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformString += filter_TransformString;
        Response.Filter = filter;

    However, the following does not work, resulting in invalid content encoding errors:

        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.TransformString += filter_TransformString;
        Response.Filter = filter;

        WebUtils.GZipEncodePage();

    In other words, multiple Response filters can work together, but it depends entirely on the implementation whether and in which order they can be chained. In this case, the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified. But attaching the compression first works fine, as unintuitive as that may seem.

    Resources

    Download example code
    Capture Output from ASP.NET Pages

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET


  • Please help me install Ubuntu 12.04.1. Can it run MS Office Professional Plus 2007?

    - by ramandeep
    I want to install Ubuntu with MS Office Professional Plus 2007 running. I want to format my hard drive with a slow (full) format, as happens when installing XP: a complete wipe with no traces left. For that, which will be easier:

    1) formatting the entire hard drive, OR
    2) formatting only the partition on which I want to install Ubuntu?

    I need to be sure that I can run MS Office Professional Plus 2007 on Ubuntu. I have ubuntu-12.04.1-desktop-i386.iso burned on a CD and have MS Office Prof. Plus 2007 in this form - http://i.imgur.com/qAhP8.jpg I also want to know: when I update Ubuntu, how many MB will the download be?


  • Difference between "traceroute" and "traceroute -U"

    - by AndiDog
    The manpage of traceroute says that the "-U" parameter (UDP probing) is the default, but I'm getting different results every time. With "-U":

        traceroute -U www.univ-paris1.fr
        traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets
        [...]
        13 rap-vl165-te3-2-jussieu-rtr-021.noc.renater.fr (193.51.181.101) 59.445 ms 56.924 ms 56.651 ms
        [...]
        18 * paris1web.univ-paris1.fr (193.55.96.121) 23.797 ms 23.603 ms

    but plain traceroute gives me a different result (it never reaches the final node); it either ends with "!X" or just exits after the maximum of 30 hops:

        traceroute www.univ-paris1.fr
        traceroute to www.univ-paris1.fr (193.55.96.121), 30 hops max, 60 byte packets
        [...]
        11 te1-1-paris1-rtr-021.noc.renater.fr (193.51.189.38) 28.147 ms 28.250 ms 28.538 ms
        [... non-responding nodes ...]
        28 site-1.03-jussieu.rap.prd.fr (195.221.126.58) 85.941 ms !X * *

    Note: I have tried this many times and always get the same results; the path within my local network is always the same. So what does the "-U" parameter actually change here? I'm especially interested in what the reason for "!X" could be (communication administratively prohibited).

    EDIT: If it helps, paris-traceroute gives me the following for the last hop:

        14 P(1, 6) site-1.03-jussieu.rap.prd.fr (195.221.126.58) 34.938 ms !5 !T2

    which means that node discards the packet with TTL=2 and returns an unknown message (not "destination unreachable" or the like).


  • Need help with setting up MS-SQL on EC2.

    - by Hareem Haque
    I have a large MS-SQL database that I need to move to the AWS cloud. The issues are how to persist my SQL data and how to set up an MS-SQL cluster using a Windows AMI. The real problem is that for replication I need to use the private IPs of the instances, but these IPs are dynamic and change on every server launch. Any ideas on how I can get around this problem? I really appreciate your help. Best Regards, Hareem Haque


  • What are some simple alternatives to MS Paint?

    - by hkBattousai
    MS Paint is good, but some of its tools lack quality. I want to try an alternative program, so please give me a list of alternatives. I'm looking for software as simple as MS Paint; please don't suggest complicated software like GIMP or Photoshop. I have already tried Paint.NET, but it depends on the Microsoft .NET Framework, which made it problematic to install in the first place.


  • Using Outlook Web Access; MS Office attachments are compressed

    - by ColoBob
    I have MS Office 2007 installed. When I am using Outlook Web Access and receive MS Office 2007 attachments from some colleagues, OWA requires me to save the file rather than letting it open with a double-click. When I use "Save Target As..." it gives only the option to save as "Compressed (Zip) File", even though the filename appears as "filename.xlsx". Then, when I open the folder I saved to, the file does not appear anywhere. Ideas?


  • Losing jQuery functionality after postback

    - by David Lozzi
    I have seen a TON of people reporting this issue online, but no actual solutions. I'm not using AJAX or UpdatePanels, just a dropdown that posts back on selected index change. My HTML is:

        <div id="myList">
        <table id="ctl00_PlaceHolderMain_dlFields" cellspacing="0" border="0" style="border-collapse:collapse;">
          <tr><td>
            <tr>
              <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                <span id="ctl00_PlaceHolderMain_dlFields_ctl00_lblDestinationField">Body</span>
              </td>
              <td class="ms-formbody" style="width: 485px">
                <input name="ctl00$PlaceHolderMain$dlFields$ctl00$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl00_txtSource" class="ms-input" style="width:230px" />
                <select name="ctl00$PlaceHolderMain$dlFields$ctl00$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl00_ddlSourceFields" class="ms-input">
                  <option value="Some Field Name 1">Some Field Name 1</option>
                  <option value="Some Field Name 2">Some Field Name 2</option>
                  <option value="Some Field Name 3">Some Field Name 3</option>
                  <option value="Some Field Name 4">Some Field Name 4</option>
                </select>
                <a href="#" id="appendSelect">append</a>
              </td>
            </tr>
          </td></tr>
          <tr><td>
            <tr>
              <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                <span id="ctl00_PlaceHolderMain_dlFields_ctl01_lblDestinationField">Expires</span>
              </td>
              <td class="ms-formbody" style="width: 485px">
                <input name="ctl00$PlaceHolderMain$dlFields$ctl01$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl01_txtSource" class="ms-input" style="width:230px" />
                <select name="ctl00$PlaceHolderMain$dlFields$ctl01$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl01_ddlSourceFields" class="ms-input">
                  <option value="Some Field Name 1">Some Field Name 1</option>
                  <option value="Some Field Name 2">Some Field Name 2</option>
                  <option value="Some Field Name 3">Some Field Name 3</option>
                  <option value="Some Field Name 4">Some Field Name 4</option>
                </select>
                <a href="#" id="appendSelect">append</a>
              </td>
            </tr>
          </td></tr>
          <tr><td>
            <tr>
              <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                <span id="ctl00_PlaceHolderMain_dlFields_ctl02_lblDestinationField">Title</span>
              </td>
              <td class="ms-formbody" style="width: 485px">
                <input name="ctl00$PlaceHolderMain$dlFields$ctl02$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl02_txtSource" class="ms-input" style="width:230px" />
                <select name="ctl00$PlaceHolderMain$dlFields$ctl02$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl02_ddlSourceFields" class="ms-input">
                  <option value="Some Field Name 1">Some Field Name 1</option>
                  <option value="Some Field Name 2">Some Field Name 2</option>
                  <option value="Some Field Name 3">Some Field Name 3</option>
                  <option value="Some Field Name 4">Some Field Name 4</option>
                </select>
                <a href="#" id="appendSelect">append</a>
              </td>
            </tr>
          </td></tr>
        </table></div>

    The above div tag is static, and the table is generated from a DataList object. On postback the DataList reloads using a new dataset, for example:

        <div id="myList">
        <table id="ctl00_PlaceHolderMain_dlFields" cellspacing="0" border="0" style="border-collapse:collapse;">
          <tr><td>
            <tr>
              <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                <span id="ctl00_PlaceHolderMain_dlFields_ctl00_lblDestinationField">Notes</span>
              </td>
              <td class="ms-formbody" style="width: 485px">
                <input name="ctl00$PlaceHolderMain$dlFields$ctl00$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl00_txtSource" class="ms-input" style="width:230px" />
                <select name="ctl00$PlaceHolderMain$dlFields$ctl00$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl00_ddlSourceFields" class="ms-input">
                  <option value="Some Field Name 1">Some Field Name 1</option>
                  <option value="Some Field Name 2">Some Field Name 2</option>
                  <option value="Some Field Name 3">Some Field Name 3</option>
                  <option value="Some Field Name 4">Some Field Name 4</option>
                </select>
                <a href="#" id="appendSelect">append</a>
              </td>
            </tr>
          </td></tr>
          <tr><td>
            <tr>
              <td class="ms-formlabel" style="width: 175px; padding-left: 10px">
                <span id="ctl00_PlaceHolderMain_dlFields_ctl01_lblDestinationField">URL</span>
              </td>
              <td class="ms-formbody" style="width: 485px">
                <input name="ctl00$PlaceHolderMain$dlFields$ctl01$txtSource" type="text" id="ctl00_PlaceHolderMain_dlFields_ctl01_txtSource" class="ms-input" style="width:230px" />
                <select name="ctl00$PlaceHolderMain$dlFields$ctl01$ddlSourceFields" id="ctl00_PlaceHolderMain_dlFields_ctl01_ddlSourceFields" class="ms-input">
                  <option value="Some Field Name 1">Some Field Name 1</option>
                  <option value="Some Field Name 2">Some Field Name 2</option>
                  <option value="Some Field Name 3">Some Field Name 3</option>
                  <option value="Some Field Name 4">Some Field Name 4</option>
                </select>
                <a href="#" id="appendSelect">append</a>
              </td>
            </tr>
          </td></tr>
        </table></div>

    After the postback, once the DataList has reloaded, my jQuery doesn't work anymore. No errors, nothing. I don't see any actual changes in the objects in the HTML that should cause this. How do I fix this? Any workarounds or band-aids I can apply? My jQuery is below:

        <script type='text/javascript'>
            $(document).ready(function () {
                $('#myList a').live("click", function () {
                    var $selectValue = $(this).siblings('select').val();
                    var $thatInput = $(this).siblings('input');
                    var val = $thatInput.val() + ' |[' + $selectValue + ']|';
                    $thatInput.val(jQuery.trim(val));
                });
            });
        </script>

    Thanks!!


  • Is it possible to make an MS-SQL scalar function do this?

    - by Hokken
    I have a 3rd party application that can call an MS-SQL scalar function to return some summary information from a table. I was able to return the summary values with a table-valued function, but the application won't utilize table-valued functions, so I'm kind of stuck. Here's a sample from the table:

        trackingNumber  projTaskAward  expenditureType  amount
        1122            12345-67-89    Supplies         100
        1122            12345-67-89    Supplies         150
        1122            12345-67-89    Supplies         250
        1122            12345-67-89    Misc             50
        1122            12345-67-89    Misc             100
        1122            98765-43-21    General          200
        1122            98765-43-21    Conference       500
        1122            98765-43-21    Misc             300
        1122            98765-43-21    Misc             100
        1122            98765-43-21    Misc             100

    I want to summarize the amounts by projTaskAward and expenditureType, based on the trackingNumber. Here is the output I'm looking for:

        Proj/Task/Award  Expenditure Type  Total
        12345-67-89      Supplies          500
        12345-67-89      Misc              150
        98765-43-21      General           200
        98765-43-21      Conference        500
        98765-43-21      Misc              500

    I'd appreciate any help anyone can give in steering me in the right direction.
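    Not an answer to the scalar-function packaging itself, but for reference, a hedged sketch of the aggregation in question done from C#, with the GROUP BY inline (the Expenditures table name and the connection string are assumptions; needs System.Data.SqlClient):

        using System;
        using System.Data.SqlClient;

        class SummaryDemo
        {
            static void Main()
            {
                // GROUP BY produces one row per projTaskAward/expenditureType pair.
                const string sql =
                    "SELECT projTaskAward, expenditureType, SUM(amount) AS total " +
                    "FROM Expenditures " +                 // assumed table name
                    "WHERE trackingNumber = @tracking " +
                    "GROUP BY projTaskAward, expenditureType";

                using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@tracking", 1122);
                    conn.Open();
                    using (SqlDataReader r = cmd.ExecuteReader())
                        while (r.Read())
                            Console.WriteLine("{0} / {1}: {2}",
                                r["projTaskAward"], r["expenditureType"], r["total"]);
                }
            }
        }

    A scalar function can only return a single value per call, which is why the table-valued version was the natural fit; any scalar workaround would have to be called once per summary row.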


  • Why is it so hard to find anything on the MS site?

    - by Amir Rezaei
    I have always had this question in my mind, and I would really be happy to get an explanation. Is it only me, or do you also feel that it's hard to find anything on the MS site? For example, every time I need to download the .NET Framework I have to Google it. You never know what you can download; there is no category for downloads. You are simply left with a search field. You never know if you downloaded the latest version of the file. The tragic truth is that you have to rely on their competitor, Google, to find anything on their site. I know that they are a big company, but is it really that hard to have an organized way to publish information?


  • Wanting to learn .NET, can I benefit from the MS discounts?

    - by Chris
    I quit high school a couple of years ago, and now I'm studying to get my diploma through a special course the EU created for people in my situation. This course is basically identical to normal high school, the only difference being fewer hours, since many of us have jobs (not me). I would like to learn Windows development and .NET, and I've seen that Microsoft offers students some great discounts and even some free tools such as Visual Studio and Windows 7. I'm learning Java on Ubuntu at the moment, but I'd like to move to .NET; I can't afford Windows or other MS-related tools since I don't have a job or any real income. Can someone in my situation benefit from their offers?


  • How can I associate .doc files to MS Word 2010 using the same .desktop file as launcher?

    - by nastys
    I'm trying to associate .doc and .docx files with MS Word 2010 using the same .desktop file as the Unity dash and launcher, so I can use the Word icon in the launcher. I tried:

        [Desktop Entry]
        Name=Microsoft Word 2010
        Exec=env WINEPREFIX="/home/nastys/.mso2010" wine "C:/Program Files/Microsoft Office/Office14/WINWORD.exe" %f
        Type=Application
        StartupNotify=true
        Comment=Create and edit professional-looking documents such as letters, papers, reports, and booklets by using Microsoft Word.
        Icon=29F5_WINWORD.0
        StartupWMClass=WINWORD.EXE
        MimeType=application/msword; application/vnd.openxmlformats-officedocument.wordprocessingml.document;

    Using this .desktop file I can launch Word with its icon in the Unity launcher, but if I associate .doc files with it, Word launches without opening the .doc file. If I associate .doc files with any .desktop file generated by Wine, it launches Word, but with the Wine icon.
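    An untested variation that might be worth a try (an assumption on my part, not something from the original post): let wine start resolve the Unix path and use the file association registered inside the prefix, instead of passing %f straight to WINWORD.EXE:

        [Desktop Entry]
        Name=Microsoft Word 2010 (open document)
        Exec=env WINEPREFIX="/home/nastys/.mso2010" wine start /unix "%f"
        Type=Application
        Icon=29F5_WINWORD.0
        StartupWMClass=WINWORD.EXE
        MimeType=application/msword; application/vnd.openxmlformats-officedocument.wordprocessingml.document;

    wine start /unix hands the native path to Wine's own association logic, so this only helps if the Office installer registered .doc/.docx inside that prefix, and this entry needs a file argument (launching it bare from the dash would do nothing).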


  • WpfToolkit DataGrid does not work in Windows Phone 7

    - by Igor Zevaka
    I am trying to use the WpfToolkit DataGrid in a Windows Phone 7 project (Silverlight 4) and it's not working. Here is the XAML:

    <UserControl x:Class="SilverlightControls.Grid"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                 mc:Ignorable="d"
                 xmlns:wtk="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"
                 d:DesignHeight="480" d:DesignWidth="480">
        <Grid x:Name="LayoutRoot" Background="#FF1F1F1F" Width="960">
            <Grid x:Name="TitleGrid">
                <TextBlock Text="{Binding Title}" Style="{StaticResource PhoneTextPageTitle2Style}"/>
            </Grid>
            <wtk:DataGrid>
            </wtk:DataGrid>
        </Grid>
    </UserControl>

    The project compiles fine but crashes at runtime trying to load this control. The best clue I have so far is from the Visual Studio designer: once I add wtk:DataGrid to the control, the visual designer does not load, and below is the exception it displays. Could it be that WpfToolkit relies on PresentationFramework.dll, which is not available in SL4?

    System.Reflection.Adds.UnresolvedAssemblyException: Type universe cannot resolve assembly: PresentationFramework, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
       at System.Reflection.Adds.AssemblyProxy.GetResolvedAssembly()
       at System.Reflection.Adds.AssemblyProxy.get_FullName()
       at Microsoft.Windows.Design.Metadata.ReflectionMetadataContext.PrepareAttributes(Reflectable`1 reflectableAssembly)
       at Microsoft.Windows.Design.Metadata.ReflectionMetadataContext.PrepareAttributes(Reflectable`1 reflectableType)
       at MS.Internal.Metadata.ClrType.GetAttributes[T](ReflectionMetadataContext context, IReflectable`1 member, ITypeMetadata attributeType, Boolean merge, AttributeMergeCache& cache)
       at MS.Internal.Metadata.ClrMember`1.GetLocalAttributes(ITypeMetadata attributeType)
       at MS.Internal.Design.Metadata.Xaml.XamlType.GetSpecialProperty(Int32 idx, PropertyIdentifier pid)
       at MS.Internal.Design.Metadata.Xaml.XamlType.get_ContentProperty()
       at Microsoft.Windows.Design.Metadata.Xaml.XamlExtensionImplementations.GetContentProperty(ITypeMetadata sourceType)
       at Microsoft.Windows.Design.Metadata.Xaml.XamlExtensions.GetContentProperty(ITypeMetadata source)
       at MS.Internal.Design.Metadata.ReflectionTypeNode.get_ContentProperty()
       at MS.Internal.Design.Markup.XmlElement.CalcChildWhitespaceImportant(XamlElement element)
       at MS.Internal.Design.Markup.XmlElement.ConvertChildrenToXaml(XamlElement result, PrefixScope scope, IParseContext context, IMarkupSourceProvider provider, Boolean childrenAsString)
       at MS.Internal.Design.Markup.XmlElement.ConvertToXaml(XamlElement parent, PrefixScope parentScope, IParseContext context, IMarkupSourceProvider provider)
       at MS.Internal.Design.Markup.XmlElement.ConvertChildrenToXaml(XamlElement result, PrefixScope scope, IParseContext context, IMarkupSourceProvider provider, Boolean childrenAsString)
       at MS.Internal.Design.Markup.XmlElement.ConvertToXaml(XamlElement parent, PrefixScope parentScope, IParseContext context, IMarkupSourceProvider provider)
       at MS.Internal.Design.DocumentModel.DocumentTrees.Markup.XamlSourceDocument.ParseElementFromSkeleton(XamlParseContext context, SkeletonNode node, XamlElement parent, Boolean fullElement)
       at MS.Internal.Design.DocumentModel.DocumentTrees.Markup.XamlSourceDocument.UpdateSkeleton(IDamageListener listener)
       at Microsoft.Windows.Design.DocumentModel.Trees.MarkupDocumentTreeManager.Update()
       at Microsoft.Windows.Design.DocumentModel.MarkupProducer.Update()
       at Microsoft.Windows.Design.DocumentModel.MarkupProducer.HandleMessage(DocumentTreeCoordinator sender, MessageKey key, MessageArguments args)
       at Microsoft.Windows.Design.DocumentModel.MarkupProducer.Microsoft.Windows.Design.DocumentModel.IDocumentTreeConsumer.HandleMessage(DocumentTreeCoordinator sender, MessageKey key, MessageArguments args)
       at Microsoft.Windows.Design.DocumentModel.DocumentTreeCoordinator.SendMessage[T](MessageKey`1 key, T args, Boolean isPrivateMessage)
       at Microsoft.Windows.Design.DocumentModel.DocumentTreeCoordinator.QueuedMessage`1.Microsoft.Windows.Design.DocumentModel.IQueuedMessage.Invoke()
       at Microsoft.Windows.Design.DocumentModel.DocumentTreeCoordinator.ProcessQueuedMessages(Object state)
       at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
       at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler)

    Read the article

  • Installing VSTO 4.0 Causes VSTO 3.0 Addin to quit working

    - by Jacob Adams
    I just installed Visual Studio 2010 yesterday. As part of that I installed VSTO 4.0. Now when I run any Office application, my VSTO 3.0 add-ins fail to load. The error in the event log is:

    Customization URI: file:///H:/PathToMyAddin/MyAddin.vsto
    Exception: Customization does not have the permissions required to create an application domain.

    ***** Exception Text *****
    Microsoft.VisualStudio.Tools.Applications.Runtime.CannotCreateCustomizationDomainException: Customization does not have the permissions required to create an application domain. ---> System.Security.SecurityException: Customized functionality in this application will not work because the administrator has listed file:///H:/PathToMyAddin/MyAddin.vsto as untrusted. Contact your administrator for further assistance.
       at Microsoft.VisualStudio.Tools.Office.Runtime.RuntimeUtilities.VerifySolutionUri(Uri uri)
       at Microsoft.VisualStudio.Tools.Office.Runtime.DomainCreator.CreateCustomizationDomainInternal(String solutionLocation, String manifestName, String documentName, Boolean showUIDuringDeployment, IntPtr hostServiceProvider, IntPtr& executor)

    The Zone of the assembly that failed was: MyComputer

    It seems like maybe this is due to it trying to load different versions of .NET in the same process/AppDomain. However, the error indicates it's some sort of permissions issue.
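
    For what it's worth, the SecurityException points at trust for the H: location rather than at the .NET version mix. One possible direction (a sketch under assumptions, not a confirmed fix; the group name MyAddinTrust is made up, and the parent group label 1.2 may differ on your machine) is granting full trust to the add-in's location with caspol, run from the CLR 2.0 directory that VSTO 3.0 / .NET 3.5 uses:

    REM Run from %WINDIR%\Microsoft.NET\Framework\v2.0.50727 so it targets the 2.0/3.5 CLR
    caspol -m -ag 1.2 -url "file:///H:/PathToMyAddin/*" FullTrust -n "MyAddinTrust"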

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses TransactionScope to produce a distributed transaction. Everything is correct. We also have another database on server #3. We had some problems with this server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service on server #1 to communicate with the database on this server. Now, after the first select statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception is thrown on every web service request that uses a transaction scope. Server #2 and server #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers. Full stack trace:

       at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
       at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
       at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
       at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
       at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
       at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
       at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
       at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
       at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
       at System.Data.SqlClient.SqlConnection.Open()
       at NHibernate.Connection.DriverConnectionProvider.GetConnection()
       at NHibernate.Impl.SessionFactoryImpl.OpenConnection()

    I searched for this message but didn't find any appropriate solution. What settings should I check, and what exactly should I do to fix it?
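
    For reference, a minimal C# sketch (connection strings assumed, not from the original post) of the pattern that exercises the failing step: the moment a second resource enlists, System.Transactions promotes the lightweight transaction to a distributed MSDTC transaction, and that promotion is what typically throws when MSDTC is misconfigured on one machine:

    using System.Data.SqlClient;
    using System.Transactions;

    class PromotionDemo
    {
        static void Main()
        {
            using (var scope = new TransactionScope())
            {
                using (var conn2 = new SqlConnection("Server=server2;Database=Db2;Integrated Security=SSPI"))
                using (var conn3 = new SqlConnection("Server=server3;Database=Db3;Integrated Security=SSPI"))
                {
                    conn2.Open(); // enlists in the local (LTM) transaction
                    conn3.Open(); // second resource manager -> promotion to MSDTC happens here
                    new SqlCommand("SELECT 1", conn2).ExecuteNonQuery();
                    new SqlCommand("SELECT 1", conn3).ExecuteNonQuery();
                }
                scope.Complete();
            }
        }
    }

    If a repro like this fails against server #3, the usual suspects are the MSDTC security settings on the rebuilt machine (Network DTC Access, inbound/outbound transactions allowed) and its firewall rules.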

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >