Search Results

Search found 6510 results on 261 pages for 'umesha ms'.

Page 127 of 261

  • How can I work around SQL Server inline table-valued function execution plan variation based on parameters?

    - by Ovidiu Pacurar
     Here is the situation: I have a table-valued function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with a date column smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date comes from a scalar-valued function (returning the end of day, in my case) the execution plan is altered and the query goes from 1 second to 1 minute of execution time.

     A proof-of-concept table - 1K products, 2M rows:

         CREATE TABLE [dbo].[POC](
             [Date] [datetime] NOT NULL,
             [idProduct] [int] NOT NULL,
             [Quantity] [int] NOT NULL
         ) ON [PRIMARY]

     The inline table-valued function:

         CREATE FUNCTION tdf (@p_date datetime)
         RETURNS TABLE
         AS
         RETURN
         (
             SELECT idProduct, SUM(Quantity) AS TotalQuantity, MAX(Date) AS LastDate
             FROM POC
             WHERE (Date < @p_date)
             GROUP BY idProduct
         )

     The scalar-valued function:

         CREATE FUNCTION [dbo].[EndOfDay] (@date datetime)
         RETURNS datetime
         AS
         BEGIN
             DECLARE @res datetime
             SET @res = dateadd(second, -1, dateadd(day, 1,
                        dateadd(ms, -datepart(ms, @date),
                        dateadd(ss, -datepart(ss, @date),
                        dateadd(mi, -datepart(mi, @date),
                        dateadd(hh, -datepart(hh, @date), @date))))))
             RETURN @res
         END

     Query 1 - working great:

         SELECT * FROM [dbo].[tdf] (getdate())

     The end of the execution plan:

         Stream Aggregate (Cost 13%) <-- Clustered Index Scan (Cost 86%)

     Query 2 - not so great:

         SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate()))

     The end of the execution plan:

         Stream Aggregate (Cost 4%) <-- Filter (Cost 12%) <-- Clustered Index Scan (Cost 86%)


  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
     We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping and re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data.

     I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle), where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want.

     The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
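     Failing a tool, rolling your own generator is only a few lines of ADO.NET. The sketch below is illustrative, not from the question: it naively quotes every value as a string, so dates, decimals, and binary columns would need proper formatting before real use, and the connection string and table name are placeholders.

         // Minimal INSERT-statement generator for one table (C#, ADO.NET).
         using System;
         using System.Data.SqlClient;
         using System.Text;

         static class InsertScripter
         {
             public static string ScriptTable(string connectionString, string tableName)
             {
                 var sb = new StringBuilder();
                 using (var conn = new SqlConnection(connectionString))
                 using (var cmd = new SqlCommand("SELECT * FROM " + tableName, conn))
                 {
                     conn.Open();
                     using (var reader = cmd.ExecuteReader())
                     {
                         while (reader.Read())
                         {
                             var values = new string[reader.FieldCount];
                             for (int i = 0; i < reader.FieldCount; i++)
                             {
                                 object v = reader.GetValue(i);
                                 values[i] = v == DBNull.Value
                                     ? "NULL"
                                     : "'" + v.ToString().Replace("'", "''") + "'"; // naive quoting
                             }
                             sb.AppendLine("INSERT INTO " + tableName +
                                           " VALUES (" + string.Join(", ", values) + ");");
                         }
                     }
                 }
                 return sb.ToString();
             }
         }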


  • Codeplex/Sourceforge for internal use

    - by Josh
     I'm looking for a free/open source collaborative project manager that can be deployed internally in my workplace and that would act similar to Codeplex or Sourceforge. Does anyone know of something like this, and if so, do you have experience with it?

     Requirements:

       • Open source or free
       • Locally deployable
       • Has the same types of features found in Sourceforge/Codeplex:
         • Issue/feature tracking
         • Community interaction (i.e. voting, roles, etc.)
         • SCM integration (optional)
         • .NET/Windows friendly (optional)

     Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community, they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place.

     UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are an MS shop from top to bottom. How practical do you think it is going to be to try and use these in an MS .NET environment? Would that be like trying to shove a square peg through a round hole?


  • JavaScript: Achieving precise animation end values?

    - by bobthabuilda
     I'm currently trying to write my own JavaScript library. I'm in the middle of writing an animation callback, but I'm having trouble getting precise end values, especially when animation durations are short. Right now, I'm only targeting positional animation (left, top, right, bottom). When my animations complete, they end up having an error margin of ~5px on faster animations, and ~0.5px on animations of 1000 ms or longer. Here's the bulk of the callback, with notes following.

         var current = parseFloat( this[0].style[prop] || 0 )
             // If our target value is greater than the current
             , gt = !!( value > current )
             , delta = ( Math.abs(current - value) / (duration / 13) ) * (gt ? 1 : -1)
             , elem = this[0]
             , anim = setInterval( function(){
                   elem.style[prop] = ( current + delta ) + 'px';
                   current = parseFloat( elem.style[prop] );
                   if ( gt && current >= value || !gt && current <= value )
                       clearInterval( anim );
               }, 13 );

     Notes:

       • this[0] and elem both reference the target DOM element.
       • prop references the property to animate: left, top, bottom, right, etc.
       • current is the current value of the DOM element's property.
       • value is the desired value to animate to.
       • duration is the specified duration (in ms) that the animation should last.
       • 13 is the setInterval delay, which should be roughly the practical minimum across browsers.
       • gt is true if value exceeds the initial current, else it is false.

     How can I resolve the error margin?


  • Concurrency and Coordination Runtime (CCR) Learning Resources

    - by Harry
     I have recently been learning the ins and outs of the Concurrency and Coordination Runtime (CCR). Finding good learning resources for this relatively new technology has been quite difficult. (A quick Google search brings up "Creedence Clearwater Revival" as the top result!)

     Some of the resources I have found:

       • Free e-book chapter from WROX on the Robotics Developer Studio
       • Good article/post on InfoQ
       • Robotics member blog
       • Very active MSDN CCR forum - got plenty of help from here!
       • Great MSDN Magazine article by Jeffrey Richter
       • Official CCR User Guide - didn't find this very helpful
       • Great blog series on the CCR
       • iodyner CCR-related blog - Update: moved to here
       • Eight or so videos on Channel9.msdn.com
       • CCR Patterns page on MS Robotics Studio - I haven't read this yet
       • 4 x CCR questions on Stack Overflow - most of the questions have been mine! LOL
       • The CCR and DSS toolkit has now been released to MSDN members

     Do you have any good learning resources for the CCR? I really hope that Microsoft will publish more material; so far it has been too Robotics-specific. I believe that MS needs to acknowledge that most people are using the CCR in isolation from the DSS and Robotics Studio.

     Update: The Mix 2010 conference had a presentation by MySpace about how they have used the CCR framework in their middle tier. They also open-sourced the code base (MySpace DataRelay, Mix video presentation).


  • Java: library that does nice formatted log outputs

    - by WizardOfOdds
     I cannot find a library I once used that formatted log output statements in a much nicer way than what is usually seen. One of the features I remember is that it could 'offset' the log message depending on the 'nestedness' of where the log statement was occurring. That is, instead of this:

         DEBUG | DefaultBeanDefinitionDocumentReader.java| 86 | Loading bean definitions
         DEBUG | AbstractAutowireCapableBeanFactory.java| 411 | Finished creating instance of bean 'MS-SQL'
         DEBUG | DefaultSingletonBeanRegistry.java| 213 | Creating shared instance of singleton bean 'MySQL'
         DEBUG | AutowireCapableBeanFactory.java| 383 | Creating instance of bean 'MySQL'
         DEBUG | AutowireCapableBeanFactory.java| 459 | Eagerly caching bean 'MySQL' to allow for resolving potential circular references
         DEBUG | AutowireCapableBeanFactory.java| 789 | Another debug message

     it would show something like this:

         DEBUG | DefaultBeanDefinitionDocumentReader.java| 86  | Loading bean definitions
         DEBUG | AbstractAutowireCapableBeanFactory.java | 411 | Finished creating instance of bean 'MS-SQL'
         DEBUG | DefaultSingletonBeanRegistry.java       | 213 | Creating shared instance of singleton bean 'MySQL'
         DEBUG | AutowireCapableBeanFactory.java         | 383 | Creating instance of bean 'MySQL'
         DEBUG | AutowireCapableBeanFactory.java         | 459 |   |__ Eagerly caching bean 'MySQL' to allow for resolving potential circular references
         DEBUG | AutowireCapableBeanFactory.java         | 789 |   |__ Another debug message

     This is an example I just made up (VeryLongCamelCaseClassNamesNotMine). But I remember seeing such cleanly formatted log output, and it was really much nicer than anything I had seen before. In addition to being just plain nicer, it was also easier to read, for it reproduced some of the logical organization of the code. Yet I cannot find anymore what that library was. I'm pretty sure it was fully compatible with log4j or slf4j.


  • Perl Regex - Condensing groups of find/replace

    - by brydgesk
     I'm using Perl to perform some file cleansing, and am running into some performance issues. One of the major parts of my code involves standardizing name fields. I have several sections that look like this:

         sub substitute_titles {
             my ($inStr) = @_;

             ${$inStr} =~ s/ PHD./ PHD /;
             ${$inStr} =~ s/ P H D / PHD /;
             ${$inStr} =~ s/ PROF./ PROF /;
             ${$inStr} =~ s/ P R O F / PROF /;
             ${$inStr} =~ s/ DR./ DR /;
             ${$inStr} =~ s/ D.R./ DR /;
             ${$inStr} =~ s/ HON./ HON /;
             ${$inStr} =~ s/ H O N / HON /;
             ${$inStr} =~ s/ MR./ MR /;
             ${$inStr} =~ s/ MRS./ MRS /;
             ${$inStr} =~ s/ M R S / MRS /;
             ${$inStr} =~ s/ MS./ MS /;
             ${$inStr} =~ s/ MISS./ MISS /;
         }

     I'm passing by reference to try and get at least a little speed, but I fear that running so many (literally hundreds) of specific string replaces on tens of thousands (likely hundreds of thousands, eventually) of records is going to hurt performance. Is there a better way to implement this kind of logic than what I'm doing currently? Thanks.

     Edit: Quick note - not all the replace functions just remove periods and spaces. There are string deletions, soundex groups, etc.
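     One pattern that often comes up for this kind of job is a single alternation built from a lookup table, so the string is scanned once instead of once per rule. A rough sketch of the idea is below, written in C# purely for illustration (the same shape maps onto a Perl hash plus one s///e substitution), and with an abbreviated title list rather than the question's full rule set:

         // Table-driven title normalization: one compiled regex, one pass.
         using System;
         using System.Collections.Generic;
         using System.Linq;
         using System.Text.RegularExpressions;

         static class TitleNormalizer
         {
             static readonly Dictionary<string, string> Titles =
                 new Dictionary<string, string>
                 {
                     { " PHD.",   " PHD " },
                     { " P H D ", " PHD " },
                     { " MRS.",   " MRS " },
                     { " M R S ", " MRS " },
                     { " MS.",    " MS "  },
                 };

             // Escape each key so "." matches a literal dot (the Perl
             // originals leave it as "any character", probably by accident).
             static readonly Regex TitlePattern = new Regex(
                 string.Join("|", Titles.Keys.Select(Regex.Escape).ToArray()),
                 RegexOptions.Compiled);

             public static string Normalize(string input)
             {
                 // Each match is looked up in the table in a single scan.
                 return TitlePattern.Replace(input, m => Titles[m.Value]);
             }
         }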


  • Side by side madness - running binaries on different computer (with a twist)

    - by sbk
     Here's my configuration:

       • Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
       • Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

     My project has some external dependencies (prebuilt DLLs - either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A, and all is fine there. At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of those:

         Dependent Assembly Microsoft.VC80.CRT could not be found and Last Error was The referenced assembly is not installed on your system.

     plus the slightly more amusing:

         Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully.

     At this point I try my Google-fu, but in vain - virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the very computer they are built on. The next step was to try Dependency Walker, and it baffled me even more: my DLLs built from sources on the same box cannot find MSVCR80.DLL and MSVCP80.DLL. However, the executable seems to be all right in respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot. That's where I am completely out of ideas, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic will also be appreciated.


  • PDO Database Connections Problem

    - by Metropolis
     Hey everyone,

     Over a year ago I created my own database classes which use PDO and handle all preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am grabbing from: MySQL and MS SQL Express. I am retrieving an employee ID from the MySQL server and using it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server, and my program is only making it through about 1200 before crashing with an error like the following:

         Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC)
         Class (PDOException)
         SQLSTATE[08001] SQLDriverConnect: 0 [unixODBC][FreeTDS][SQL Server]Unable to connect to data source

     It seems like the program is not able to connect to the data source, but it runs the exact same query about 30 times before this and has no problem. Also, I have thoroughly checked all of the data coming into the query, and it all looks fine. I believe the issue may be that too many connections are being created. I have tried to close all connections in many different places, but nothing seems to fix the problem. Any debugging help or suggestions would be appreciated!

     Craig Metrolis


  • WPF - Why doesn't Microsoft supply a decent set of most-used controls ?

    - by IUsedToBeAPygmy
     I've been playing with WPF for some months now, and I quite like it. But one of the things I don't get is why MS doesn't put a little more effort into helping developers by supplying basic controls, and I need to get this off my chest :)

     For example, I figure most applications will somewhere need to let you edit some properties - for configuration or whatever. What would be the most used types in a property-grid editor?

       • text
       • numbers (byte, float/double, int, etc.)
       • colors
       • ...etc.

     So why isn't there even something as simple as a control to edit numbers? Like a generic NumericUpDown control that allows you to type in numbers (no text, no pasting invalid input) or spin them up/down according to some given rules (decimal, floating point, min/max value)? Why isn't there a generic color picker, so people get the same user experience in every application? Why isn't there a standard implementation of a SearchTextBox, a breadcrumb control, or all these other standard control types users have gotten accustomed to over the last 10 years? (..but at least they DID have the time to implement a generic splash screen - because everyone knows that greatly increases user productivity....)

     The well-known ideal is always to give people the same user experience across different applications. So even if some of those controls would be easy to make, it would be preferable to have one version across applications. I see people all over the internet trying to do the same stuff over and over again. Okay, so MS started a WPF Toolkit project on Codeplex that tries to implement some controls, but it only did so half-heartedly and is completely dead by now (the last update of the roadmap dates back to Mar 21, 2009). The result of this is that a lot of people starting a WPF project end up spending a lot of time trying to figure out how to create some generic controls, and get really frustrated.

     Wasn't the mantra "Developers, developers, developers!"..? /Rant


  • Change the content type of a pop up window.

    - by Oscar Reyes
     This question brought up a new one: I have an HTML page, and I need it to change the content type when the user presses the "save" button, so that the browser prompts to save the file to disk. I've been doing this on the server side to offer "Excel" versions of the page (which is basically an HTML table):

         <c:if test="${page.asExcelAction}">
             <% response.setContentType("application/vnd.ms-excel"); %>

     What I'm trying to do now is the same, but on the client side with JavaScript - and I can't manage to do it. This is what I've got so far:

         <html>
         <head>
         <script>
         function saveAs(){
             var sMarkup = document.getElementById('content').innerHTML;
             //var oNewDoc = document.open('application/vnd.ms-excel');
             var oNewDoc = document.open('text/html');
             oNewDoc.write( sMarkup );
             oNewDoc.close();
         }
         </script>
         </head>
         <body>
         <div id='content'>
             <table>
                 <tr>
                     <td>Stack</td>
                     <td>Overflow</td>
                 </tr>
             </table>
         </div>
         <input type="button" value="Save as" onClick="saveAs()"/>
         </body>
         </html>


  • MS SQL Server: high resource waits and head blocker

    - by MartinHN
     Hi,

     I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB. It runs on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM.

     When I open the Activity Monitor, I see a wait time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 seconds before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if the server is no longer asleep, and every subsequent request is served instantly, with the speed of light.

     The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index for every primary key column. Any ideas?


  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
     I am working on a tool to optimize LINQ to SQL queries. Basically, it intercepts the LINQ execution pipeline and makes some optimizations, like for example removing a redundant join from a query. Of course, there is an overhead in execution time before the query gets executed in the DBMS, but the query should then be processed faster. I don't want to use a SQL profiler, because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class, and my code looks something like this:

         var sw = new Stopwatch();
         sw.Start();
         const int amount = 100;
         for (var i = 0; i < amount; i++)
         {
             ExecuteNonOptimizedQuery();
         }
         sw.Stop();
         Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
             sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

     Basically, the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query, and then iterates over the results. I did this for both versions of the query: the normal one and the optimized one. I took the idea from this post by Frans Bouma. Is there any other approach, or considerations I should take? Thanks in advance!
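     One refinement worth considering for this kind of micro-benchmark is excluding one-time costs (JIT compilation, the first DataContext, plan caching) with a warm-up pass, and settling the garbage collector before timing. A sketch of that structure, reusing the question's own placeholder method names:

         // Hedged benchmark helper: warm up once, quiet the GC, then time N runs.
         using System;
         using System.Diagnostics;

         static class QueryBenchmark
         {
             public static long MeasureMilliseconds(Action query, int iterations)
             {
                 query();                         // warm-up: JIT, connection, plan cache
                 GC.Collect();
                 GC.WaitForPendingFinalizers();
                 GC.Collect();

                 var sw = Stopwatch.StartNew();
                 for (var i = 0; i < iterations; i++)
                     query();
                 sw.Stop();
                 return sw.ElapsedMilliseconds;
             }
         }

         // Usage: QueryBenchmark.MeasureMilliseconds(ExecuteNonOptimizedQuery, 100)
         //        vs. QueryBenchmark.MeasureMilliseconds(ExecuteOptimizedQuery, 100).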


  • Convert asp.net webforms logic to asp.net MVC

    - by gmcalab
     I had this code in an old ASP.NET WebForms app to take a MemoryStream and write it out as the response, showing a PDF. I am now working with an ASP.NET MVC application and looking to do the same thing - but how should I go about returning the MemoryStream as a PDF using MVC? Here's my ASP.NET WebForms code:

         private void ShowPDF(MemoryStream ms)
         {
             try
             {
                 //get byte array of pdf in memory
                 byte[] fileArray = ms.ToArray();

                 //send file to the user
                 Page.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                 Page.Response.Buffer = true;
                 Response.Clear();
                 Response.ClearContent();
                 Response.ClearHeaders();
                 Response.Charset = string.Empty;
                 Response.ContentType = "application/pdf";
                 Response.AddHeader("content-length", fileArray.Length.ToString());
                 Response.AddHeader("Content-Disposition", "attachment;filename=TID.pdf;");
                 Response.BinaryWrite(fileArray);
                 Response.Flush();
                 Response.Close();
             }
             catch
             {
                 // and boom goes the dynamite...
             }
         }
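     In MVC the idiomatic route is to return an ActionResult and let the framework write the response; Controller.File sets the content type and Content-Disposition header for you. A minimal sketch (BuildPdf is a hypothetical helper standing in for however the stream is produced):

         using System.IO;
         using System.Web.Mvc;

         public class PdfController : Controller
         {
             public ActionResult ShowPdf()
             {
                 using (MemoryStream ms = BuildPdf())   // hypothetical PDF producer
                 {
                     // FileContentResult: bytes + MIME type + download file name.
                     return File(ms.ToArray(), "application/pdf", "TID.pdf");
                 }
             }

             private MemoryStream BuildPdf()
             {
                 // ... whatever currently fills the MemoryStream ...
                 return new MemoryStream();
             }
         }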


  • High Runtime for Dictionary.Add for a large amount of items

    - by aaginor
     Hi folks,

     I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function:

         private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
         {
             Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);

             foreach (NodeConnection con in connections)
             {
                 ...
                 resultSet.Add(nodeIdPair, newEdge);
             }

             return resultSet;
         }

     In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary(); (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (ALTHOUGH I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms per item.

     Any idea how to make the Add operation faster? Thanks in advance, Frank
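     Add on a properly pre-sized dictionary is O(1), so a per-Add cost that grows with the item count often points at the key type: if IdPair's GetHashCode produces heavy collisions, each Add degrades into a scan of an ever-longer bucket chain. IdPair's definition isn't shown in the question, so the fields below are assumptions; the point is the shape of a collision-resistant key:

         // Sketch of a value-type key whose Equals/GetHashCode spread hashes
         // across both fields (field names are illustrative, not the original's).
         using System;

         public struct IdPair : IEquatable<IdPair>
         {
             public readonly int From;
             public readonly int To;

             public IdPair(int from, int to)
             {
                 From = from;
                 To = to;
             }

             public bool Equals(IdPair other)
             {
                 return From == other.From && To == other.To;
             }

             public override bool Equals(object obj)
             {
                 return obj is IdPair && Equals((IdPair)obj);
             }

             public override int GetHashCode()
             {
                 // Mix both fields; hashing only one of them would bucket many
                 // distinct pairs together and slow every Add down.
                 return unchecked(From * 397 ^ To);
             }
         }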


  • ant ftp task "Could not date test remote file"

    - by avok00
     Hi guys!

     I am using the Ant ftp task to deploy my project files to a remote app server. Ant is not able to detect the date of the remote file, and it re-uploads all files every time. When I start Ant in debug mode it says:

         [ftp] checking date for mailer.war
         [ftp] Could not date test remote file: mailer.war assuming out of date.

     The remote server is MS FTP (Windows Vista version). The Ant version is 1.8.2; I use commons-net-2.2 and jakarta-oro-2.0.8 (could not find a newer version). My ant task looks like this:

         <!-- Deploy new and changed files -->
         <target name="deploy" depends="package" description="Deploy new and changed files">
             <ftp server="localhost"
                  userid=""
                  password=""
                  action="send"
                  depends="yes"
                  passive="true"
                  systemTypeKey="WINDOWS"
                  serverTimeZoneConfig="Europe/Sofia"
                  defaultDateFormatConfig="MMM dd yyyy"
                  recentDateFormatConfig="MMM dd HH:mm"
                  binary="true"
                  retriesAllowed="3"
                  verbose="true">
                 <fileset dir="${webapp.artefacts.path}"/>
             </ftp>
         </target>

     I read an article here, Ant: The Definitive Guide, that says I need a version of Jakarta ORO AFTER 2.0.8 to talk to MS FTP servers, but I was not able to find such a version. The Jakarta ORO site (http://jakarta.apache.org/oro/) says the ORO project was retired as of 2010, yet its latest distribution is from 2003! Please, can anyone help me with this? Any solution, or any alternatives to the Ant ftp task? Thanks!


  • Microsoft Training in .NET

    - by JD
     Hey guys,

     A quick question about training. My company is offering to put me through a 5-day course in .NET programming with C#, on the proviso that I work for them for a minimum 12-month period immediately after the training has been completed. If I don't work for them for 12 months, I will be expected to pay them back the cost of the training, minus the number of months I have stayed at the company (e.g. stay 6 months and only pay back half). The course will be $3,300 AUD and will carry no qualifications. I have been programming in C# for about 6 months, and in PHP for about 10 years, so I am getting the hang of .NET slowly but surely.

     Is it worth doing this? Or would I be best advised to learn from reading the MS books from Amazon for $40 and then sitting the 2 exams to get MS certified off my own bat? The company is not willing to pay for my certification as well, as 'they are not training me to get certified' - and hence more employable...

     Thoughts? How do other companies deal with training? And how do they stop people from taking off after becoming certified?


  • Have any software engineers gotten math degrees later in their careers?

    - by vin
     I have a bachelor's in computer science and have worked the last 12 years as a software engineer. I'm bored with doing general development work, so I want to specialize. I'm thinking about getting a master's degree in math so I can build math models and write algorithms to implement them. I'm unsure what type of work I'd do (financial, gaming, graphics, science, research, etc.), but I'm open-minded. I would need to refresh my undergrad math skills (which are old and faded), but I loved algebra and calculus. I've been working with a couple of statisticians, so I've been finding myself more and more interested in statistics. Since I'm a parent supporting a household, I would have to continue working while studying.

     Have any software engineers taken this route? (Specifically, going from a BS in comp sci to an MS in math.) If so, what advice do you have for coursework, financing, and getting a job that combines programming with advanced math? How abundant are these kinds of jobs? I'm not sure where one starts. Also, how do you hop from a BS to an MS in a different subject?


  • How to force a DIV block to extend to the bottom of a page, even if it has no content?

    - by Sir Psycho
     Hi,

     I'm trying to get the content div to stretch all the way to the bottom of the page, but so far it only stretches if there's actual content to display. The reason I want to do this is so that, even if there isn't much content to display, the vertical border still goes all the way down. Here is my code:

         <body>
             <form id="form1">
                 <div id="header">
                     <a title="Home" href="index.html" />
                 </div>
                 <div id="menuwrapper">
                     <div id="menu">
                     </div>
                 </div>
                 <div id="content">
                 </div>
             </form>
         </body>

     and my CSS:

         body {
             font-family: Trebuchet MS, Verdana, MS Sans Serif;
             font-size: 0.9em;
             margin: 0;
             padding: 0;
         }

         div#header {
             width: 100%;
             height: 100px;
         }

         #header a {
             background-position: 100px 30px;
             background: transparent url(site-style-images/sitelogo.jpg) no-repeat fixed 100px 30px;
             height: 80px;
             display: block;
         }

         #header, #menuwrapper {
             background-repeat: repeat;
             background-image: url(site-style-images/darkblue_background_color.jpg);
         }

         #menu #menuwrapper {
             height: 25px;
         }

         div#menuwrapper {
             width: 100%;
         }

         #menu, #content {
             width: 1024px;
             margin: 0 auto;
         }

         div#menu {
             height: 25px;
             background-color: #50657a;
         }

     Thanks for taking a looksi.


  • AJAX Partial Rendering issues for the default page in IIS 7 when using custom http module

    - by WiseGuyEh
     The problem

     When I try to make an AJAX partial update request (using the UpdatePanel control) from the default page of an IIS7 web site, it fails: instead of returning the HTML to be updated, it returns the entire page, which then causes the MS AJAX JavaScript to throw a parsing shit-fit.

     The suspected cause

     I have narrowed the cause down to the combination of two things: making an AJAX request to the default page while I have a certain custom HTTP module registered. A partial rendering request to http://localhost will fail, but a partial rendering request to http://localhost/default.aspx will work fine. Also, if I remove the following line in my custom HttpModule, the AJAX partial render will work correctly. Weird, huh?

         _application.PreRequestHandlerExecute += OnPreRequestHandlerExecute;

     Another weird thing... if I look at trace.axd, I can see that when a partial rendering request fails, two POST requests are logged for the one partial rendering request: one where the default.aspx page executes successfully (trace information such as Page_Load is logged) but no content is produced, and a second that doesn't seem to actually execute (no trace information is logged) but produces content (HTTP_CONTENT_LENGTH is greater than 0).

     Please help! If anyone with a good knowledge of HTTP modules or the MS AJAX HTTP module could explain why this is occurring, I would be very grateful. As it is, the obvious workaround is to just redirect to default.aspx if the request url is "/", but I would really like to understand why this is occurring.
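     While the root cause is being chased down, a less drastic workaround than redirecting is to make the module step aside for partial-rendering posts. The sketch below is an assumption-laden illustration, not a diagnosis: it keys off the X-MicrosoftAjax request header that the MS AJAX client sends with async postbacks, and the module/handler names merely echo the question's.

         using System;
         using System.Web;

         public class MyModule : IHttpModule
         {
             public void Init(HttpApplication application)
             {
                 application.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
             }

             private void OnPreRequestHandlerExecute(object sender, EventArgs e)
             {
                 var app = (HttpApplication)sender;

                 // Skip custom work for UpdatePanel async postbacks so the
                 // partial-rendering response isn't disturbed.
                 string ajaxHeader = app.Context.Request.Headers["X-MicrosoftAjax"];
                 if (!string.IsNullOrEmpty(ajaxHeader))
                     return;

                 // ... the module's normal per-request logic goes here ...
             }

             public void Dispose() { }
         }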


  • C# Attribute XmlIgnore and XamlWriter class - XmlIgnore not working

    - by Horst Walter
     I have a class containing a property Brush MyBrush marked as [XmlIgnore]. Nevertheless, it is serialized in the stream, causing trouble when trying to read via XamlReader. I did some tests; e.g., when changing the visibility of the property (to internal), it is gone from the stream. Unfortunately, I cannot do this in my particular scenario. Did anybody have the same issue? Do you see any way to work around this? Remark: C# 4.0, as far as I can tell.

     This is a method from my unit test where I test the XAML serialization:

         // buffer to a StringBuilder
         StringBuilder sb = new StringBuilder();
         XmlWriter writer = XmlWriter.Create(sb, settings);
         XamlDesignerSerializationManager manager =
             new XamlDesignerSerializationManager(writer) { XamlWriterMode = XamlWriterMode.Expression };
         XamlWriter.Save(testObject, manager);
         xml = sb.ToString();
         Assert.IsTrue(!String.IsNullOrEmpty(xml) && !String.IsNullOrEmpty(xml),
             "Xaml Serialization failed for " + testObject.GetType() + " no xml string available");
         xml = sb.ToString();
         MemoryStream ms = xml.StringToStream();
         object root = XamlReader.Load(ms);
         Assert.IsTrue(root != null, "After reading from MemoryStream no result for Xaml Serialization");

     In one of my classes I use the property Brush. With the code above, this unit test fails because a Brush object that is not serializable is the value. When I remove the setter (as below), the unit test passes. Using the XmlWriter (basically the same test as above), it works. In the StringBuilder sb I can see that the property Brush is serialized when the setter is there and not when it is removed (most likely another check ignoring the property because it has no setter). Other properties with [XmlIgnore] are ignored as intended.

         [XmlIgnore]
         public Brush MyBrush
         {
             get { ..... }
             // removed because of problem with Serialization
             // set { ... }
         }
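     For what it's worth, [XmlIgnore] is an XmlSerializer attribute, and XamlWriter.Save follows its own serialization rules, so it would not be expected to honor it. One avenue that may be worth testing (a sketch, not a verified fix for every XamlWriterMode) is marking the property hidden for designer serialization:

         using System.ComponentModel;
         using System.Windows.Media;
         using System.Xml.Serialization;

         public class MyElement
         {
             // XmlIgnore covers XmlSerializer; the second attribute is the
             // one XamlWriter's serialization rules take into account.
             [XmlIgnore]
             [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
             public Brush MyBrush { get; set; }
         }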


  • C# DynamicMethod prelink

    - by soywiz
     I'm the author of a PSP emulator made in C#. I'm generating lots of "DynamicMethod" instances using ILGenerator: I'm converting assembly code into an AST and then generating IL code and building the DynamicMethod. I'm doing this in another thread, so I can generate new methods while the program is executing others, so it runs smoothly.

     My problem is that native code generation is lazy, so the machine code is generated when the function is first called, not when the IL is generated. That means it is generated in the program-executing thread; native code generation is pretty slow, as is the asm -> AST -> IL step. I have tried the Marshal.Prelink method, which is supposed to generate the machine code before the function is executed:

         Marshal.Prelink(MethodInfo);

     It works on Mono, but it doesn't work on MS .NET. Is there a way of prelinking a DynamicMethod on MS .NET? I thought of adding a boolean parameter to the function so that, if set, it exits the function immediately without executing any code. I could "prelink" that way, but I think that's a nasty solution I want to avoid. Any idea?
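     Absent a real prelink API, the early-exit trick the question describes can at least be wrapped up so the rest of the emulator never sees it. A sketch under those assumptions - the delegate shape and EmulatorState type are placeholders, not the emulator's real signature:

         using System.Reflection.Emit;

         class EmulatorState { /* registers, memory, ... */ }

         delegate void CpuBlock(EmulatorState state, bool warmUpOnly);

         static class Prelinker
         {
             public static CpuBlock Compile(DynamicMethod dm)
             {
                 var block = (CpuBlock)dm.CreateDelegate(typeof(CpuBlock));

                 // The first call forces the JIT to emit native code. The
                 // generated IL must branch straight to ret when the flag is
                 // true, so no emulated code actually runs here.
                 block(null, true);
                 return block;
             }
         }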


  • How do I construct a more complex single LINQ to XML query?

    - by Cyberherbalist
     I'm a LINQ newbie, so the following might turn out to be very simple and obvious once it's answered, but I have to admit that the question is kicking my arse. Given this XML:

         <measuresystems>
             <measuresystem name="SI" attitude="proud">
                 <dimension name="mass" dim="M" degree="1">
                     <unit name="kilogram" symbol="kg">
                         <factor name="hundredweight" foreignsystem="US" value="45.359237" />
                         <factor name="hundredweight" foreignsystem="Imperial" value="50.80234544" />
                     </unit>
                 </dimension>
             </measuresystem>
         </measuresystems>

     I can query for the value of the conversion factor between kilogram and US hundredweight using the following LINQ to XML, but surely there is a way to condense the four successive queries into a single complex query?

         XElement mss = XElement.Load(fileName);

         IEnumerable<XElement> ms = from el in mss.Elements("measuresystem")
                                    where (string)el.Attribute("name") == "SI"
                                    select el;

         IEnumerable<XElement> dim = from e2 in ms.Elements("dimension")
                                     where (string)e2.Attribute("name") == "mass"
                                     select e2;

         IEnumerable<XElement> unit = from e3 in dim.Elements("unit")
                                      where (string)e3.Attribute("name") == "kilogram"
                                      select e3;

         IEnumerable<XElement> factor = from e4 in unit.Elements("factor")
                                        where (string)e4.Attribute("name") == "pound"
                                           && (string)e4.Attribute("foreignsystem") == "US"
                                        select e4;

         foreach (XElement ex in factor)
         {
             Console.WriteLine((string)ex.Attribute("value"));
         }
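     One way to collapse the four steps is to chain the element/filter pairs inside a single query expression with multiple from clauses. A sketch against the XML above - note one quirk carried over from the question: the sample factors are named "hundredweight", while the last query filters on "pound", and the names would need to agree for results to appear:

         var values = from el in mss.Elements("measuresystem")
                      where (string)el.Attribute("name") == "SI"
                      from dim in el.Elements("dimension")
                      where (string)dim.Attribute("name") == "mass"
                      from unit in dim.Elements("unit")
                      where (string)unit.Attribute("name") == "kilogram"
                      from factor in unit.Elements("factor")
                      where (string)factor.Attribute("name") == "hundredweight"
                         && (string)factor.Attribute("foreignsystem") == "US"
                      select (string)factor.Attribute("value");

         foreach (string v in values)
         {
             Console.WriteLine(v);
         }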


  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
     I'm creating a continuous integration MSBuild script which copies a database on an on-premise SQL Server 2012 instance to SQL Azure. Easy, right?

     Methods

     After a fair bit of research I've come across the following methods:

       • Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.
       • Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

     Processes

     Using either method, the process could be:

       1. Export from on-premise SQL Server 2012 to a local BACPAC
       2. Upload the BACPAC to blob storage
       3. Import the BACPAC into SQL Azure via the Hosted DAC

     Or:

       1. Export from on-premise SQL Server 2012 to a local BACPAC
       2. Import the BACPAC into SQL Azure via the Client DAC

     Question

     All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see: is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on a local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??
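     If a small C# helper invoked from <Exec> is acceptable, the DacFx managed API can drive the export/import pair directly, which sidesteps the PowerShell 3 wrangling. A hedged sketch against the Microsoft.SqlServer.Dac (DAC 3.0) API - server names, credentials, and paths are placeholders:

         using Microsoft.SqlServer.Dac;

         class BacpacCopy
         {
             static void Main()
             {
                 const string bacpacPath = @"C:\build\shop.bacpac";

                 // 1. Export from the on-premise SQL Server 2012 instance.
                 var source = new DacServices(
                     "Server=.;Database=Shop;Integrated Security=true");
                 source.ExportBacpac(bacpacPath, "Shop");

                 // 2. Import into SQL Azure via the client-side DAC
                 //    (no blob-storage hop needed on this route).
                 var target = new DacServices(
                     "Server=tcp:myserver.database.windows.net;User ID=me;Password=...");
                 using (var package = BacPackage.Load(bacpacPath))
                 {
                     target.ImportBacpac(package, "Shop");
                 }
             }
         }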


  • C# async callback on disposed form

    - by Rodney Burton
     Quick question: one of the forms in my WinForms app (C#) makes an async call to a WCF service to get some data. If the form happens to close before the callback fires, it crashes with an error about accessing a disposed object. What's the correct way to check for/handle this situation? The error happens on the Invoke call to the method that updates my form, but I can't drill down to the inner exception because it says the code has been optimized. The code:

         public void RequestUserPhoto(int userID)
         {
             WCF.Service.BeginGetUserPhoto(userID, new AsyncCallback(GetUserPhotoCB), userID);
         }

         public void GetUserPhotoCB(IAsyncResult result)
         {
             var photo = WCF.Service.EndGetUserPhoto(result);
             int userID = (int)result.AsyncState;
             UpdateUserPhoto(userID, photo);
         }

         public delegate void UpdateUserPhotoDelegate(int userID, Binary photo);

         public void UpdateUserPhoto(int userID, Binary photo)
         {
             if (InvokeRequired)
             {
                 var d = new UpdateUserPhotoDelegate(UpdateUserPhoto);
                 Invoke(d, new object[] { userID, photo });
             }
             else
             {
                 if (photo != null)
                 {
                     var ms = new MemoryStream(photo.ToArray());
                     var bmp = new System.Drawing.Bitmap(ms);
                     if (userID == theForm.AuthUserID)
                     {
                         pbMyPhoto.BackgroundImage = bmp;
                     }
                     else
                     {
                         pbPhoto.BackgroundImage = bmp;
                     }
                 }
             }
         }
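     A common guard for this (a sketch of one approach, not the only one) is to check the form's state before marshalling, and to accept that the form can still be disposed between the check and the Invoke, catching that race as a last resort:

         public void UpdateUserPhoto(int userID, Binary photo)
         {
             // Form already gone (or handle not yet created): drop the update.
             if (IsDisposed || !IsHandleCreated)
                 return;

             if (InvokeRequired)
             {
                 try
                 {
                     var d = new UpdateUserPhotoDelegate(UpdateUserPhoto);
                     Invoke(d, new object[] { userID, photo });
                 }
                 catch (ObjectDisposedException)
                 {
                     // Closed between the check and the marshal - ignore.
                 }
                 return;
             }

             // ... original UI update logic unchanged ...
         }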

