Search Results

Search found 71506 results on 2861 pages for 'csharp source net'.


  • Why are my installed fonts not available in .NET?

    - by Dan Herbert
    I'm trying to render some images with text using a font I just added to my machine and no matter what I do, I can't seem to get the font to become accessible in .NET. I tried using PrivateFontCollection.AddFontFile(filename) and PrivateFontCollection.AddMemoryFont(...) to load the font into memory. Whenever I do this, the method throws a "File Not Found" exception, which is unusual because I get this exception when loading the font from memory, where there should be no files to be "not found". Initially, I thought it may be because the font I was trying to use was in the .pfm format, so I converted the font to .otf and had the same problem. Then I tried installing the .otf font to my Windows Fonts folder so I could pull it from FontFamily.Families. Once I installed the font, it became available in Microsoft Word & Notepad2. However, when I try to load it from FontFamily.Families, it is not included in the array. I thought rebooting my machine would fix the issue but obviously there is something more complicated involved here. Is there something basic I just might have missed when installing the font in my machine (Windows Vista), or is there another way to programmatically load a font that I should be using instead? Is .otf not supported in .NET?
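
    For reference, a minimal C# sketch of the private-font route (the file paths and text are placeholders, and note a known GDI+ gotcha: the PrivateFontCollection must stay alive for as long as any Font created from it is in use):

        using System.Drawing;
        using System.Drawing.Text;

        // Load the font file into a private collection instead of installing it machine-wide.
        var fonts = new PrivateFontCollection();
        fonts.AddFontFile(@"C:\fonts\MyFont.otf"); // placeholder path

        // Render text with the first family in the collection.
        using (var font = new Font(fonts.Families[0], 16f))
        using (var bmp = new Bitmap(300, 60))
        using (var g = Graphics.FromImage(bmp))
        {
            g.DrawString("Sample text", font, Brushes.Black, 0f, 0f);
            bmp.Save(@"C:\temp\sample.png"); // placeholder path
        }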

    Read the article

  • Why do ASP.NET timers/UpdatePanels leak memory, and can it be fixed or worked around?

    - by KallDrexx
    I have built a suite of internal websites for our company to manage some of our processes. I have been noticing that these pages have massive memory leaks that cause them to use well over 150 MB of memory, which is ridiculous for a webpage consisting of a single form and a GridView displaying 7-10 rows of data at a time, sometimes with the data not changing for a whole day. This data does need to be refreshed on a semi-regular basis so that we always see the latest results and can act on them. After some testing it appears that the memory leak is extremely easy to reproduce, and very noticeable. I created a page with the following ASP.NET markup:

        <body>
            <form id="form1" runat="server">
                <div>
                    <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
                    <asp:Timer ID="timer1" runat="server" Interval="1000" />
                    <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                        <ContentTemplate>
                        </ContentTemplate>
                    </asp:UpdatePanel>
                </div>
            </form>
        </body>

    There is absolutely no code-behind for this; this is the entirety of the page. Running this site in Chrome shows the memory usage shoot up to 25 MB in the span of 20-30 seconds. Leaving it running for a few minutes takes the memory up to 70 MB and beyond. Am I using timers and update panels wrong, or is this a pure ASP.NET issue with no workaround?

    Read the article

  • .NET WF4: Should it be in the middle of everything?

    - by stimpy77
    I am aware that WF4 (Windows Workflow 4.0, part of .NET 4.0) is a significant rework and redesign of WF3, in which much of what made WF3 a poor technology choice has been cleaned up. For example, as far as I can tell, WF4 activities are more or less testable with [TestMethod] and mocking. This, among other things like improved performance, has grabbed my attention about the technology again, whereas I had previously pooh-poohed WF3. I'm working on a new architecture for essentially an n-tier collaborative application (not enterprise-class, just a smallish project with potential to grow significantly) where I'm already trying to discipline myself to use IoC and, to some extent, TDD, and I'm wondering, in general terms, whether it is wiser to just hand-code the workflow logic or whether I should delve into learning and integrating WF4 so that WF becomes literally the controller of the entire application, i.e. the practical C in "MVC" (not ASP.NET MVC, but the pattern). So should workflow activities in WF4 be the primary controller for a highly expandable/growable web-based collaborative application? Or am I asking entirely the wrong question? This is a vague question, I'm sure, so abstract answers are as welcome as specific ones.

    Read the article

  • Spring.net is not injecting chained base class properties!

    - by JohnIdol
    I am successfully injecting base class properties with Spring.NET using a class that inherits from a base abstract class. Let's say Class MyClass : MyBase, and I am successfully setting a property like this:

        <object id="myInstantiableClass" type="myAssembly.MyClass, myAssembly" abstract="true">
            <property name="MyBaseClassProperty" ref="anotherObjRef"></property>
        </object>

    where MyBaseClassProperty is a property on the base class. Now I have another abstract class between the old base class and the instantiable class, and I am trying to set properties on both abstract classes. So MyClass : MyNewBaseClass, and MyNewBaseClass : MyBaseClass. I have an additional property on the new base class (MyNewBaseClassProperty) and I am trying to inject it like this:

        <object id="myInstantiableClass" type="myAssembly.MyClass, myAssembly" abstract="true">
            <property name="MyBaseClassProperty" ref="anotherObjRef"></property>
            <property name="MyNewBaseClassProperty" ref="someOtherObjRef"></property>
        </object>

    The property on the old base class is being injected but the one on the new one is not, and I am not getting an error or anything (so I am pretty sure my config is good); that property is just null! Any help appreciated! P.S. I am on ASP.NET (not MVC) but I don't think it's related.

    Read the article

  • When should I be cautious about using data binding in .NET?

    - by Ben McCormack
    I just started working on a small team of .NET programmers about a month ago and recently got into a discussion with our team lead regarding why we don't use data binding at all in our code. Every time we work with a data grid, we iterate through a data table and populate the grid row by row; the code usually looks something like this:

        Dim dt As DataTable = FuncLib.GetData("spGetTheData ...")
        Dim i As Integer
        For i = 0 To dt.Rows.Count - 1 '(not sure why we do not use a For Each here)'
            gridRow = grid.Rows.Add()
            gridRow(constantProductID).Value = dt.Rows(i)("ProductID")
            gridRow(constantProductDesc).Value = dt.Rows(i)("ProductDescription")
        Next
        '(I am probably missing something in the code, but that is basically it)'

    Our team lead was saying that he got burned using data binding when working with Sheridan grid controls, VB6, and ADO recordsets back in the nineties. He's not sure what the exact problem was, but he remembers that binding didn't work as expected and caused him some major problems. Since then, they haven't trusted data binding and load the data for all their controls by hand. The reason the conversation even came up was that I found data binding to be very simple and really liked separating the data presentation (in this case, the data grid) from the in-memory data source (in this case, the data table). "Loading" the data row by row into the grid seemed to break this distinction. I also observed that with the advent of XAML in WPF and Silverlight, data binding seems like a must-have in order to cleanly wire up a designer's XAML code with your data. When should I be cautious of using data binding in .NET?
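
    For contrast, the binding approach the questioner found simple: a minimal WinForms C# sketch (the grid name is a placeholder and FuncLib is the hypothetical helper from the question), where one assignment replaces the whole copy loop:

        using System.Data;
        using System.Windows.Forms;

        DataTable dt = FuncLib.GetData("spGetTheData ...");
        dataGridView1.AutoGenerateColumns = true; // columns come from the table schema
        dataGridView1.DataSource = dt;            // the grid renders rows directly from the table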

    Read the article

  • How to target multiple versions of .NET Framework from MSBuild?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software. I notice that a lot of people use NAnt (e.g. Ninject builds many targets with NAnt) but I'm unsure if this is necessary or if they are just more familiar with it. I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.
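
    For anyone sketching this out, one common arrangement (TargetFrameworkVersion and OutputPath are standard MSBuild properties; the file names and the rest are illustrative) is to give the csproj a default target and override it from the command line, since /p: properties take precedence over project properties:

        <!-- In the .csproj -->
        <PropertyGroup>
          <TargetFrameworkVersion Condition=" '$(TargetFrameworkVersion)' == '' ">v2.0</TargetFrameworkVersion>
          <OutputPath>bin\$(TargetFrameworkVersion)\</OutputPath>
        </PropertyGroup>

    A batch file can then produce all the builds; msbuild.exe ships with the .NET Framework itself, so Visual Studio is not required on the build machine (the v4.0 build needs the MSBuild from the v4.0 Framework directory):

        :: build.bat
        msbuild MyProject.csproj /t:Rebuild /p:TargetFrameworkVersion=v2.0
        msbuild MyProject.csproj /t:Rebuild /p:TargetFrameworkVersion=v3.5
        msbuild MyProject.csproj /t:Rebuild /p:TargetFrameworkVersion=v4.0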

    Read the article

  • Where to put data management rules for complex data validation in ASP.NET MVC?

    - by TheRHCP
    Hello, I am currently working on an ASP.NET MVC2 project. This is the first time I am working on a real MVC web application. The ASP.NET MVC website really helped me get started fast, but my knowledge of data model validation still has some gaps. My problem is that I do not really know where to put the validation logic for a populated model when complex rules are involved. For example, validating a string field with a regex is easy: I know I just have to decorate the field with a specific attribute, so those rules live in the model. But if I have multiple fields that need to be validated against each other, for example multiple DateTime fields that must satisfy a specific ordering rule, where should I validate them? I know that I could create my own validation attributes, but sometimes validation follows a path too complex to express with attributes. This first question also leads me to a related question: is it right to validate a model in the controller? For the moment that is the only way I have found for complex validation, but it feels a bit dirty; it does not really fit the controller's role and makes the controller much harder to test (multiple code paths). Thanks.
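
    To illustrate the attribute route for cross-field rules, here is a minimal sketch using the DataAnnotations mechanism (BookingModel and its properties are invented; how MVC2 surfaces class-level validation errors is its own topic, this only shows the mechanics):

        using System;
        using System.ComponentModel.DataAnnotations;

        // A class-level attribute receives the whole model, so it can compare fields.
        [AttributeUsage(AttributeTargets.Class)]
        public class DatesInOrderAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                var model = value as BookingModel;
                if (model == null) return false;
                return model.StartDate <= model.EndDate;
            }
        }

        [DatesInOrder(ErrorMessage = "The start date must not be later than the end date.")]
        public class BookingModel
        {
            public DateTime StartDate { get; set; }
            public DateTime EndDate { get; set; }
        }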

    Read the article

  • Please help debug this ASP.NET [VB] code: trying to write to a text file from a SQL Server DB

    - by NJTechGuy
    I am a PHP programmer. I have no .NET coding experience (last seen it 4 years ago). Not interested in the code-behind model since this is a quick temporary hack. What I am trying to do is generate an output.txt file whenever the user submits new data, so an existing output.txt should be replaced with the new one. I want to write data in this format:

        123|Java Programmer|2010-01-01|2010-02-03
        124|VB Programmer|2010-01-01|2010-02-03
        125|.Net Programmer|2010-01-01|2010-02-03

    I don't know VB, so I'm not sure about string manipulations. Hope a kind soul can help me with this. I will be grateful to you. Thank you :)

        <%@ Import Namespace="System.IO" %>
        <%@ Import Namespace="System.Data" %>
        <%@ Import Namespace="System.Data.SqlClient" %>

        <script language="vb" runat="server">
        Sub Page_Load(sender As Object, e As EventArgs)
            Dim sqlConn As New SqlConnection("Data Source=winsqlus04.1and1.com;Initial Catalog=db28765269;User Id=dbo2765469;Password=ByhgstfH;")
            Dim myCommand As SqlCommand
            Dim dr As SqlDataReader
            Dim FILENAME As String = Server.MapPath("Output4.txt")
            Dim objStreamWriter As StreamWriter
            ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME)
            objStreamWriter = File.AppendText(FILENAME)
            Try
                sqlConn.Open() 'opening the connection
                myCommand = New SqlCommand("SELECT id, title, CONVERT(varchar(10), expirydate, 120) AS [expirydate], CONVERT(varchar(10), creationdate, 120) AS [createdate] from tblContact where flag = 0 AND ACTIVE = 1", sqlConn)
                'executing the command and assigning it to connection
                dr = myCommand.ExecuteReader()
                While dr.Read()
                    objStreamWriter.WriteLine("JobID: " & dr(0).ToString())
                    objStreamWriter.WriteLine("JobID: " & dr(2).ToString())
                    objStreamWriter.WriteLine("JobID: " & dr(3).ToString())
                End While
                dr.Close()
                sqlConn.Close()
            Catch x As Exception
            End Try
            objStreamWriter.Close()

            Dim objStreamReader As StreamReader
            objStreamReader = File.OpenText(FILENAME)
            Dim contents As String = objStreamReader.ReadToEnd()
            lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>")
            objStreamReader.Close()
        End Sub
        </script>

        <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />
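
    For what the question describes, a hedged sketch of the two spots that likely need to change (the order of the two date columns in the output is a guess from the sample): File.CreateText overwrites the file instead of appending to it, and one pipe-delimited WriteLine replaces the three "JobID:" lines:

        objStreamWriter = File.CreateText(FILENAME) ' truncates any existing Output4.txt

        While dr.Read()
            objStreamWriter.WriteLine(dr(0).ToString() & "|" & dr(1).ToString() & _
                "|" & dr(3).ToString() & "|" & dr(2).ToString())
        End While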

    Read the article

  • ASP.NET MVC: How to create a usable UrlHelper instance?

    - by Marek
    I am using Quartz.NET to schedule regular events within an ASP.NET MVC application. The scheduled job should call a service-layer script that requires a UrlHelper instance (for creating URLs based on correct routes, via urlHelper.Action(...), contained in emails that will be sent by the service). I do not want to hardcode the links into the emails; they should be resolved using the UrlHelper. The job:

        public class EvaluateRequestsJob : Quartz.IJob
        {
            public void Execute(JobExecutionContext context)
            {
                // where to get a usable urlHelper instance?
                ServiceFactory.GetRequestService(urlHelper).RunEvaluation();
            }
        }

    Please note that this is not run within the MVC pipeline. There is no current request being served; the code is run by the Quartz scheduler at defined times. How do I get a UrlHelper instance usable in the indicated place? If it is not possible to construct a UrlHelper, the other option I see is to make the job "self-call" a controller action by doing an HTTP request; while executing the action I will of course have a UrlHelper instance available, but this seems a little bit hacky to me.
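
    One approach sometimes used outside the pipeline is to fabricate a request context by hand; a hedged sketch (the host name would have to come from configuration, since there is no real request, and the action/controller names are invented):

        using System.IO;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;

        // Build a fake HttpContext around a dummy request to the site root.
        var httpContext = new HttpContextWrapper(new HttpContext(
            new HttpRequest("", "http://somesite.com/", ""),
            new HttpResponse(new StringWriter())));

        // UrlHelper only needs a RequestContext plus the route table.
        var urlHelper = new UrlHelper(
            new RequestContext(httpContext, new RouteData()),
            RouteTable.Routes);

        string link = urlHelper.Action("Evaluate", "Requests"); // relative URL from the routes

    Since the generated URL comes back relative, the email code would still need to prepend the configured host name.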

    Read the article

  • ADO.NET Multiple simultaneous reads from an open database.

    - by Deverill
    Answer not needed - my logic was wrong and this question is invalid. Charles helped me see where I went off-tracks. Thanks I have a utility that moves data from one source to another. In the process of writing the record I check to see if it exists and do an update/insert as necessary. The difficulty I have is that as I'm writing the main record info there is a 2nd table for "custom data" that I have to check to see if it exists and do an update/insert for that as well. Example: I may be loading a pencil sharpener that may or may not exist. While I'm writing it into destination it has characteristics such as style, color, etc. and each of them may or may not exist. As written I seem to need to have 2 DataReaders open, one for the sharpener and one to check for and update color. I am new to ADO.NET, but not to programming and it's more complicated than I listed but for sanity's sake I didn't put all the details. My question is: What am I missing? You can't have 2 readers open at the same time on a connection, yet I can't close the first if I'm stepping through all products. It seems inefficient to have 2 connections, readers, etc. for this. Is there a feature of ADO.NET DBs that I'm missing? How would you do it? Thanks!
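
    For the record on the "two readers, one connection" point: SQL Server 2005 and later can do exactly that when Multiple Active Result Sets (MARS) is enabled in the connection string. A minimal C# sketch (table and column names are invented to match the example):

        using System.Data.SqlClient;

        // MultipleActiveResultSets is the key setting.
        var connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;"
                    + "MultipleActiveResultSets=True";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            var outer = new SqlCommand("SELECT Id FROM Products", conn);
            using (var products = outer.ExecuteReader())
            {
                while (products.Read())
                {
                    // With MARS, opening a second reader on the same connection is legal.
                    var lookup = new SqlCommand(
                        "SELECT Color FROM ProductColors WHERE ProductId = @id", conn);
                    lookup.Parameters.AddWithValue("@id", products.GetInt32(0));
                    using (var colors = lookup.ExecuteReader())
                    {
                        while (colors.Read()) { /* update/insert decision here */ }
                    }
                }
            }
        }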

    Read the article

  • ASP.NET 2.0 integrated sites: how to log out the second site programmatically?

    - by NBrowne
    Hi, I am working with an ASP.NET 2.0 site (call it site 1) which has an iframe that loads up another site (site 2), also an ASP.NET site developed by our team. When you log onto site 1, site 2 is also logged in behind the scenes, so that when you click the iframe tab it displays site 2 with the user logged in (to prevent the user from having to log in twice). The problem I have is that when a user logs out of site 1, we call some cleanup methods to perform FormsAuthentication.SignOut and clean session variables etc., but at the moment no cleanup is performed for the user on site 2. So the issue is that if the user then opens site 2 directly in a browser, it opens with the user still logged in, which is undesired. Can anyone give me some guidance as to the best approach for this? One possible approach I thought of was that on click of the logout button I could call a custom page on site 2 which would do the logout. Code below:

        HttpWebRequest request;
        request = ((HttpWebRequest)(WebRequest.Create("www.mywebsite.com/Site2Logout.aspx")));
        request.Method = "POST";

        HttpCookie cookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
        Cookie authenticationCookie = new Cookie(
            FormsAuthentication.FormsCookieName,
            cookie.Value,
            cookie.Path,
            HttpContext.Current.Request.Url.Authority);

        request.CookieContainer = new CookieContainer();
        request.CookieContainer.Add(authenticationCookie);
        request.GetResponse();

    The problem I am having with this code is that when I run it and debug on site 2 and check whether the user is authenticated, they are not, which I don't understand, because if I open a browser and browse to site 2 I am still authenticated. Any ideas or different directions to take? Please let me know if you need any more info or if something I have said doesn't make sense. Thanks

    Read the article

  • How do I ensure that requests to my ASP.NET MVC action come from my C# desktop application?

    - by Mathias Lykkegaard Lorenzen
    I've seen questions that are almost identical to this one, except for minor but important differences that I would like to get detailed. Let's say that I have a controller and an action method in MVC which therefore accepts requests on the following URL: http://example.com/api/myapimethod?data=some-data-here. This URL is then being called regularly by 1000 clients or more spread out among the public. The reason for this is crowdsourcing: the clients around the globe help feed a global cache on my server, which makes it faster for the rest of the clients to fetch the data. Now, if I'm sneaky (and I am), I can go into Fiddler, Ethereal, Wireshark or any other packet-sniffing tool and figure out which requests the program is making. By figuring that out, I can also replicate them, and fill the service with falsified, corrupted data. What is the best approach to ensuring that the data received in my ASP.NET MVC action method is actually from the desktop client application, and not some falsified data that the user invented? Since it is all based on crowdsourcing, would it be a good idea for my users to be able to "vote" on whether some data is falsified, and then let an automatic cleanup commence if there are enough votes? I do not have access to a tool like SmartAssembly, so unfortunately my .NET program is fully decompilable. I realize this might be impossible to accomplish in a tamper-proof manner, but I would like to know where my best chances are.

    Read the article

  • Matlab and .NET Interaction

    - by adyaron
    Hi all! I'm having an issue interacting between Matlab and .NET. I've managed to call .NET methods from Matlab code and vice versa. However, if I call a .NET method that, in turn, instantiates a Matlab object, it crashes (with a type initialization exception). Think about this scenario: a .NET assembly is interacting with a Matlab DLL that was deployed for .NET (not native) by Matlab's deploytool. Now, when I load the above .NET DLL in a Matlab program, everything is OK until I run a method that utilizes the other Matlab DLL. Only then does everything crash. The exact message is:

        Warning: Cannot initialize MATLAB Compiler-generated software component in MATLAB.
        MATLAB Compiler-generated software components cannot be used from within MATLAB.

    Please don't suggest abandoning the Matlab -> .NET -> Matlab architecture; it's not an option. Thank you very much (I promise to accept the answer that solves the problem :-)), Yaron.

    Read the article

  • VS2008 javascript debugger IE8 "there is no source code available for the current location"

    - by Jeff Keslinke
    I have almost the same problem as this unanswered question. The only difference is I'm using VS2008, but I'm in an MVC project calling this JavaScript function:

        function CompanyChange(compCtrl) {
            alert(compCtrl.value);
            debugger;
            var test;
            for (var i = 0; i < document.all.length; i++) {
                test = document.all[i];
            }
        }

    I hit the alert, then I get the message "there is no source code available for the current location," at which point the page becomes unresponsive and I have to manually stop the debugger just to shut it down. I've logged into another machine and run this exact code and it works fine; I hit the debugger statement and can step through. I've checked to make sure all settings in VS > Tools > Options > Debugging are identical, as are those in IE > Options > Advanced. Both machines are Windows 7 Enterprise edition 32-bit, VS2008, IE8. I've also tried attaching to the process manually in VS, and using the 'Developer Tools' in IE, which didn't work (it said a process was already attached). I was hoping someone may have had this problem and found a work-around, because I've already done a lot of searching and tried all the options I've read about. Anyone else run into this? Thank you, Jeff

    Read the article

  • Confused with an ASP.NET/WCF WSDL Parsing Error

    - by Vaccano
    I have a WCF web service that my ASP.NET app uses. It has been working fine for quite some time. I just added in a Dev Express grid (and the Dev Express DLLs) and a new page that uses them, and now I am getting parsing errors on the WSDL. The weird part is that it works fine on my machine but fails on the web server machine. (Both are connecting to the same web service's WSDL.) Here is the error message I am getting:

        Server Error in '/MyWebAppWebDev' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Reference.svcmap: Failed to generate code for the service reference 'MyWebAppService'.

        Cannot import wsdl:portType
        Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
        Error: Referenced type 'WebClientApp.MyWebAppService.ReferenceUpdatesDataContract, WebClientApp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' with data contract name 'ReferenceUpdatesDataContract' in namespace 'http://schemas.datacontract.org/2004/07/MyWebAppServiceLibrary.DataContracts' cannot be used since it does not match imported DataContract. Need to exclude this type from referenced types.
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='IMyWebAppReferenceDataServiceLib']

        Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='IMyWebAppReferenceDataServiceLib']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='MyWebAppServicesDefaultEndpoint']

        Cannot import wsdl:port
        Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
        XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='MyWebAppServicesDefaultEndpoint']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:service[@name='MyWebAppReferenceDataServiceLib']/wsdl:port[@name='MyWebAppServicesDefaultEndpoint']

        Source Error: [No relevant source lines]
        Source File: /MyWebAppWebDev/App_WebReferences/MyWebAppService/ Line: 1

    I am completely stumped on this. I have checked my web.config endpoint address and it is spot on (and notably is not in the error message above). Any ideas would be welcomed.

    Read the article

  • Dispatcher.Invoke not working in .NET 3.0 SP1

    - by Kapil
    I am developing a WPF Windows application and am running into trouble when running the app on .NET 3.0. Every time I try to access the System.Windows.Threading.Dispatcher.Invoke() method, I get a method-not-found error. Basically, I spawn a new thread from the main thread and try to change some UI properties (basically update a progress bar) from the new thread using the following code:

        updateStatusDelegate usd = new updateStatusDelegate(progressBar.SetValue);
        Dispatcher.Invoke(usd,
            System.Windows.Threading.DispatcherPriority.Background,
            new object[] { System.Windows.Controls.ProgressBar.ValueProperty, Convert.ToDouble(perc) });

    Can someone help me understand why I encounter this error on the .NET 3.0 version? I am able to get this working on .NET 3.0 SP2, but I guess that service pack is not distributed independently and comes packaged only with .NET 3.5. My goal is to get rid of the dependency on .NET 3.5 and depend only on .NET 3.0. Any help would be appreciated. Thanks, Kapil

    Read the article

  • Change the source of an image by clicking a thumbnail using UpdatePanel

    - by Batu
    For example, I have this ImageViewer.ascx user control:

        <div class="ImageTumbnails">
            <asp:ListView ID="ImageList" runat="server" ItemPlaceholderID="ItemContainer">
                <LayoutTemplate>
                    <asp:PlaceHolder ID="ItemContainer" runat="server" />
                </LayoutTemplate>
                <ItemTemplate>
                    <asp:HyperLink runat="server" NavigateUrl='<%# Link.ToProductImage(Eval("ImageFile").ToString()) %>'>
                        <asp:Image runat="server" ImageUrl='<%# Link.ToThumbnail(Eval("ImageFile").ToString()) %>' />
                    </asp:HyperLink>
                </ItemTemplate>
            </asp:ListView>
        </div>
        <div class="ImageBig">
            <asp:Image ID="ProductImageBig" runat="server" ImageUrl="" />
        </div>

    When a thumbnail is clicked, it should change the source of ProductImageBig to its hyperlink target. How can I achieve this using an UpdatePanel? (Or will I be able to?)
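
    One way this is commonly handled (a sketch, untested against the control above; Thumb_Command and the panel ID are invented names): replace the HyperLink with a LinkButton carrying the image file in its CommandArgument, and put the whole control inside an UpdatePanel so the click posts back asynchronously:

        <asp:UpdatePanel ID="ImageViewerPanel" runat="server">
            <ContentTemplate>
                <!-- ListView as above, but the ItemTemplate becomes: -->
                <asp:LinkButton runat="server" OnCommand="Thumb_Command"
                    CommandArgument='<%# Eval("ImageFile") %>'>
                    <asp:Image runat="server" ImageUrl='<%# Link.ToThumbnail(Eval("ImageFile").ToString()) %>' />
                </asp:LinkButton>
                <!-- ... big image div as above ... -->
            </ContentTemplate>
        </asp:UpdatePanel>

    And in the code-behind:

        protected void Thumb_Command(object sender, CommandEventArgs e)
        {
            // Point the big image at the full-size version of the clicked thumbnail.
            ProductImageBig.ImageUrl = Link.ToProductImage((string)e.CommandArgument);
        }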

    Read the article

  • Oracle .NET Provider DLL hell

    - by Pablo Santa Cruz
    I am currently developing on a Win7-32bits computer. Everything works fine. It's a ASP.NET application. I was able to use Microsoft's Oracle deprecated .NET provider to connect to Oracle (using 32 bit instant client) and also ODP.NET. No problems at all. Application runs fine. The problem comes when I deploy it to IIS7 on Windows 2008 Server 64bit computer. I can't get Microsoft's deprecated .NET provider or ODP.NET to work easily. Is there a straightforward way to use a 32bit based ODP.NET or Microsoft's Oracle deprecated .NET provider in Windows 2008 Server 64bits? DLL hell here! Thanks.
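
    One thing worth ruling out first (a guess, since the question doesn't say how the application pool is configured): on 64-bit Windows, IIS7 application pools run as 64-bit processes by default, and a 64-bit worker process cannot load a 32-bit Oracle client. Switching the pool to 32-bit is a one-line change:

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true

    The pool name "MyAppPool" is a placeholder; the same switch appears in IIS Manager as "Enable 32-Bit Applications" under the pool's Advanced Settings.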

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Silverlight Cannot find XML data source

    - by Nick
    Hello. I am very new to Silverlight development. I understand that this is client-side technology and therefore the paradigm is different from that of conventional ASP.NET development. Having said that, I don't understand where my server-side code is deployed. I have a Silverlight / MVC application. I am trying to read an XML document from within my 'Models' folder. The following piece of code is executed from within a class that is in the same location as the XML document, 'Models'. The Load() results in a System.IO.FileNotFoundException:

        _xdoc = new XDocument();
        _xdoc = XDocument.Load(new Uri("videos.xml", UriKind.Relative).ToString());

    I noticed that when building the application, the XML document is not laid down in the same location as the web project's assembly. I assume this is specific to the fact that this is a Silverlight project. Can someone tell me what I'm missing?

    Read the article

  • How to manage and capture database changes across several developers?

    - by Matt Greer
    We have three developers and one tester all working against the same database. We change the schema of the database quite often, and every time we do it tends to have a ripple effect of headaches for everyone else. Are there good practices in place for .NET oriented development against MS SQL Server 2008 for managing this? I am thinking something similar to Rails Migrations and each dev and tester has their own local database. Or is that overkill? It'd at least be nice to have separate test and dev databases, but currently manually keeping two databases in sync is probably worse than our current predicament. LiquiBase seems promising, has anyone successfully used it in a similar environment? Or are there better approaches? We are using SQL Server 2008, VS 2008 and .NET 3.5 if that matters at all.

    Read the article

  • System.Net tracing - not able to view the request body on a single line

    - by amz
    Hi all, I am using System.Net tracing to log what is being sent over the wire. I am able to see the HTTP request body content, but it comes out as a hex dump split across separate lines. I want to see the body as a single line of plain text, not like this:

        System.Net Verbose: 0 : [118756] Data from ConnectStream#59274039::Read
        System.Net Verbose: 0 : [118756] 00000000 : 3C 73 3A 45 6E 76 65 6C-6F 70 65 20 78 6D 6C 6E : <s:Envelope xmln
        System.Net Verbose: 0 : [118756] 00000040 : 3C 73 3A 42 6F 64 79 3E-3C 53 75 62 6D 69 74 41 : <s:Body><SubmitA
        System.Net Verbose: 0 : [118756] 00000080 : 53 75 62 6D 69 74 41 70-70 6C 69 63 61 74 69 6F : SubmitApplicatio
        System.Net Verbose: 0 : [118756] 00000090 : 6E 52 65 73 75 6C 74 3E-74 72 75 65 3C 2F 53 75 : nResult>true</Su
        System.Net Verbose: 0 : [118756] Exiting ConnectStream#59274039::Read() - 232#232

    Read the article

  • Reject (Hard 404) ASP.NET MVC-style URLs

    - by James D
    Hi, ASP.NET MVC web app that exposes "friendly" URLs: http://somesite.com/friendlyurl ...which are rewritten (not redirected) to ASP.NET MVC-style URLs under the hood: http://somesite.com/Controller/Action The user never actually sees any ASP.NET MVC style URLS. If he requests one, we hard 404 it. ASP.NET MVC is (in this app) an implementation detail, not a fundamental interface. My question: how do you examine an arbitrary incoming URL and determine whether or not that URL matches a defined ASP.NET MVC path? For extra credit: how do you do it from inside an ASP.NET-style IHttpModule, where you're getting invoked upstream from the ASP.NET MVC runtime? Thanks!
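
    A sketch of the core check (the module and helper names are invented; the routing calls are standard System.Web.Routing): ask the route table whether the incoming URL resolves to a defined route, and hard-404 it unless it arrived as a rewritten friendly URL:

        using System.Web;
        using System.Web.Routing;

        public class RejectMvcUrlsModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // PostResolveRequestCache fires before the MVC handler is mapped.
                app.PostResolveRequestCache += (sender, e) =>
                {
                    var ctx = new HttpContextWrapper(app.Context);
                    RouteData routeData = RouteTable.Routes.GetRouteData(ctx);

                    // Matches a defined MVC route but was not rewritten from a friendly URL:
                    if (routeData != null && !WasRewrittenFromFriendlyUrl(app.Context))
                    {
                        app.Context.Response.StatusCode = 404;
                        app.CompleteRequest(); // hard 404, skip the MVC pipeline
                    }
                };
            }

            // Hypothetical: however the app flags requests it rewrote itself,
            // e.g. an item stashed in HttpContext.Items by the rewriting step.
            private static bool WasRewrittenFromFriendlyUrl(HttpContext ctx)
            {
                return ctx.Items.Contains("FriendlyUrlRewrite");
            }

            public void Dispose() { }
        }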

    Read the article
