Search Results


  • Graduated transition from Green - Yellow - Red

    - by GoldBishop
    I am having a mental block designing an algorithm to transition from green to red, as smoothly as possible, over a potentially unknown length of time. For testing purposes I will use 300 as my model timespan, but the design needs to be flexible enough to account for larger or even smaller timespans. I figured RGB would probably be the best color model to transition with, but I am open to others, assuming they are native to .NET (VB/C#). Currently I have: t = 300, x = t/2, z = 0, low = Green (0, 255, 0), mid = Yellow (255, 255, 0), high = Red (255, 0, 0). Lastly, as a somewhat optional piece, the low, mid, and high colors should be flexible as well. I assume there would need to be a check to make sure that someone isn't putting in low = (255,0,0), mid = (254,0,0), and high = (253,0,0); I will handle that anomaly myself once I settle on the best way to evaluate a color. Question: What would be the best approach to do the transition from low to mid and then from mid to high? What would be some potential pitfalls of implementing this type of design, if any?
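
    A minimal sketch of one workable approach, assuming linear interpolation in RGB split into a low-to-mid half and a mid-to-high half (the 300-unit timespan and the three colors are the ones above; everything else is illustrative):

        using System;
        using System.Drawing;

        static class ColorRamp
        {
            // Linear interpolation between two colors; t runs from 0.0 to 1.0.
            private static Color Lerp(Color a, Color b, double t)
            {
                return Color.FromArgb(
                    (int)Math.Round(a.R + (b.R - a.R) * t),
                    (int)Math.Round(a.G + (b.G - a.G) * t),
                    (int)Math.Round(a.B + (b.B - a.B) * t));
            }

            // Maps elapsed time onto the low -> mid -> high ramp.
            public static Color Sample(double elapsed, double total,
                                       Color low, Color mid, Color high)
            {
                double t = Math.Max(0.0, Math.Min(1.0, elapsed / total)); // clamp to [0, 1]
                return t < 0.5
                    ? Lerp(low, mid, t * 2.0)           // first half: low -> mid
                    : Lerp(mid, high, (t - 0.5) * 2.0); // second half: mid -> high
            }
        }

    The obvious edge cases to validate are a zero or negative total and the near-identical low/mid/high inputs mentioned above.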

  • How to troubleshoot SQL Server issues

    - by joe
    I have an ASP.NET application with a SQL Server database, and I am wondering if you can give your ideas on how to troubleshoot the following issue: I can insert/update/delete from any table, but I have one page that uses transactions to insert into different tables, and it fails even though the C# code is correct and very simple. I used SQL Profiler to see how my app interacts with the DB; since the app uses stored procedures, I can catch the exec statement and run it manually from SSMS, where it works fine - but the same stored procedure fails when called from the application! That leads me to think the issue comes from the user account and its settings. I am no expert in SQL Server and wonder if anyone can explain how to verify the required settings for a user account. Thanks.

    EDIT: in web.config, here is how I reference my connection:

        <connectionStrings>
          <add name="Conn"
               connectionString="Data Source=localhost;Initial Catalog=myDB;Persist Security Info=True;User ID=DbUser;Password=password1254_3"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    EDIT: I will try to describe the process here:
    1. I begin a transaction.
    2. I call a stored procedure to insert (which succeeds) and return the scope identity (to be used in the next step).
    3. I call another stored procedure to insert some more info plus the scope identity from step 2, which is a foreign key here.
    4. I get an error: foreign key violation.
    5. The transaction is rolled back, and the tables are empty again. Thanks.
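
    For what it's worth, one frequent cause of exactly this symptom is the two stored-procedure calls not sharing one connection and transaction, so the child insert cannot see the uncommitted parent row. A minimal sketch of the enlistment pattern in ADO.NET - the procedure and parameter names are hypothetical:

        using System.Data;
        using System.Data.SqlClient;

        using (var conn = new SqlConnection(connectionString)) // same string as web.config
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // Step 2: insert the parent and capture the new key (hypothetical proc).
                var insertParent = new SqlCommand("usp_InsertParent", conn, tx);
                insertParent.CommandType = CommandType.StoredProcedure;
                var newId = insertParent.Parameters.Add("@NewId", SqlDbType.Int);
                newId.Direction = ParameterDirection.Output;
                insertParent.ExecuteNonQuery();

                // Step 3: the child insert must use the SAME connection and transaction.
                var insertChild = new SqlCommand("usp_InsertChild", conn, tx);
                insertChild.CommandType = CommandType.StoredProcedure;
                insertChild.Parameters.AddWithValue("@ParentId", (int)newId.Value);
                insertChild.ExecuteNonQuery();

                tx.Commit();
            }
        }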

  • What can I do about Hack Attempts

    - by Matt
    I have an ASP.NET website hosted using UltiDev Web Server Pro. Every day I get a steady stream of errors generated by my application where page requests were made and denied. This is obviously someone or something probing my website for exploits. Here is an example log:

        28/08/2012 11:37:11 - File not Found: http://MyWebServer/phpmyadmin/index.php
        28/08/2012 11:37:11 - File not Found: http://MyWebServer/phpMyAdmin/index.php
        28/08/2012 11:37:12 - File not Found: http://MyWebServer/phpMyAdmin-2/index.php
        28/08/2012 11:37:12 - File not Found: http://MyWebServer/php-my-admin/index.php
        28/08/2012 11:37:13 - File not Found: http://MyWebServer/phpMyAdmin-2.2.3/index.php
        28/08/2012 11:37:13 - File not Found: http://MyWebServer/phpMyAdmin-2.2.6/index.php
        28/08/2012 11:37:14 - File not Found: http://MyWebServer/phpMyAdmin-2.5.1/index.php
        28/08/2012 11:37:14 - File not Found: http://MyWebServer/phpMyAdmin-2.5.4/index.php
        28/08/2012 11:37:15 - File not Found: http://MyWebServer/phpMyAdmin-2.5.5-rc1/index.php
        28/08/2012 11:37:15 - File not Found: http://MyWebServer/phpMyAdmin-2.5.5-rc2/index.php
        28/08/2012 11:37:15 - File not Found: http://MyWebServer/phpMyAdmin-2.5.5/index.php
        28/08/2012 11:37:16 - File not Found: http://MyWebServer/phpMyAdmin-2.5.5-pl1/index.php
        28/08/2012 11:37:16 - File not Found: http://MyWebServer/phpMyAdmin-2.5.6-rc1/index.php
        28/08/2012 11:37:17 - File not Found: http://MyWebServer/phpMyAdmin-2.5.6-rc2/index.php
        28/08/2012 11:37:18 - File not Found: http://MyWebServer/phpMyAdmin-2.5.6/index.php
        28/08/2012 11:37:18 - File not Found: http://MyWebServer/phpMyAdmin-2.5.7/index.php
        28/08/2012 11:37:19 - File not Found: http://MyWebServer/phpMyAdmin-2.5.7-pl1/index.php
        28/08/2012 13:52:07 - File not Found: http://MyWebServer/admin/pma/translators.html

    Is this normal? Is there anything I can do to protect myself against this?
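
    This kind of background scanning is normal for anything internet-facing; the probes here target phpMyAdmin, which does not exist on an ASP.NET server, so they fail harmlessly. If the site sat behind IIS 7+, one hedged option for cutting the log noise is request filtering in web.config (whether UltiDev Web Server Pro honors this section would need checking):

        <system.webServer>
          <security>
            <requestFiltering>
              <denyUrlSequences>
                <!-- Reject common probe paths before they reach the application. -->
                <add sequence="phpmyadmin" />
                <add sequence="php-my-admin" />
              </denyUrlSequences>
            </requestFiltering>
          </security>
        </system.webServer>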

  • Sending an email with attachment from server side

    - by SaravananArumugam
    I have to create a Word document in a specific format and send it as an attachment to some email addresses. I have a preview screen for the report, which on approval has to be sent by email. This is an ASP.NET MVC 3 application. I am left with a few options here. I am creating the preview using HTML, so I could convert that HTML into a .doc and send it, which would be the most direct solution - but capturing the Response object's output is proving to be a tough job. I also thought of using the mail-merge functionality of MS Word, where I would fill the placeholders of a doc template; the problem is that, conceptually, this isn't really a mail merge. Finally, I have seen someone suggest using the RTF format and replacing placeholders with database values. Which is the right thing to do? What's the best solution here? Is there any other option than the three listed above?
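
    Whichever document format wins out, the attachment-and-send step itself is small with System.Net.Mail; a minimal sketch (the addresses, file path, and SMTP host are placeholders):

        using System.Net.Mail;

        using (var message = new MailMessage("reports@example.com", "approver@example.com"))
        {
            message.Subject = "Approved report";
            message.Body = "Please find the report attached.";
            message.Attachments.Add(new Attachment(@"C:\Reports\report.doc", "application/msword"));

            using (var client = new SmtpClient("smtp.example.com"))
            {
                client.Send(message); // synchronous send; fine from a controller action
            }
        }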

  • Windows 7+ desktop apps - what's the best UI toolkit for a new project?

    - by Chris Adams
    I'm trying to make a decision for a new Windows desktop app: what to use for the UI. (This is a desktop app that needs compatibility with Windows 7. It won't be distributed through the Windows Store.) This application is going to be cross-platform. I intend to write the core in C++ and use each platform's native UI toolkit; I feel this is preferable to a cross-platform toolkit like Qt, as it keeps the native look and feel of each platform. On the Windows side, the UI situation isn't exactly clear. I get the feeling that Microsoft is slowly moving away from .NET as its preferred toolkit for desktop apps; indeed, the Getting Started chapter for Windows 7, as well as the rest of Microsoft's documentation, seems more oriented toward C++. I have a few options here:
    - C# with WPF - This seems like the best Microsoft has to offer for Windows 7 desktop apps, even if it isn't their "preferred" toolkit. I'd need to use P/Invoke to call my C++ code (see the sketch below).
    - C++ with Direct2D - This is what Microsoft used in one of their examples, but it feels too low-level. Part of the appeal of a higher-level UI toolkit is consistency with the native look and feel of the platform, so this would just feel strange.
    - C++ with a third-party UI toolkit, like Qt.
    There might be some other options I'm missing, which I'd love to hear about. So, if you were starting a new Windows 7+ desktop app today, what would you use?
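
    For what the WPF option implies, the C# to C++ boundary is one attribute per exported function; a minimal P/Invoke sketch (the DLL name and export are hypothetical):

        using System.Runtime.InteropServices;

        static class NativeCore
        {
            // Binds to an exported C-style function in the C++ core (hypothetical).
            [DllImport("AppCore.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern int ComputeResult(int input);
        }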

  • Minimizing data sent over a webservice call on expensive connection

    - by aceinthehole
    I am working on a system that has many remote laptops, all connected to the internet through cellular data connections. The application synchronizes periodically with a central database. The problem is that, due to factors outside our control, the cost of moving data across the cellular networks is spectacularly expensive. Currently we send a compressed XML file across the wire, where it is processed and various things are done with it (mainly stuffing it into a database). My first thought was to convert that XML document to JSON just prior to transmission and convert it back to XML just after receipt on the other end, getting some extra compression for free without changing much. Another thought was to test various other compression algorithms to determine the smallest output possible. I am not entirely sure, though, how much difference JSON vs. XML makes once the payload is compressed. I thought there must be resources available that address this problem from an information-theory perspective. Does anyone know of any such resources, or have suggestions on what direction to go in? For reference, this is developed on the Microsoft .NET stack on Windows.
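
    Compressed JSON and compressed XML often land closer together than the raw sizes suggest, because compression removes much of XML's tag redundancy, so it may be worth measuring on real payloads before re-plumbing anything. A minimal measurement sketch using GZipStream:

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static long CompressedSize(string payload)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(payload);
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(bytes, 0, bytes.Length);
                } // disposing flushes the compressed tail into 'output'
                return output.Length;
            }
        }

        // Usage: long xmlSize = CompressedSize(xmlDoc); long jsonSize = CompressedSize(jsonDoc);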

  • How do I mashup Google Maps with geolocated photos from one or more social networks?

    - by PureCognition
    I'm working on a proof of concept for a project, and I need to pin random photos to a Google Map. These photos can come from another social network, but they need to be safe-for-work. I've done some research so far: Google's Image Search API is deprecated, so one has to use the Custom Search API, but a lot of those results aren't photos, and I'm not sure how well it handles geolocation yet. Twitter seems a little better suited, except that people can post pictures of pretty much anything. I was also going to look into the APIs for other networks such as Flickr, Picasa, Pinterest, and Instagram. I know there are aggregate services out there that might have done some of this mashup work for me as well. If anyone out there has a handle on social APIs and knows where I should look for this type of solution, I would really appreciate the help. Also, in case the server-side implementation matters, I'm a .NET developer by experience.

  • If I implement a web-service, how do I respond to POST requests with JSON?

    - by Vova Stajilov
    I have to build a rather complex system for my diploma work. Logically it will consist of the following components:
    - a database
    - a web service
    - management with a web interface
    - a client iOS application that will consume the web service
    I decided to implement the first three components on .NET. First I will create the database based on the information load - this part is clear. Then I need a web service that returns data in JSON format for the iOS clients to consume - obvious and not that hard to implement; for this I will use WCF. Now I have a question: if I implement the web service, how will I be able to respond to POST requests with JSON? It probably involves WCF's JSON support or something related (see the sketch below). But I also need some web pages for the admin part - will that web application be able to consume my centralized web services as well, or should I develop it separately? I just want my web service to act like a set of controllers. There is a related question here, but it doesn't quite reflect my question.
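
    On the JSON question specifically: in WCF this part is declarative - WebInvoke on the operation, plus webHttpBinding with the webHttp behavior on the endpoint in config. A minimal contract sketch (the DTO and operation names are illustrative):

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class OrderDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Item { get; set; }
        }

        [ServiceContract]
        public interface IClientApi
        {
            // POST /orders with a JSON body; the reply is serialized as JSON too.
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "orders",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json)]
            OrderDto CreateOrder(OrderDto order);
        }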

  • What exactly should I pay attention to when choosing a Windows hosting provider?

    - by user850010
    This is my first time choosing a hosting company, for a web site made in ASP.NET MVC 3. I thought choosing a provider would be easy, since I found this page http://www.microsoft.com/web/Hosting/Home which contains hosting offers. Now, hours later, I am still searching. The reason is that as soon as I start investigating a particular company, something stands out that I do not like. Here are some examples of what I noticed when checking various companies in more detail:
    - The company's "about us" page is lacking information. A few had just a general description of what they do and nothing else, while some others gave a company name but no address.
    - Checking the company name in business-registry searches gave no results. Two of the companies I checked had both a company name and an address, but I was unable to find them in the registry.
    - Putting the company domain into Google gave mostly results from that domain or web hosting review sites, but not much else. I assume good companies should have search results from other sites too.
    - Low Alexa traffic rank. One company had a site that looked very professional, but its Alexa traffic rank was around 2 million.
    Are there any other factors I should pay attention to when choosing a hosting company? Do I have legitimate concerns, or am I just too paranoid?

    Read the article

  • REST Service and CQRS

    - by Paul Wade
    I am struggling with architecture on a new project. I am using the following patterns/technologies:
    - CQRS - anything going in goes through a command
    - REST - using Web API
    - MVC - ASP.NET MVC
    - Angular - building a SPA
    - NHibernate
    I believe this provides good separation and should keep a very complex domain from growing into a giant set of services that mix queries with other business logic. However, the REST services have become non-RESTful: they contain methods like "SearchByDate", "SearchByItem", etc. Service methods that execute commands are called with a "web" model class; a new command is built in the service and executed, and it feels like there is a lot of extra code. I expected this to turn out differently, but I wasn't around to keep things on track. Finally, my questions are these: I would have liked to see PUT Person (CreatePersonCommand), but then I realized that isn't RESTful either, is it? The PUT body should be a person entity, not a command. Can I make CQRS and REST services work together, or am I going about this all wrong? And how do I handle service methods that don't fit the REST model? I am not performing CRUD on the object but executing some business logic - e.g., I don't want the UI to be responsible for how a shipment is "unshipped"; I want the service layer to worry about that.
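
    On the "unship" question, one commonly used compromise is to expose the business operation itself as a resource and POST to it, with the command type staying behind the controller. A hedged Web API 2 sketch (attribute routing assumes MapHttpAttributeRoutes is enabled; the command bus and command class are hypothetical):

        using System.Web.Http;

        public class UnshipShipmentCommand
        {
            public int ShipmentId { get; private set; }
            public UnshipShipmentCommand(int shipmentId) { ShipmentId = shipmentId; }
        }

        public interface ICommandBus { void Send(object command); }

        public class ShipmentsController : ApiController
        {
            private readonly ICommandBus _bus;
            public ShipmentsController(ICommandBus bus) { _bus = bus; }

            // POST api/shipments/42/unship - the state transition is the resource.
            [HttpPost]
            [Route("api/shipments/{id}/unship")]
            public IHttpActionResult Unship(int id)
            {
                _bus.Send(new UnshipShipmentCommand(id));
                return Ok();
            }
        }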

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, plus a form containing a couple of dropdown lists. Using the generic repository, the parameter requirements grow quickly, and the controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepository parameters, plus a few others, to pull in the required data and the dropdown-list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request, and the DI container serves the same DbContext to all of the repositories; but from a code-standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how others handle such situations.
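
    One hedged way to collapse the parameter list is a unit-of-work facade that hands out repositories on demand, so the controller declares a single dependency; IRepository<T> below is the interface from the question, and the rest is illustrative:

        using System.Web.Mvc;

        // One implementation can wrap the request-scoped DbContext and
        // construct GenericRepository<T> instances lazily.
        public interface IUnitOfWork
        {
            IRepository<T> Repository<T>() where T : class;
        }

        public class HomeController : Controller
        {
            private readonly IUnitOfWork _uow;

            public HomeController(IUnitOfWork uow, ILogError logError, ICurrentUser currentUser)
            {
                _uow = uow;
            }

            public ActionResult Index()
            {
                var entityOnes = _uow.Repository<EntityOne>(); // fetch per need
                return View();
            }
        }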

  • How do I trust an off-site application?

    - by Pieter
    I need to implement something similar to a license server. It will be installed off-site at the customers' location and needs to communicate with other applications at the customers' site (the applications that use the licenses) and with an application running in our hosting center (for reporting and getting license information). My question is how to set this up in a way that lets me trust that:
    1. the license server is really our application and not something that just simulates it; and
    2. there is no "man in the middle" (i.e., a proxy or something that alters the traffic).
    The first thing I thought of was to use client certificates, and that would solve at least point 2. What I'm worried about, however, is that someone could decompile the license server (it is built in .NET), alter some logic, and recompile it. This would be hard to detect from both connecting applications. This doesn't have to be absolutely secure, since we have a limited number of customers with whom we have a trust relationship; I do, however, want to make it more difficult than a simple decompile/recompile of the license server. I primarily want to protect against an employee, or a nephew of the boss, trying to be smart.

  • ImageResizer - AzureReader2 with Azure SDK 2.2

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/10/29/imageresizer---azurereader2-with-azure-sdk-2.2.aspx

    So Azure SDK 2.2 came out recently, which means I can open my Azure projects in VS 2013 (yay), so I decided to upgrade my MVC4 project to MVC5. I followed this link on how to do the upgrade, and generally things went OK. Then I fire up my app and run into a binding issue: AzureReader2 was trying to use Microsoft.WindowsAzure.Storage, Version=2.1.0.0, but alas, it couldn't find it. I am not the only one (see Stack Overflow), and one solution is to run 'Add-BindingRedirect' from the Package Manager Console, but that didn't solve the problem for me, as it didn't pick up the Azure assemblies, so I resorted to adding the redirect manually. In short, to get AzureReader2 to work with Azure SDK 2.2, you need to add the following to your web.config:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <!-- Other bindings here! -->
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.WindowsAzure.Storage" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
              <bindingRedirect oldVersion="0.0.0.0-2.1.0.3" newVersion="2.1.0.3"/>
            </dependentAssembly>
          </assemblyBinding>
        </runtime>

  • Acceptable placement of the composition root using dependency injection and inversion of control containers

    - by Lumirris
    I've read in several sources, including Mark Seemann's 'Ploeh' blog, that the appropriate placement of the composition root of an IoC container is as close as possible to the application's entry point. In the .NET world, these applications are commonly thought of as web projects, WPF projects, console applications - things with a typical UI (read: not library projects). Does it really go against this sage advice to place the composition root at the entry point of a library project, when that project represents the logical entry point of a group of library projects, and the client of such a project group is someone else's work, whose author can't or won't add the composition root to their own project (a UI project, or yet another library project)? I'm familiar with Ninject as an IoC container implementation, but I imagine many others work the same way, in that they can scan an assembly for a module containing all the necessary binding configurations. This means I could put a binding module in its own library project to compile with my main library project's output, and if the client wanted to change the configuration (an unlikely scenario in my case), they could drop in a replacement DLL for the library containing the binding module. This seems to spare the most common clients from dealing with dependency injection and composition roots at all, and would make for the cleanest API for the library project group. Yet this seems to fly in the face of conventional wisdom on the issue. Is it just that most of the advice out there assumes the developer has some coordination with the development of the UI project(s) as well, rather than my case, in which I'm just developing libraries for others to use?
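
    For concreteness, the drop-in binding module described above is small in Ninject; a sketch with hypothetical service types:

        using Ninject;
        using Ninject.Modules;

        public interface IQuoteService { }
        public class DefaultQuoteService : IQuoteService { }

        // Compiled into its own assembly next to the main library, so a client
        // can swap the DLL to change the bindings without recompiling anything.
        public class LibraryBindings : NinjectModule
        {
            public override void Load()
            {
                Bind<IQuoteService>().To<DefaultQuoteService>();
            }
        }

        // A client that does own the composition root would simply write:
        // var kernel = new StandardKernel(new LibraryBindings());
        // var quotes = kernel.Get<IQuoteService>();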

  • About shared (static) members and their behavior

    - by Allende
    I just realized that I can access shared members from instances of classes (probably this is not correct, but it compiles and runs), and also learned that I can modify a shared member, then create a new instance and see the new value of the shared member through it. My question is: what happens to shared members - when do they come back to the "default" value from the class declaration? How dangerous is doing this? Is it totally bad? Is it valid in some cases? If you want to test my point, here is the code (a VB.NET console project) I used to test shared members. As you can see when you compile and run it, the shared member "x" of the class "hello" starts with the default string value "Default", but the code changes it at runtime, and an object created afterwards sees the new value of the shared member.

        Module Module1

            Public Class hello
                Public Shared x As String = "Default"

                Public Sub New()
                End Sub
            End Class

            Sub Main()
                Console.WriteLine("hello.x=" & hello.x)
                Dim obj As New hello()
                Console.WriteLine("obj.x=" & obj.x)
                obj.x = "Default shared member, modified in object"
                Console.WriteLine("obj.x=" & obj.x)
                hello.x = "Default shared member, modified in class"
                Console.WriteLine("hello.x=" & hello.x)
                Dim obj2 As New hello()
                Console.WriteLine("obj2.x=" & obj2.x)
                Console.ReadLine()
            End Sub

        End Module

    UPDATE: First of all, thanks to everyone; each answer gave me feedback. I suppose, out of respect, I should choose one as "the answer". I don't want to be offensive to anyone, so please don't take it badly if I didn't choose your answer.

  • Unit testing and Test Driven Development questions

    - by Theomax
    I'm working on an ASP.NET MVC website which performs relatively complex calculations as one of its functions. This functionality was developed some time ago (before I started working on the website), and defects have occurred whereby the calculations are not performed properly (basically these calculations are applied to each user who has certain flags on their record, etc.). Note: these defects have so far only been observed by users, and not yet investigated in code while debugging. My questions are:
    1. Because the existing unit tests all pass, and therefore do not indicate that the reported defects exist, does this suggest the original code is incorrect - i.e., either the requirements were wrong and were coded accordingly, or the code just wasn't written as intended?
    2. If I use the TDD approach, would I disregard the existing unit tests (since they don't show any problems with the calculation functionality), start by writing failing unit tests that prove these problems occur, and then add code to make them pass?
    3. If it's simply a bug that can be found while debugging the code, do the unit tests need to be updated, since they already pass?

  • Back button after doing posts on the same page

    - by user441521
    I have three pages on my site. The first page allows you to select a bunch of options. Those options get sent to the second page, which displays them along with some data about them. From there I can click a link on one of the options to get to page 3. On page 3 I can create, edit, and delete, all on the same page, with reloads coming back to page 3. I want a "back" button on page 3 that goes back to page 2 while retaining the options from the original page-1 request. Page 1 has a bunch of check boxes whose values are passed to page 2 as arrays to the controller. My thought is that I have to pass these arrays (which I converted to lists) on to page 3 (even though page 3 doesn't directly need them) so that page 3 can use them in the back link to recreate the values page 1 originally sent to page 2. I'm using ASP.NET MVC, and when I pass the converted list, the URL seems to contain the type name instead of the values: "types=System.Collections.Generic.List" (where types is a List). Is this approach right, or are there other options for making a "back" button that goes back to page 2? It's sort of a pain to pass information to page 3 that isn't really relevant to page 3 except for the back button.
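
    The "types=System.Collections.Generic.List" text appears because the list is being serialized with ToString(); the default MVC model binder instead expects the key repeated once per value, e.g. types=A&types=B. A minimal helper sketch (names are illustrative):

        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        static class QueryStringHelper
        {
            // Emits "types=A&types=B", which MVC binds back into a List<string>.
            public static string ToQueryString(string key, IEnumerable<string> values)
            {
                var pairs = values.Select(v => key + "=" + HttpUtility.UrlEncode(v));
                return string.Join("&", pairs.ToArray());
            }
        }

        // Usage in the page-3 view, building the back link:
        // var backUrl = "/Summary?" + QueryStringHelper.ToQueryString("types", Model.Types);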

  • Thick-client migration to a web-based application

    - by user1151597
    This question is about application design and the technology I should consider during a migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200 ms rate) sent from the device to the application. The request to start the cyclic data is sent only once, at the beginning; the application then receives data from the device until it sends a stop request. Now this same application needs to be deployed over the web, on an intranet. The application is composed of a business-logic layer and a communication layer which talks to the device through UDP ports. I am trying to find a solution that allows a single instance of the application on the server, so that the device thinks it is connected as usual, while I manage the clients from the business-logic layer. I want to reuse the code of the business-logic and communication layers as much as possible. Please let me know whether web services, WCF, or something else is what I should consider for this migration. Thanks in advance.
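
    If WCF turns out to be the vehicle, one hedged shape is a singleton service that owns the single device session while all web clients read from it; every type name below is a hypothetical stand-in for the existing layers:

        using System.ServiceModel;

        [ServiceContract]
        public interface IDeviceGateway
        {
            [OperationContract]
            double GetLatestReading();
        }

        // One shared instance for all clients: it holds the single UDP session
        // with the device, just as the WinForms app did.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class DeviceGatewayService : IDeviceGateway
        {
            private readonly object _sync = new object();
            private double _latestReading;

            // Called by the existing communication layer on each 200 ms update.
            public void OnReadingReceived(double value)
            {
                lock (_sync) { _latestReading = value; }
            }

            public double GetLatestReading()
            {
                lock (_sync) { return _latestReading; }
            }
        }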

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. This project seems as if it will primarily involve reading data from a database and making graphs out of the data, which will require custom queries built from whatever the client is looking at. I think it is going to be what this guy calls an ad-hoc query tool. My plan is to make it a database-driven website so I can utilize the jQuery dynamic-viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that its functional nature makes it a good language for asynchronous work, and I read about how it can be used with LINQ to SQL and how easy it is to make queries without actually embedding the query language. I understand the concept of the MVC design pattern, but I don't understand what people mean about C# being the front-end and F# being the back-end. Can someone clarify this to me? Also, what are your thoughts about doing the project this way? Any comments and thoughts are greatly appreciated; I feel that learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database: F# would be the model part of the design pattern, C# the controller, and HTML, JavaScript, and jQuery the view. Is that right?

  • Usual Suspects: Typical 3rd Party Entities in E-Commerce [closed]

    - by zharvey
    I am doing some requirements analysis for a web app that I'd like to build (Ruby/Java developer here). This web app would have a store front and shopping cart, and would need to be totally compliant with all e-commerce best practices. It's amazing how much non-technical information comes up when you search for phrases like "how does e-commerce work", but very little in the way of technical details, and as such I'm having extreme frustration finding answers to what I consider pretty straightforward questions. I came here because I believe this question is not off-topic; if it is, please leave a comment as to why it does not belong here and I will happily remove it myself (upvotes if your comment can point me to the correct place for this question!). So then:
    1. What third parties will I need to work with to have a modern, compliant e-commerce site? So far I can account for a payment-gateway provider like Authorize.net and an SSL-certificate provider like Trustwave. Any others?
    2. What other standards besides PCI compliance will I be held to (besides governing laws, of course)?
    3. Vulnerability scans: PCI compliance requires quarterly scans. If I'm a "Level 4" (low-volume) merchant, does that still apply to me? Regardless, my back-end architecture is quite large, with web servers, app servers, a database, message brokers, and more. Do all of these servers need to be scanned quarterly? If not, which ones do?
    I usually hate to ask micro-questions inside one large one, but these are so closely related that asking them separately felt like spamming the site with too many petty questions. Thanks in advance!

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service-oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. This quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information; in other words, a Quote object is made up of many other objects. So it seems like it would make sense to build a "quote retrieval service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code):

        Function GetQuote(String ID) Returns Quote

    So far so good. Now, when this service is consumed, to keep things decoupled we create essentially a duplicate of the Quote object and map from the quote service's version of the Quote into the consumer's version. In many cases, these classes have exactly the same properties. So, if the quote service is consumed by five other applications, we have six definitions of what a "Quote" is: one for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our method of SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?
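
    One frequently used middle ground - sketched here, not prescribed - is to define the wire-level DTOs once in a shared contracts assembly that the service and every consumer reference, mapping into a consumer-local model only where a consumer genuinely diverges (all names illustrative):

        // Company.Contracts.dll - referenced by the quote service and all five
        // consumers, so the wire shape of a Quote is defined exactly once.
        namespace Company.Contracts
        {
            public class QuoteDto
            {
                public string Id { get; set; }
                public PartyDto Shipper { get; set; }
                public PartyDto Consignee { get; set; }
            }

            public class PartyDto
            {
                public string Name { get; set; }
                public string Phone { get; set; }
                public string Email { get; set; }
            }
        }

    The trade-off is that every consumer is now coupled to the contract assembly's release cycle, which is exactly the coupling the per-consumer copies were avoiding.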

  • What level of detail to use in an interface member's description?

    - by famousgarkin
    I am extracting interfaces from some classes in .NET, and I am not completely sure what level of detail to use in describing some of the interface members (properties, methods). An example:

        interface ISomeInterface
        {
            /// <summary>
            /// Checks if the object is checked out.
            /// </summary>
            /// <returns>
            /// Returns true if the object is checked out, or if the object locking is not enabled,
            /// otherwise returns false.
            /// </returns>
            bool IsObjectCheckedOut();
        }

        class SomeImplementation : ISomeInterface
        {
            public bool IsObjectCheckedOut()
            {
                // An implementation of the method that returns true if the object is checked out,
                // or if the object locking is not enabled
            }
        }

    The part in question is the <returns>...</returns> section of the IsObjectCheckedOut description in the interface. Is it OK to include such detail about the return value in the interface itself, given that code working with the interface should know exactly what the method will do? All the current implementations of the method do just that. But is it OK to constrain other or future implementations with the description this way? Or should this not be included in the interface description, as there is no way to actually ensure that other or future implementations will do exactly this? Is it better to be as general as possible about the interface in such circumstances? I am currently inclined toward the latter option.

  • Connection String Incorrectly Formatted [migrated]

    - by Randy E
    I'm running into some issues. Every time I launch in debug mode and hit the "Create User" button, an exception is thrown because the connection string is either in the wrong syntax or just wrong. I'm using Visual Studio 2010; the project is .NET 3.5 with SQL 2008 Express. This is just a personal project that I'm testing some things with; I know this generally isn't the recommended format, but for a small personal project I don't see the point in doing it any other way. The things I'm testing actually work :)

        Data Source=.\SQLEXPRESS;AttachDbFilename=|Data Directory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True

    That doesn't work.

        Data Source=.\SQLEXPRESS;AttachDbFilename="|Data Directory|ASPAppDatabase.mdf";Integrated Security=True;User Instance=True

    Neither does that.

        Data Source=.\SQLEXPRESS;AttachDbFilename='|Data Directory|ASPAppDatabase.mdf';Integrated Security=True;User Instance=True

    And again, neither does that :/ However, if I use the full path to the local DB file instead of "|Data Directory|", it works just fine: no exception is thrown, and I can read and write as I need. And just to cover all my bases, here is the button click event that creates the user:

        protected void btnAddUser_Click(object sender, EventArgs e)
        {
            Membership.CreateUser(txtUserName.Text, txtPassword.Text);
            btnLogin_Click(sender, e);
        }

    So, what am I missing in regards to |Data Directory|? Here is an example of the above not working correctly, taken directly from web.config:

        <add name="ASPAppDatabaseConnection" connectionString='Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|ASPAppDatabase.mdf;Integrated Security=True;User Instance=True'/>
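
    One detail worth noting when comparing the attempts above: the substitution token is |DataDirectory| with no space, and its value can also be pinned explicitly at application startup; a minimal sketch:

        using System;
        using System.IO;

        // Point |DataDirectory| at App_Data explicitly, so the connection
        // string resolves identically in and out of the debugger.
        AppDomain.CurrentDomain.SetData("DataDirectory",
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "App_Data"));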

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am in the process of enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some that are more valid. I recently came across an issue that may be valid, but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:
    1. Use a stream object to put the text file in memory as a string.
    2. Split the string into an array on spaces while ignoring punctuation.
    3. Use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count.
    I got this answer wrong for two reasons:
    1. Streaming an entire text file into memory could be disastrous - what if it was an entire encyclopedia? Instead I should stream one block at a time and build a hash table.
    2. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't already exist, then incremented its count.
    The first reason seems, well, reasonable. But the second gives me more pause. I thought one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables, but that, under the veil, it is still the same implementation. Question: aside from a few additional processing cycles to call abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data-iteration task than a lower-level approach (such as building a hash table) would?
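
    For reference, a sketch of the streaming, dictionary-based version the interviewer described (file name illustrative). Note that LINQ's GroupBy also builds a hash-based lookup internally, so the difference is mostly the intermediate allocations rather than the algorithm:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static Dictionary<string, int> CountWords(string path)
        {
            var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
            char[] separators = { ' ', '.', ',', ';', ':', '!', '?' };

            // File.ReadLines streams one line at a time instead of loading the file.
            foreach (string line in File.ReadLines(path))
            {
                foreach (string word in line.Split(separators, StringSplitOptions.RemoveEmptyEntries))
                {
                    int n;
                    counts.TryGetValue(word, out n);
                    counts[word] = n + 1;
                }
            }
            return counts;
        }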

  • Architecture: am I doing things right?

    - by Jeremy D
    I'm trying to use a "classic" layered architecture with .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy:
    - inconsistent naming
    - unneeded views (views referencing other views, select * views, etc.)
    - aggregated columns
    - potatoes and carrots in the same table
    - etc.
    So I ended up fully isolating my database structure from my domain model: the EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges, and I'm starting to ask myself if I'm doing things right. My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps growing, and the classes it contains are starting to accumulate a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager-loading relationship properties):

        DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends")
        // To...
        IFooContext.Bars.Include(...).Include(...).Where(...)

    Some frameworks also break the isolation (DevExpress grids need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself:
    - Is the isolation of the EF auto-generated entities an unneeded cost? Should I allow frameworks to hit IQueryable directly - a slow slope to hell? (It's really hard to isolate the DevExpress framework; any successful experience?)
    - Is the high volatility of my domain model normal? Did you have similar difficulties?
    Any advice based on experience?
