Search Results

Search found 186 results on 8 pages for 'subsonic simplerepository'.

Page 8/8

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:

    1) An ORM (if so, which one?)
    2) Linq2Sql
    3) Stored procedures
    4) Parameterized queries

    I really need a solution that is dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic), and I generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would handle the situation I've described above.
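
    Since this search is about SubSonic's SimpleRepository, it is worth noting that it targets exactly this kind of churn: pointed at a connection string with migrations enabled, it creates and alters tables to match your classes as they change. A minimal sketch, assuming SubSonic 3's SimpleRepository API; the Product class and connection string name are made up for illustration:

        // Hypothetical POCO - SimpleRepository needs no base class or attributes.
        public class Product
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        // RunMigrations asks SubSonic to create or alter the backing table
        // at runtime so that it matches the class definition.
        var repo = new SubSonic.Repository.SimpleRepository(
            "MyConnectionString",
            SubSonic.Repository.SimpleRepositoryOptions.RunMigrations);

        repo.Add(new Product { Name = "Widget", Price = 9.99m });
        var widget = repo.Single<Product>(p => p.Name == "Widget");

    Adding a property to Product and re-running would then be enough to get the new column, which is the kind of flexibility asked for here, at the cost of letting the tool own the schema.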

    Read the article

  • How to set up Windows 7 Professional as a NAS

    - by Enyalius
    I searched and didn't find any answers, so please forgive me if this is a repeat. Anyway, I have an older computer that I'm using as an HTPC, and I was hoping I could use it as a NAS/multimedia server as well. My primary uses would include accessing content on my PS3 (same LAN), accessing content from other computers on my home network, and (if I can) accessing content from my Android phone over the internet. I have used SubSonic to stream music to my Android phone and other computers before, but I would really like to find a way to do this natively if possible. I know I can buy an external hard disk case that plugs into the USB port of my router, or get a Drobo or another network storage solution, but I would rather not spend the money (especially considering that I already have a computer that should be able to do the job). Hardware involved:

    - Apple AirPort Extreme base station router (most recent revision)
    - Home theater PC: Core 2 Duo @ 2.4GHz, 8GB DDR2 RAM, ~3.5TB hard drive space
    - Sony PlayStation 3 Slim, 120GB
    - HTC Thunderbolt (I have 4G coverage), rooted and running Android 2.2.1
    - Various Apple laptops
    - Various Windows 7 desktops/laptops

    Thanks in advance! Note: I have looked at open source NAS software, but I would like to preserve the Windows Media Center functionality in Windows 7, so other NAS software is not currently an option for me.

    Read the article

  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced on a recent project was with the way our constraints were being managed. We were using SubSonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and it can script your data for you too.

    The problem came when we used the generate-scripts feature to migrate the database onto a test instance: it turns out that the DDL scripts it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn't as expected.

    A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data that violates the constraint can be inserted because the constraint is not being enforced; this is useful for bulk-load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? This refers to the SQL Server query optimizer: if the optimizer trusts a constraint, it can assume the constraint holds when building a query plan, so the query can execute much faster.

    Here is an example based on an article on TechNet. We create two tables with a foreign key constraint between them, add a single row to each, and then query the tables:

        DROP TABLE t2
        DROP TABLE t1
        GO

        CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
        CREATE TABLE t2(col1 int NOT NULL)

        ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
        REFERENCES t1(col1)

        INSERT INTO t1 VALUES(1)
        INSERT INTO t2 VALUES(1)
        GO

        SELECT COUNT(*) FROM t2
        WHERE EXISTS
            (SELECT *
             FROM t1
             WHERE t1.col1 = t2.col1)

    This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by querying the 'is_disabled' and 'is_not_trusted' properties:

        SELECT name, is_disabled, is_not_trusted FROM sys.foreign_keys

    This shows 0 for both properties. We can disable the constraint using this SQL:

        ALTER TABLE t2 NOCHECK CONSTRAINT fk_t2_t1

    Querying the properties again shows that the constraint is now both disabled and not trusted. The constraint won't be enforced, and we could insert data into t2 that doesn't match the data in t1 – but we don't want to do this, so we re-enable the constraint:

        ALTER TABLE t2 CHECK CONSTRAINT fk_t2_t1

    Querying the properties once more shows that the constraint is enabled, but still not trusted. This means the optimizer can no longer assume the constraint holds when building query plans, so queries over these tables will run slower – definitely not what we want – so we need to make the constraint trusted by the optimizer again.

    First we should check that the constraint wasn't violated while it was disabled, which we can do by running DBCC:

        DBCC CHECKCONSTRAINTS (t2)

    If DBCC completes without reporting any violations, we can simply execute the following SQL:

        ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

    At first glance the repeated keyword looks like it must be a typo, but it is the correct syntax: WITH CHECK tells SQL Server to validate the existing rows, and CHECK CONSTRAINT re-enables the constraint, after which it is trusted again. To fix our specific problem, we created a script that did this for all constraints on our tables, using the following syntax:

        ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
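
    For reference, here is one way such a clean-up could be automated from .NET rather than run by hand. This is only a minimal sketch, not the script from the project above: it assumes System.Data.SqlClient, the connection string is a placeholder, and it re-checks every table that has an enabled but untrusted foreign key:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        class TrustAllConstraints
        {
            static void Main()
            {
                // Placeholder connection string - point it at your own database.
                const string connStr = "Server=.;Database=MyDatabase;Integrated Security=true";

                var statements = new List<string>();
                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // Find tables that have an enabled foreign key the
                    // optimizer does not currently trust.
                    const string findSql =
                        @"SELECT DISTINCT
                                 OBJECT_SCHEMA_NAME(parent_object_id) AS SchemaName,
                                 OBJECT_NAME(parent_object_id)        AS TableName
                          FROM sys.foreign_keys
                          WHERE is_not_trusted = 1 AND is_disabled = 0";

                    using (var findCmd = new SqlCommand(findSql, conn))
                    using (var reader = findCmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // WITH CHECK CHECK re-validates the existing rows and
                            // marks every constraint on the table as trusted again.
                            statements.Add(string.Format(
                                "ALTER TABLE [{0}].[{1}] WITH CHECK CHECK CONSTRAINT ALL",
                                reader.GetString(0), reader.GetString(1)));
                        }
                    }

                    foreach (string sql in statements)
                    {
                        Console.WriteLine(sql);
                        using (var fixCmd = new SqlCommand(sql, conn))
                        {
                            fixCmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }

    Generating the ALTER statements from sys.foreign_keys means new tables are picked up automatically instead of having to be listed by hand.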

    Read the article

  • Most useful free .NET libraries?

    - by Binoj Antony
    I have used a lot of free .NET libraries, some from Microsoft itself! Which ones have you found the most useful?

    Dependency Injection/Inversion of Control:
    - Unity Framework - Microsoft
    - StructureMap - Jeremy Miller
    - Castle Windsor
    - Ninject
    - Spring Framework
    - Autofac
    - Managed Extensibility Framework

    Logging:
    - Logging Application Block - Microsoft
    - Log4Net - Apache
    - Error Logging Modules and Handlers (ELMAH)
    - NLog

    Compression:
    - SharpZipLib
    - DotNetZip
    - YUI Compressor (CSS and JS compression/minification)
    - AjaxMinifier (in other downloads; JS compression, also includes an MSBuild task)

    Ajax:
    - Ajax Control Toolkit - Microsoft
    - AJAXNet Pro

    Data Mappers:
    - XmlDataMapper
    - AutoMapper

    ORM:
    - NHibernate
    - Castle ActiveRecord
    - SubSonic
    - XmlDataMapper

    Charting/Graphics:
    - Microsoft Chart Controls for ASP.NET 3.5 SP1
    - Microsoft Chart Controls for WinForms
    - ZedGraph Charting
    - NPlot - charting for ASP.NET and WinForms

    PDF Creators/Generators:
    - PDFsharp
    - iTextSharp

    Unit Testing/Mocking:
    - NUnit
    - Rhino Mocks
    - Moq
    - TypeMock.Net
    - xUnit.net
    - mbUnit
    - Machine.Specifications

    Automated Web Testing:
    - Selenium
    - WatiN

    URL Rewriting:
    - url rewriter
    - UrlRewriting.Net
    - Url Rewriter and Reverse Proxy - Managed Fusion

    Controls:
    - Krypton - free WinForms controls
    - SourceGrid - a grid control
    - DevExpress - free controls

    Unclassified:
    - CSLA Framework - business objects framework
    - AForge.net - AI, computer vision, genetic algorithms, machine learning
    - Enterprise Library 4.1 - logging, exception management, validation, policy injection
    - FileHelpers library
    - C5 Collections - collections for .NET
    - Quartz.NET - enterprise job scheduler for the .NET platform
    - MiscUtil - utilities by Jon Skeet
    - Lucene.Net - text indexing and searching
    - Json.NET - LINQ over JSON
    - Flee - expression evaluator
    - PostSharp - AOP
    - IKVM - brings the extensive world of Java libraries to .NET

    Title of the question taken from here. [EDIT] Please provide links to these free libraries as well. Once we have a huge list, it can be arranged into categories! Please do not mention .NET applications/EXEs here.

    Read the article

  • Problem with JSONResult

    - by vikitor
    Hi, I'm still a newbie at this, so I'll try to explain what I'm doing. Basically, I want to load a dropdownlist depending on the value of a previous one, and I want it to load the data and appear when the other one is changed. This is the code I've written in my controller:

        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            IList<TaxClass> lists = null;
            int _phylumID = int.Parse(phylumID);
            int _kingdom = int.Parse(kingdom);
            lists = _taxon.getClassByPhylumSearch(_phylumID, _kingdom);
            return Json(lists.Count);
        }

    And this is how I call the method from the JavaScript function:

        function loadClasses(_phylum) {
            var phylum = _phylum.value;
            $.getJSON("/Suspension/GetClassesSearch/",
                { ajax: true, phylumID: phylum, kingdom: kingdom },
                function(data) {
                    alert(data);
                    alert('no fallo');
                    document.getElementById("pClass").style.display = "block";
                    document.getElementById("sClass").options[0] = new Option("-select-", "0", true, true);
                    //for (i = 0; i < data.length; i++) {
                    //    $('#sClass').addOption(data[i].classID, data[i].className);
                    //}
                });
        }

    Like this it works: I pass the function the number of classes within the selected phylum, and it displays the pClass element. The problem comes when I try to populate the list with the data itself (the objects retrieved from the database): whenever the database returns records, changing return Json(lists.Count) to return Json(lists) gives me the same error:

        A circular reference was detected while serializing an object of type 'SubSonic.Schema.DatabaseColumn'.

    I've been going round and round debugging and testing, but I can't make it work, and it is supposed to be a simple thing, so I must be missing something. I've commented out the for loop because I'm not quite sure that's the way to access the data - I haven't been able to make it work when records are found. Can anyone help me? Thanks in advance, Victor
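
    For what it's worth, the usual way around this serializer error is to project the entities onto plain objects before calling Json(), so the serializer never touches SubSonic's schema metadata. A hedged sketch - the C# property names ClassID and ClassName are guesses taken from the commented-out JavaScript loop:

        // Requires: using System.Linq;
        public ActionResult GetClassesSearch(bool ajax, string phylumID, string kingdom)
        {
            IList<TaxClass> lists = _taxon.getClassByPhylumSearch(
                int.Parse(phylumID), int.Parse(kingdom));

            // Anonymous types carry no references back to SubSonic's schema
            // objects, so there is nothing circular left to serialize.
            var payload = lists
                .Select(c => new { classID = c.ClassID, className = c.ClassName })
                .ToList();
            return Json(payload);
        }

    The projected names are camelCase on purpose, so the loop can read data[i].classID and data[i].className exactly as written.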

    Read the article

  • When using Data Annotations with MVC, Pros and Cons of using an interface vs. a MetadataType

    - by SkippyFire
    If you read this article on Validation with the Data Annotation Validators, it shows that you can use the MetadataType attribute to add validation attributes to properties on partial classes. You use this when working with ORMs like LINQ to SQL, Entity Framework, or SubSonic. Then you can use the "automagic" client- and server-side validation. It plays very nicely with MVC. However, a colleague of mine used an interface to accomplish exactly the same result. It looks almost the same, and functionally it accomplishes the same thing. So instead of doing this:

        [MetadataType(typeof(MovieMetaData))]
        public partial class Movie
        {
        }

        public class MovieMetaData
        {
            [Required]
            public object Title { get; set; }

            [Required]
            [StringLength(5)]
            public object Director { get; set; }

            [DisplayName("Date Released")]
            [Required]
            public object DateReleased { get; set; }
        }

    he did this:

        public partial class Movie : IMovie
        {
        }

        public interface IMovie
        {
            [Required]
            object Title { get; set; }

            [Required]
            [StringLength(5)]
            object Director { get; set; }

            [DisplayName("Date Released")]
            [Required]
            object DateReleased { get; set; }
        }

    So my question is: when does this difference actually matter? My thought is that interfaces tend to be more "reusable", and that making one for just a single class doesn't make much sense. You could also argue that you could design your classes and interfaces in a way that lets you use interfaces on multiple objects, but I feel like that is trying to fit your models into something else, when they should really stand on their own. What do you think?

    Read the article

  • Should I store generated code in source control

    - by Ron Harlev
    This is a debate I'm taking part in, and I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important for the question). The generation is set as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now some people claim that having these classes saved in source control could cause confusion if the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments?

    UPDATE: I asked this question because I really believed there is one definitive answer. Looking at all the responses, I can say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below could provide a very good guideline to the types of questions you should be asking yourself when deciding on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

    Read the article

  • Inserting "null" (literally) into a stored procedure parameter

    - by Nazadus
    I'm trying to insert the word "Null" (literally) into a parameter for a stored procedure. For some reason SQL Server seems to think I mean NULL and not "Null". If I do a check like:

        IF @LastName IS NULL
            -- test: do stuff

    then it bypasses that, because the parameter isn't null. But when I do:

        INSERT INTO Person (<params>)
        VALUES (<stuff here>, @LastName, <more stuff here>);  -- @LastName is 'Null'

    it bombs out saying that LastName doesn't accept nulls. I would seriously hate to have this last name, but someone does... and it's crashing the application. We're using SubSonic 2.0 (yeah, it's fairly old, but upgrading is painful) as our DAL, and stepping through it, I see it does create the parameters properly (from what I can tell). I've tried creating a temp table to see if I could replicate the problem manually, but it seems to work just fine:

        DECLARE @myval VARCHAR(50)
        SET @myval = 'Null'

        CREATE TABLE #mytable (name VARCHAR(50))
        INSERT INTO #mytable VALUES (@myval)
        SELECT * FROM #mytable
        DROP TABLE #mytable

    Any thoughts on how I can fix this?
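
    For comparison, a plain ADO.NET parameterized insert keeps the two cases distinct: only a true null reference becomes DBNull, while the surname "Null" goes through as an ordinary four-character string. A minimal sketch, with the table and column names assumed from the question and a placeholder connection string:

        using System;
        using System.Data.SqlClient;

        static class PersonWriter
        {
            // Inserts a surname, preserving the difference between a
            // missing value and the literal string "Null".
            public static void InsertLastName(string lastName)
            {
                // Placeholder connection string.
                const string connStr = "Server=.;Database=MyDatabase;Integrated Security=true";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Person (LastName) VALUES (@LastName)", conn))
                {
                    // Only a genuine null reference becomes DBNull; the
                    // string "Null" passes through untouched.
                    cmd.Parameters.AddWithValue("@LastName", (object)lastName ?? DBNull.Value);

                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    If this works against the same table, the string-to-NULL translation is happening somewhere inside the DAL rather than in SQL Server itself.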

    Read the article

  • SQL Server lock/hang issue

    - by mattwoberts
    Hi, I'm using SQL Server 2008 on Windows Server 2008 R2, all SP'd up. I'm getting occasional issues with SQL Server hanging, with CPU usage at 100%, on our live server. When this happens, all the wait time on SQL Server is attributed to SOS_SCHEDULER_YIELD. Here is the stored proc that causes the hang; I've added the WITH (NOLOCK) hints in an attempt to fix what seems to be a locking issue:

        ALTER PROCEDURE [dbo].[MostPopularRead]
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT c.ForeignId,
                   ct.ContentSource AS ContentSource,
                   SUM(ch.HitCount * hw.Weight) AS Popularity,
                   (SUM(ch.HitCount * hw.Weight) * 100) / @Total AS Percent,
                   @Total AS TotalHits
            FROM ContentHit ch WITH (NOLOCK)
            JOIN [Content] c WITH (NOLOCK) ON ch.ContentId = c.ContentId
            JOIN HitWeight hw WITH (NOLOCK) ON ch.HitWeightId = hw.HitWeightId
            JOIN ContentType ct WITH (NOLOCK) ON c.ContentTypeId = ct.ContentTypeId
            WHERE ch.CreatedDate BETWEEN @Then AND @Now
            GROUP BY c.ForeignId, ct.ContentSource
            ORDER BY SUM(ch.HitCount * hw.HitWeightMultiplier) DESC
        END

    The stored proc reads from the ContentHit table, which tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute), so it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit records to the ContentHit table. It's pretty trivial: it builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert:

        BEGIN TRAN
            INSERT INTO [ContentHit] (ContentId, HitCount, HitWeightId, ContentHitComment)
            VALUES (@ContentId, ISNULL(@HitCount, 1), ISNULL(@HitWeightId, 1), @ContentHitComment)
        COMMIT TRAN

    The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select. When I profile the issue, I see the stored proc executes for exactly 30 seconds, and then the SQL timeout exception occurs. If it makes a difference, the web application using it is ASP.NET, and I'm using SubSonic (3) to execute these stored procs. Can someone please advise how best to solve this problem? I don't care about reading dirty data... Thanks

    Read the article

  • NHibernate: null index column for collection error

    - by Quintin Par
    I am working on a SubSonic-to-NHibernate migration (I can't change the schema), and while creating the mapping I came across this error:

        null index column for collection: Company.Core.CompanyUser.Addresses

    My mapping from the User side is:

        mapping.HasMany(x => x.Addresses).AsList().KeyColumn("user_id").Cascade.All().Inverse();

    which produces this XML:

        <list cascade="all" inverse="true" name="Addresses">
          <key>
            <column name="user_id" />
          </key>
          <index />
          <one-to-many class="Company.Core.CompanyAddress, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        </list>

    On the Address side it is:

        mapping.CompositeId().KeyReference(x => x.User, "user_id").KeyProperty(x => x.Type);

    which produces:

        <composite-id mapped="false" unsaved-value="undefined">
          <key-property name="Type" type="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <column name="Type" />
          </key-property>
          <key-many-to-one name="User" class="Company.Core.CompanyUser, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <column name="user_id" />
          </key-many-to-one>
        </composite-id>

    When I try to load the collection as user.Addresses, I get the null-index exception. How do I fix this error?
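
    For what it's worth, NHibernate raises this error when a collection is mapped as a <list> but the rows have nothing in the index column that a <list> needs to record each element's position. If the schema really has no such column, mapping the collection as a bag avoids the requirement altogether - a sketch of the Fluent NHibernate change, using the same property names as above:

        // A <bag> is an unordered collection and, unlike <list>,
        // does not require an index column in the addresses table.
        mapping.HasMany(x => x.Addresses)
               .AsBag()
               .KeyColumn("user_id")
               .Cascade.All()
               .Inverse();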

    Read the article

  • Application that depends heavily on stored procedures

    - by PieterG
    We currently have an application that depends largely on stored procedures, with heavy use of temp tables. It's an extremely large application. Facing this situation, I would like to use Entity Framework or Linq2Sql for a rewrite. I might also consider Fluent NHibernate or SubSonic, as I've used them quite extensively in the past. I've had problems with Linq2Sql generating the return types for the stored procedures because of the temp-table usage, and I think it's cumbersome to go and change all the stored procedures from temp tables to in-memory tables. Considering the two choices I want to make, which of the two is the better route to go, and why? If my choices are extremely idiotic, please provide alternatives.

    Edit: The reason for the question and the change is that the data access layer is non-existent and was built 10 years ago. We currently still run into a lot of issues with it. I don't want to divulge too much, but if you saw it, your eyes would start bleeding :)

    Read the article
