Search Results

Search found 16050 results on 642 pages for 'linq to objects'.


  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client I first serialize the object in a memory stream, then compress it using the Gzip stream of .NET. I then send the compressed object as a byte[] to the client. The problem is on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. Ive read that this exception can be caused by new() a bunch of objects, or holding on to a bunch of strings. Both of these are happening during the deserialization process. So my question is: How do I prevent the exception (any good strategies)? The client needs all of the data, and ive trimmed down the number of strings as much as i can. edit: here is the code i am using to serialize/compress (implemented as extension methods) public static byte[] SerializeObject<T>(this object obj, T serializer) where T: XmlObjectSerializer { Type t = obj.GetType(); if (!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes; using (MemoryStream stream = new MemoryStream()) { serializer.WriteObject(stream, obj); initialBytes = stream.ToArray(); } return initialBytes; } public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer { Type t = obj.GetType(); if(!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes = obj.SerializeObject(serializer); byte[] compressedBytes; using (MemoryStream stream = new MemoryStream(initialBytes)) { using (MemoryStream output = new MemoryStream()) { using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress)) { Pump(stream, zipper); } compressedBytes = output.ToArray(); } } return compressedBytes; } internal static void Pump(Stream input, Stream output) { byte[] bytes = new byte[4096]; int n; while ((n = input.Read(bytes, 0, bytes.Length)) != 0) { output.Write(bytes, 0, n); } } And here is my code for decompress/deserialize: public static T DeSerializeObject<T,TU>(this byte[] serializedObject, TU deserializer) where TU: XmlObjectSerializer { using (MemoryStream stream = new MemoryStream(serializedObject)) { return (T)deserializer.ReadObject(stream); } } public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU: XmlObjectSerializer { byte[] decompressedBytes; using(MemoryStream stream = new MemoryStream(compressedBytes)) { using(MemoryStream output = new MemoryStream()) { using(GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress)) { ObjectExtensions.Pump(zipper, output); } decompressedBytes = output.ToArray(); } } return decompressedBytes.DeSerializeObject<T, TU>(deserializer); } The object that I am passing is a wrapper object, it just contains all the relevant objects that hold the data. The number of objects can be a lot (depending on the reports date range), but ive seen as many as 25k strings. One thing i did forget to mention is I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.
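
    One mitigation worth trying (an assumption on my part, not something from the original post) is to stop materializing the intermediate byte arrays and instead deserialize straight from a decompression stream wrapped around the incoming stream, so the decompressed payload never has to exist as one contiguous byte[]. A minimal sketch using the same XmlObjectSerializer-based approach:

        using System.IO;
        using System.IO.Compression;
        using System.Runtime.Serialization;

        public static class StreamingExtensions
        {
            // Deserialize directly from the compressed stream; ReadObject pulls
            // bytes incrementally, so no full decompressed buffer is allocated.
            public static T DecompressObject<T>(this Stream compressedInput, XmlObjectSerializer deserializer)
            {
                using (var zipper = new GZipStream(compressedInput, CompressionMode.Decompress))
                {
                    return (T)deserializer.ReadObject(zipper);
                }
            }
        }

    The same idea applies in reverse on the server: WriteObject can write through a GZipStream into the outgoing stream, so neither side needs the whole serialized object in memory at once. Whether this removes the OutOfMemoryException depends on how the WCF transport is configured (streamed vs. buffered), which the post doesn't show.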

    Read the article

  • Justification for learning/implementing newer Microsoft technologies

    - by Darren
    I work at a large healthcare organization as a mid-level software developer. I have over 10 years experience in the IT industry using Microsoft technologies (ASP.NET & SQL Server). When I go to conferences, code camps, .net user group meetings, I hear of all kinds of new tools and technologies: MVC, LINQ, Entity Framework, WCF Web Services, etc. I guess you could say I'm in my comfort zone using the same old stuff from asp.net 2.0. I use typed datasets for my data access layer. I use web forms and feature rich server controls with master pages. I know how to use plain old SQL and create queries in my typed datasets to get at data my applications need. Throughout my career, I'm always sensitive to not become obsolete with my skill set. What I currently use works fine and my development time is fast. But I'm concerned that if I were to be laid off, I would be asked in interviews how many MVC apps I've written. Or how I am with LINQ or WCF web services. I know that it doesn't matter how many conferences, books, or videos I watch on some new technology...I have to implement/use it or it simply won't sink in. Also, managers who interview don't care how much someone reads up on something, only real use and experience with a technology. I have a new project to write. I've gone to my manager and have asked for additional time for the project for learning/implementing technology I may not be familiar with. Our organization encourages its employees to "learn and grow" and to continue are education. But I always get resistance when I ask for more time to ramp up on something new to implement. My manager is asking for concrete business reasons for implementing these new technologies. I don't have business reasons. My reasons are because I don't want to become obsolete. I could say it would make the project more maintainable in the future by other developers since at some point people could stop using these older technologies, but that' about all I can think of. Does Linq/Entity Framework/MCV apps perform better? So much so that the customers (users in departments I'm creating this app for) need? I doubt it. I'm interested in you guy's thoughts on this. Do many of you have similar plights with trying to use newer upcoming technologies? I doubt I'm on the bleeding edge of technology, either. Are there "business reasons" that you would bring to light for using these technologies? Thanks in advance! Sorry for the long wall of text.

    Read the article

  • How to do a fetch request with expressions like this on the iPhone?

    - by dontWatchMyProfile
    The documentation has an example on how to retrieve simple values only, rather than managed objects. This remembers a lot SQL using aliases and functions to only retrieve calculated values. So, actually pretty geeky stuff. To get the minimum date from a bunch of records, this is used on the mac: NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:context]; [request setEntity:entity]; // Specify that the request should return dictionaries. [request setResultType:NSDictionaryResultType]; // Create an expression for the key path. NSExpression *keyPathExpression = [NSExpression expressionForKeyPath:@"creationDate"]; // Create an expression to represent the minimum value at the key path 'creationDate' NSExpression *minExpression = [NSExpression expressionForFunction:@"min:" arguments:[NSArray arrayWithObject:keyPathExpression]]; // Create an expression description using the minExpression and returning a date. NSExpressionDescription *expressionDescription = [[NSExpressionDescription alloc] init]; // The name is the key that will be used in the dictionary for the return value. [expressionDescription setName:@"minDate"]; [expressionDescription setExpression:minExpression]; [expressionDescription setExpressionResultType:NSDateAttributeType]; // Set the request's properties to fetch just the property represented by the expressions. [request setPropertiesToFetch:[NSArray arrayWithObject:expressionDescription]]; // Execute the fetch. NSError *error; NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error]; if (objects == nil) { // Handle the error. } else { if ([objects count] > 0) { NSLog(@"Minimum date: %@", [[objects objectAtIndex:0] valueForKey:@"minDate"]; } } [expressionDescription release]; [request release]; Nice, I though - but having a deep look into NSExpression -expressionForFunction:arguments: it turns out that iPhone OS does NOT support the min: function. Well, probably there's a nifty way to use an own function for this kind of stuff on the iPhone as well? Because on thing I'm already worrying about is, how I'm gonna sort a table based on the calculated distance of targets on a map (location-based stuff).

    Read the article

  • CodePlex Daily Summary for Saturday, October 19, 2013

    CodePlex Daily Summary for Saturday, October 19, 2013Popular ReleasesMedia Companion: Media Companion MC3.583b: As before release but fixed for no movie poster sourcesNew* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully...MoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...patterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyDotNetNuke® Wiki: 05.00.00: Changes made to better support upgrades and the removal of deprecated legacy files that were causing formatting issues. Updated the Version number to better indicate the significance of the C# migration and the new DNN 7.0.2 minimum requirement.TerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. 
QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...C# Intellisense for Notepad++: Release v.1.0.8.2: Solved scrolling problem after DocumentFormatting Implemented "format as you type" --- To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting Implemented "format as you type" --- To avoid the DLLs getting locked by OS use MSI file for the installation.Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: - MSI Setup - UI Improved - CM12 Console integration - New Powershell code snippets - Client Center IntegrationLINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files Full support for Windows XP New sample highlighting the Client and Server APIs : BlackJack Expose underlying native handle to set custom options on http_client Improvements to Listener Library Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source CodeAD ACL Scanner: 1.3.2: Minor bug fixed: Powershell 4.0 will report: Select—Object: Parameter cannot be processed because the parameter name p is ambiguous.Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: Need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js awe.autoSize = false ) - added Parent, Paremeter extensions ...New Projectsag2: ag2ApplicationMobile: ApplicationMobileBing Maps/Geolocalization Windows Store Template: Template de aplicativo para Windows 8 utilizando APIs do Bing Maps e Geolocalização. 
C3D.NET: C3D.NET is a free class library for manipulating C3D file. This project supports .NET 2.0 and .NET 4.0, and provides a free Data Viewer based on this library.DrinksMachineCMC: DrinksMachineCMCFerdeenFatalSpeech: Speech recognition for PC control.FuzzyLogic: FuzzyLogic Demohousekeeping: ?????isolutions Techdays 2013: Zeigt die Möglichkeit des Zugriffes auf ein CRM 2013 Online mit einer Windows Store App und aus einem Word Plugin.Learn .NET Gadgeteer: .NET Gadgeteer example projects using the FEZ Cerberus kit from GHILIT.Logger.ServerChecker: Small application for server monitoring, which demonstrates capabilities of LIT.Logger (www.litlogger.com)LuaTools for Visual Studio: luatools lua visual studioMTM Test Plan Viewer: A UI tool that lets you easily view, explore and export MTM test results.NetworkScan: Network scan windowsObservable Linq: Observable Linq and Observable Expressions is about to change the paradigm how to use expressions and Linq, providing update events whenever the results change.PatientManagement: beep beeppescar-shop-El-enchufe-khristo-grecia-giselle: falta la aplicacion de LINQ...ProjetoEscola: Projeto Integrado Senac Proyecto Librería PRISA: Este sistema servirá para controlar las compras, ventas y almacén de librería "PRISA". Se desarrollará en Visual Studio 2012 con patrón MVC y en base a capas.TDataBase: Simple db query libraryzongtest: test

    Read the article

  • G++, compiler warnings, C++ templates

    - by Ian
    During the compilatiion of the C++ program those warnings appeared: c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bc:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2317: instantiated from `void std::partial_sort(_RandomAccessIterator, _RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Compare = sortObjects<double>]' c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2506: instantiated from `void std::__introsort_loop(_RandomAccessIterator, _RandomAccessIterator, _Size, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Size = int, _Compare = sortObjects<double>]' c:/MinGW/bin/../lib/gcc/mingw32/3.4.5/../../../../include/c++/3.4.5/bits/stl_algo.h:2589: instantiated from `void std::sort(_RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<Object<double>**, std::vector<Object<double>*, std::allocator<Object<double>*> > >, _Compare = sortObjects<double>]' io/../structures/objects/../../algorithm/analysis/../../structures/list/ObjectsList.hpp:141: instantiated from `void ObjectsList <T>::sortObjects(unsigned int, T, T, T, T, unsigned int) [with T = double]' I do not why, because all objects have only template parameter T, their local variables are also T. The only place, where I am using double is main. There are objects of type double creating and adding into the ObjectsList... Object <double> o1; ObjectsList <double> olist; olist.push_back(o1); .... T xmin = ..., ymin = ..., xmax = ..., ymax = ...; unsigned int n = ...; olist.sortAllObjects(xmin, ymin, xmax, ymax, n); and comparator template <class T> class sortObjects { private: unsigned int n; T xmin, ymin, xmax, ymax; public: sortObjects ( const T xmin_, const T ymin_, const T xmax_, const T ymax_, const int n_ ) : xmin ( xmin_ ), ymin ( ymin_ ), xmax ( xmax_ ), ymax ( ymax_ ), n ( n_ ) {} bool operator() ( const Object <T> *o1, const Object <T> *o2 ) const { T dmax = (std::max) ( xmax - xmin, ymax - ymin ); T x_max = ( xmax - xmin ) / dmax; T y_max = ( ymax - ymin ) / dmax; ... return ....; } representing ObjectsList method: template <class T> void ObjectsList <T> ::sortAllObjects ( const T xmin, const T ymin, const T xmax, const T ymax, const unsigned int n ) { std::sort ( objects.begin(), objects.end(), sortObjects <T> ( xmin, ymin, xmax, ymax, n ) ); }

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding RedGate's decision to no longer offer a free version.  Since then, the race has begun for a replacement with a provider that would stand by their promises to the community.  Mono has an ongoing free alternative, which has been available for a long time.  However, other vendors are stepping up to the plate, with their own offerings. If Not Reflector, Then What? One of these vendors is Telerik.  In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector.  Not only does Telerik offer a usable replacement, but they've (in my opinion), produced a product that integrates more naturally with visual Studio than any other product ever has.  Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed.  If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site . A Tall Tale; Prove It! With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated).  This demo depends on LINQ to Twitter, which you can download from CodePlex.com and create a reference or install the package online as described in my previous post on NuGet.  Regardless of the method, you'll have a project with a reference to LINQ to Twitter.  Use a Console Project if you want to follow along with this demo. Note:  If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4.  The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter.  You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting. Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter: var l2tCtx = new TwitterContext(); If you're following along add the code above to the Main method, which will look similar to this: using LinqToTwitter; namespace NuGetInstall { class Program { static void Main(string[] args) { var l2tCtx = new TwitterContext(); } } } The code above doesn't really do anything, but it does give something that I can show and demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key.  As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class.  Here's the result: Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code.  The scenario with TwitterContext happens because you don't have the source code to the file.  Visual Studio has always done this and you can experiment by selecting any .NET type, i.e. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works and you'll see how this can make your job easier. In the previous figure, you only saw prototypes associated with the code. i.e. Notice that the default constructor is empty.  Again, this is normal because Visual Studio doesn't have the ability to decompile code.  
However, that's the purpose of this post; showing you how JustCode fills that gap. To decompile code, right click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu.  The shortcut keys are Ctrl+1.  After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. Notice below how the default constructor now has code as opposed to the empty member prototype in the original metadata: And Why is This So Different? Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works.  The navigate to functionality already exists and you can use that, along with a simple context menu option (or shortcut key) to transform prototypes into decompiled code. Telerik is filling the the Reflector/Red Gate gap by providing a supported alternative to decompiling code.  Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation.  Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik.  Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • Wondering about a way to conserve memory in C# using List<> with structs

    - by Michael Ryan
    I'm not even sure how I should phrase this question. I'm passing some CustomStruct objects as parameters to a class method, and storing them in a List. What I'm wondering is if it's possible and more efficient to add multiple references to a particular instance of a CustomStruct if a equivalent instance it found. This is a dummy/example struct: public struct CustomStruct { readonly int _x; readonly int _y; readonly int _w; readonly int _h; readonly Enum _e; } Using the below method, you can pass one, two, or three CustomStruct objects as parameters. In the final method (that takes three parameters), it may be the case that the 3rd and possibly the 2nd will have the same value as the first. List<CustomStruct> _list; public void AddBackground(CustomStruct normal) { AddBackground(normal, normal, normal); } public void AddBackground(CustomStruct normal, CustomStruct hover) { AddBackground(normal, hover, hover); } public void AddBackground(CustomStruct normal, CustomStruct hover, CustomStruct active) { _list = new List<CustomStruct>(3); _list.Add(normal); _list.Add(hover); _list.Add(active); } As the method stands now, I believe it will create new instances of CustomStruct objects, and then adds a reference of each to the List. It is my understanding that if I instead check for equality between normal and hover and (if equal) insert normal again in place of hover, when the method completes, hover will lose all references and eventually be garbage collected, whereas normal will have two references in the List. The same could be done for active. That would be better, right? The CustomStruct is a ValueType, and therefore one instance would remain on the Stack, and the three List references would just point to it. The overall List size is determined not by the object Type is contains, but by its Capacity. By eliminating the "duplicate" CustomStuct objects, you allow them to be cleaned up. When the CustomStruct objects are passed to these methods, new instances are created each time. When the structs are added to the List, is another copy made? For example, if i pass just one CustomStruct, AddBackground(normal) creates a copy of the original variable, and then passes it three times to Addbackground(normal, hover, active). In this method, three copies are made of the original copy. When the three local variables are added to the List using Add(), are additional copies created inside Add(), and does that defeat the purpose of any equality checks as previously mentioned? Am I missing anything here?
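
    For what it's worth, the "add a reference to the same instance" idea doesn't apply to value types: List<T> stores structs by value, so every Add copies the whole struct into the list's backing array, and three adds of the same variable still occupy three full slots. A small sketch illustrating this (the public X field is hypothetical, added only so the copy behaviour is visible):

        using System;
        using System.Collections.Generic;

        public struct CustomStruct
        {
            public int X;
        }

        class Demo
        {
            static void Main()
            {
                var normal = new CustomStruct { X = 1 };

                var list = new List<CustomStruct>(3);
                list.Add(normal); // copy #1 into the list's internal array
                list.Add(normal); // copy #2 - a full copy, not a reference
                list.Add(normal); // copy #3

                normal.X = 99;                 // mutating the original...
                Console.WriteLine(list[0].X);  // ...prints 1: the list holds its own copies
            }
        }

    If sharing one instance really matters for memory, CustomStruct would have to become a class (a reference type); then the three list slots would each hold a reference to the same object, and the equality-check idea in the question would pay off.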

    Read the article

  • Should I ever put a major version number into a C#/Java namespace?

    - by Andrew Patterson
    I am designing a set of 'service' layer objects (data objects and interface definitions) for a WCF web service (that will be consumed by third party clients i.e. not in-house, so outside my direct control). I know that I am not going to get the interface definition exactly right - and am wanting to prepare for the time when I know that I will have to introduce a breaking set of new data objects. However, the reality of the world I am in is that I will also need to run my first version simultaneously for quite a while. The first version of my service will have URL of http://host/app/v1service.svc and when the times comes by new version will live at http://host/app/v2service.svc However, when it comes to the data objects and interfaces, I am toying with putting the 'major' version of the interface number into the actual namespace of the classes. namespace Company.Product.V1 { [DataContract(Namespace = "company-product-v1")] public class Widget { [DataMember] string widgetName; } public interface IFunction { Widget GetWidgetData(int code); } } When the time comes for a fundamental change to the service, I will introduce some classes like namespace Company.Product.V2 { [DataContract(Namespace = "company-product-v2")] public class Widget { [DataMember] int widgetCode; [DataMember] int widgetExpiry; } public interface IFunction { Widget GetWidgetData(int code); } } The advantages as I see it are that I will be able to have a single set of code serving both interface versions, sharing functionality where possible. This is because I will be able to reference both interface versions as a distinct set of C# objects. Similarly, clients may use both interface versions simultaneously, perhaps using V1.Widget in some legacy code whilst new bits move on to V2.Widget. Can anyone tell why this is a stupid idea? I have a nagging feeling that this is a bit smelly.. notes: I am obviously not proposing every single new version of the service would be in a new namespace. Presumably I will do as many non-breaking interface changes as possible, but I know that I will hit a point where all the data modelling will probably need a significant rewrite. I understand assembly versioning etc but I think this question is tangential to that type of versioning. But I could be wrong.
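
    To make the "single code base serving both interface versions" point concrete, one service class can implement both versioned contracts and map shared internal logic onto each Widget shape. This is only a sketch of that idea; the ServiceContract/OperationContract attributes and field names are illustrative, not taken from the question:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        namespace Company.Product.V1
        {
            [DataContract(Namespace = "company-product-v1")]
            public class Widget { [DataMember] public string WidgetName; }

            [ServiceContract(Namespace = "company-product-v1")]
            public interface IFunction
            {
                [OperationContract]
                Widget GetWidgetData(int code);
            }
        }

        namespace Company.Product.V2
        {
            [DataContract(Namespace = "company-product-v2")]
            public class Widget { [DataMember] public int WidgetCode; [DataMember] public int WidgetExpiry; }

            [ServiceContract(Namespace = "company-product-v2")]
            public interface IFunction
            {
                [OperationContract]
                Widget GetWidgetData(int code);
            }
        }

        namespace Company.Product
        {
            // One implementation serves both contracts; each explicit interface
            // method maps the shared internal data to its version's Widget shape.
            public class FunctionService : V1.IFunction, V2.IFunction
            {
                V1.Widget V1.IFunction.GetWidgetData(int code)
                {
                    return new V1.Widget { WidgetName = "widget " + code };
                }

                V2.Widget V2.IFunction.GetWidgetData(int code)
                {
                    return new V2.Widget { WidgetCode = code, WidgetExpiry = 30 };
                }
            }
        }

    Each contract would then be exposed on its own endpoint (the v1service.svc / v2service.svc split described above), while the mapping code behind them stays in one place.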

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    Was reading through some articles on the advantages of creating Generic Repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once: IRepository repo = new EfRepository(); // Would normally pass through IOC into constructor var c1 = new Country() { Name = "United States", CountryCode = "US" }; var c2 = new Country() { Name = "Canada", CountryCode = "CA" }; var c3 = new Country() { Name = "Mexico", CountryCode = "MX" }; var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" }; var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" }; var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" }; repo.Add<Country>(c1); repo.Add<Country>(c2); repo.Add<Country>(c3); repo.Add<Province>(p1); repo.Add<Province>(p2); repo.Add<Province>(p3); repo.Save(); However, the rest of the implementation of the Repository has a heavy reliance on Linq: IQueryable<T> Query(); IList<T> Find(Expression<Func<T,bool>> predicate); T Get(Expression<Func<T,bool>> predicate); T First(Expression<Func<T,bool>> predicate); //... and so on This repository pattern worked fantastic for Entity Framework, and pretty much offered a 1 to 1 mapping of the methods available on DbContext/DbSet. But given the slow uptake of Linq on other data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext? I attempted to write a PetaPoco version of the Repository, but PetaPoco doesn't support Linq Expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate. Is the Generic Repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework? Edit: Original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).
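
    For the "base class with only the basic methods" fallback mentioned above, a rough sketch of what a minimal, expression-free generic repository backed by Entity Framework's DbContext might look like (the interface shape is illustrative, not taken from the linked article):

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        public interface IRepository<T> where T : class
        {
            IEnumerable<T> GetAll();
            T GetById(params object[] keys);
            void Add(T entity);
            void Delete(T entity);
            void Save();
        }

        public class EfRepository<T> : IRepository<T> where T : class
        {
            private readonly DbContext _context;

            public EfRepository(DbContext context) { _context = context; }

            public IEnumerable<T> GetAll() { return _context.Set<T>().ToList(); }

            // Find looks entities up by primary key, so no Expression<Func<T,bool>>
            // predicates are needed - the part PetaPoco-style providers can't translate.
            public T GetById(params object[] keys) { return _context.Set<T>().Find(keys); }

            public void Add(T entity) { _context.Set<T>().Add(entity); }

            public void Delete(T entity) { _context.Set<T>().Remove(entity); }

            public void Save() { _context.SaveChanges(); }
        }

    Anything that needs a "where" clause then lives in a specific repository (or directly against the DbContext), which is essentially the trade-off the question is weighing.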

    Read the article

  • Q&A: Will my favourite ORM Foo work with SQL Azure?

    - by Eric Nelson
    short answer: Quite probably, as SQL Azure is very similar to SQL Server longer answer: Object Relational Mappers (ORMs) that work with SQL Server are likely but not guaranteed to work with SQL Azure. The differences between the RDBMS versions are small – but may cause problems, for example in tools used to create the mapping between objects and tables or in generated SQL from the ORM which expects “certain things” :-) More specifically: ADO.NET Entity Framework / LINQ to Entities can be used with SQL Azure, but the Visual Studio designer does not currently work. You will need to point the designer at a version of your database running of SQL Server to create the mapping, then change the connection details to run against SQL Azure. LINQ to SQL has similar issues to ADO.NET Entity Framework above NHibernate can be used against SQL Azure DevExpress XPO supports SQL Azure from version 9.3 DataObjects.Net supports SQL Azure Open Access from Telerik works “seamlessly”  - their words not mine :-) The list above is by no means comprehensive – please leave a comment with details of other ORMs that work (or do not work) with SQL Azure. Related Links: General guidelines and limitations of SQL Azure SQL Azure vs SQL Server

    Read the article

  • How can I modify my classes to use their collections in a WPF TreeView?

    - by Victor
    Hello, i'am trying to modify my objects to make hierarchical collection model. I need help. My objects are Good and GoodCategory: public class Good { int _ID; int _GoodCategory; string _GoodtName; public int ID { get { return _ID; } } public int GoodCategory { get { return _GoodCategory; } set { _GoodCategory = value; } } public string GoodName { get { return _GoodName; } set { _GoodName = value; } } public Good(IDataRecord record) { _ID = (int)record["ID"]; _GoodtCategory = (int)record["GoodCategory"]; } } public class GoodCategory { int _ID; string _CategoryName; public int ID { get { return _ID; } } public string CategoryName { get { return _CategoryName; } set { _CategoryName = value; } } public GoodCategory(IDataRecord record) { _ID = (int)record["ID"]; _CategoryName = (string)record["CategoryName"]; } } And I have two Collections of these objects: public class GoodsList : ObservableCollection<Good> { public GoodsList() { string goodQuery = @"SELECT `ID`, `ProductCategory`, `ProductName`, `ProductFullName` FROM `products`;"; using (MySqlConnection conn = ConnectToDatabase.OpenDatabase()) { if (conn != null) { MySqlCommand cmd = conn.CreateCommand(); cmd.CommandText = productQuery; MySqlDataReader rdr = cmd.ExecuteReader(); while (rdr.Read()) { Add(new Good(rdr)); } } } } } public class GoodCategoryList : ObservableCollection<GoodCategory> { public GoodCategoryList () { string goodQuery = @"SELECT `ID`, `CategoryName` FROM `product_categoryes`;"; using (MySqlConnection conn = ConnectToDatabase.OpenDatabase()) { if (conn != null) { MySqlCommand cmd = conn.CreateCommand(); cmd.CommandText = productQuery; MySqlDataReader rdr = cmd.ExecuteReader(); while (rdr.Read()) { Add(new GoodCategory(rdr)); } } } } } So I have two collections which takes data from the database. But I want to use thats collections in the WPF TreeView with HierarchicalDataTemplate. I saw many post's with examples of Hierarlichal Objects, but I steel don't know how to make my objects hierarchicaly. Please help.
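
    One way to get a hierarchy out of the two flat collections is to pair each GoodCategory with the Goods whose GoodCategory value matches its ID, and let the HierarchicalDataTemplate bind its ItemsSource to that child collection. The CategoryNode type and Build method below are hypothetical names, just a sketch of the idea:

        using System.Collections.ObjectModel;
        using System.Linq;

        // Wraps a category together with its goods so the TreeView's
        // HierarchicalDataTemplate can bind ItemsSource="{Binding Goods}".
        public class CategoryNode
        {
            public GoodCategory Category { get; set; }
            public ObservableCollection<Good> Goods { get; set; }
        }

        public static class HierarchyBuilder
        {
            public static ObservableCollection<CategoryNode> Build(
                GoodCategoryList categories, GoodsList goods)
            {
                var nodes = categories.Select(c => new CategoryNode
                {
                    Category = c,
                    // Match children by foreign key: Good.GoodCategory == GoodCategory.ID
                    Goods = new ObservableCollection<Good>(
                        goods.Where(g => g.GoodCategory == c.ID))
                });

                return new ObservableCollection<CategoryNode>(nodes);
            }
        }

    The TreeView would then get the result of Build as its ItemsSource, with a HierarchicalDataTemplate for CategoryNode (ItemsSource bound to Goods, text bound to Category.CategoryName) and a plain DataTemplate for Good bound to GoodName.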

    Read the article

  • What architecture and settings do you use to create a customized application with a database? [closed]

    - by FullmetalBoy
    I have a homemade project that combines an application and a database. I'm currently using an N-tier architecture, with four projects in my VS2010 solution. The fourth project is a transaction layer that connects the other three (presentation, business logic, and database layers) so data retrieved from the database can be carried all the way up to the presentation layer. The transaction layer contains Entity Framework 4 plus customized classes that carry data to the presentation layer. I always use LINQ to retrieve data in the database layer. In my experience, every time I use LINQ with Entity Framework it takes a long time to retrieve data; I believe this is because Entity Framework has to reload everything each time I request data from the database. My questions are: when you create an application that connects to a database, what architecture do you use? How do you retrieve the data in your application? Is it Entity Framework, or something else?
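
    On the "EF has to reload every time" point, one thing worth checking (a guess, not a diagnosis of this particular project) is how much of the cost is LINQ-to-Entities translation repeated on every call. With Entity Framework 4's ObjectContext, CompiledQuery lets you pay that translation cost once; the MyEntities context and Customer entity below are placeholder names for whatever the model actually contains:

        using System;
        using System.Data.Objects;
        using System.Linq;

        public static class CustomerQueries
        {
            // Compiled once per AppDomain; later calls reuse the translated query
            // instead of re-translating the LINQ expression on every request.
            private static readonly Func<MyEntities, int, IQueryable<Customer>> ById =
                CompiledQuery.Compile<MyEntities, int, IQueryable<Customer>>(
                    (ctx, id) => ctx.Customers.Where(c => c.Id == id));

            public static Customer GetCustomer(int id)
            {
                using (var ctx = new MyEntities()) // one context per unit of work
                {
                    return ById(ctx, id).FirstOrDefault();
                }
            }
        }

    Keeping the context scoped to a unit of work (rather than creating one per query) and profiling the generated SQL are the usual next steps before blaming the architecture itself.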

    Read the article

  • How to generate SPMetal for a specific list (OOTB: like tasks or contacts) with custom columns

    - by KunaalKapoor
    SPMetal is used to make use of LINQ on a list in SharePoint 2010. By default when you generate SPMetal on a site you will get a code generated file for most of the lists and probably more. Here is a MSDN link for some info on SPMetal.http://msdn.microsoft.com/en-us/library/ee538255(office.14).aspxBut what if you want only to generate the code for one list?Well it is quite simple once you figure it out. You need to add an xml file to override the default settings of SPMetal and specify it in the /parameters option. I will show you how to do this.First create a Folder that will contain two files (GenerateSPMetalCode.bat and SPMetal.xml).Below is the content of the files:GenerateSPMetalCode.bat "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml pause SPMetal.xml <?xml version="1.0" encoding="utf-8"?> <Web AccessModifier="Internal" xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal"> <List Name="ListName"> <ContentType Name="ContentTypeName" Class="GeneratedClassName" /> </List> <ExcludeOtherLists></ExcludeOtherLists> </Web> You will have to change some of the text in the files so that it will be specific to your SharePoint Server Setup. In the bat file you will have to change http://YourServer to the url of the web where your list is. In the SPMetal.xml file you need to change ListName to the name of your list and the ContentTypeName to the name of the content type you want to extract. The GeneratedClassName can be anything but perhaps you should rename it to something more sensible.Adding the following line: '<List Name="ListName"><ContentType Name="ContentTypeName" Class="GeneratedClassName" /> </List>'  makes sure that any custom columns added to an OOTB list like contacts or tasks are also generated, which are missed out in a regular generation.So now when you run it the SPMetal command will read the SPMetal.xml list and override its commands. ExcludeOtherLists element makes it so that only the code for the lists you specify will be generated. For some reason I got an error if I had this element above the List element.You sould now have a code file called OutPutFileName.cs that has been generated. You can now put this in your SharePoint project for use with your LINQ queries against that list.I will soon write a LINQ example that uses the generated class. UPDATE: Add the /namespace parameter to add a namespace to the generated code. "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /namespace:MySPMetalNameSpace /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml
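
    Since the post promises a LINQ example against the generated class, here is a rough sketch of what that typically looks like with LINQ to SharePoint. The context class name, the Tasks property, and the Status column are hypothetical - they depend on what SPMetal actually generated for your list:

        using System;
        using System.Linq;
        using Microsoft.SharePoint.Linq;
        using MySPMetalNameSpace; // the namespace passed via the /namespace parameter

        class Program
        {
            static void Main()
            {
                // The generated context class is typically named after the /code output
                // file (e.g. OutPutFileNameDataContext); adjust to match your generation.
                using (var dc = new OutPutFileNameDataContext("http://YourServer"))
                {
                    var openTasks = from t in dc.Tasks             // EntityList<T> for the list
                                    where t.Status != "Completed"  // hypothetical custom column
                                    select t;

                    foreach (var task in openTasks)
                    {
                        Console.WriteLine(task.Title);
                    }
                }
            }
        }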

    Read the article

  • Visual Studio 2010 RC and Entity Framework 4 RC Support in the New Version of ADO.NET Data Providers

    Devart has recently announced the release of dotConnect products for Oracle, MySQL, PostgreSQL, and SQLite - ADO.NET providers that offer Entity Framework support, LINQ to SQL support, and contain an ORM model designer for developing LINQ to SQL and EF models based on different database engines. New dotConnect ADO.NET providers offer complete support for Visual Studio 2010 Release Candidate and Entity Framework 4 Release Candidate. Entity Developer 2.80, a designer for modeling and code generation...

    Read the article

  • EXC_MEMORY_ACCESS when trying to delete from Core Data ($cash solution)

    - by llloydxmas
    I have an application that downloads an xml file, parses the file, and creates core data objects while doing so. In the parse code I have a function called 'emptydatacontext' that removes all items from Core Data before creating replacements items from the xml data. This method looks like this: -(void) emptyDataContext { NSFetchRequest * allCon = [[NSFetchRequest alloc] init]; [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]]; NSError * error = nil; NSArray * conditions = [managedObjectContext executeFetchRequest:allCon error:&error]; DebugLog(@"ERROR: %@",error); DebugLog(@"RETRIEVED: %@", conditions); [allCon release]; for (NSManagedObject * condition in conditions) { [managedObjectContext deleteObject:condition]; } // Update the data model effectivly removing the objects we removed above. //NSError *error; if (![managedObjectContext save:&error]) { DebugLog(@"%@", [error domain]); } } The first time this runs it deletes all objects and functions as it should - creating new objects from the xml file. I created a 'update' button that starts the exact same process of retrieving the file the proceeding with the parse & build. All is well until its time to delete the core data objects. This 'deleteObject' call creates a "EXC_BAD_ACCESS" error each time. This only happens on the second time through. Captured errors return null. If I log the 'conditions' array I get a list of NSManagedObjects on the first run. On the second this log request causes a crash exactly as the deleteObject call does. I have a feeling it is something very simple I'm missing or not doing correctly to cause this behavior. The data works great on my tableviews - its only when trying to update I get the crashes. I have spent days & days on this trying numerous alternative methods. Whats left of my hair is falling out. I'd be willing to ante up some cash for anyone willing to look at my code and see what I'm doing wrong. Just need to get past this hurdle. Thanks in advance for the help!

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails is sexy, there are some serious problems with it. Grails' controllers: 1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause grails to blow up. 2) are based on an obsolete request parameters model (rather than form backing objects, which are much nicer). 3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code. 4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies than basic parameter model. 5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails? 6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code in two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects. Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write and aren't guaranteed to work when it's deployed, that I think it discourages fast, tdd test cycles. Most things seem to screw up grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the Controller layer. Suggestions on what I can do?

    Read the article

  • nodejs async.waterfall method

    - by user1513388
    Update 2 Complete code listing var request = require('request'); var cache = require('memory-cache'); var async = require('async'); var server = '172.16.221.190' var user = 'admin' var password ='Passw0rd' var dn ='\\VE\\Policy\\Objects' var jsonpayload = {"Username": user, "Password": password} async.waterfall([ //Get the API Key function(callback){ request.post({uri: 'http://' + server +'/sdk/authorize/', json: jsonpayload, headers: {'content_type': 'application/json'} }, function (e, r, body) { callback(null, body.APIKey); }) }, //List the credential objects function(apikey, callback){ var jsonpayload2 = {"ObjectDN": dn, "Recursive": true} request.post({uri: 'http://' + server +'/sdk/Config/enumerate?apikey=' + apikey, json: jsonpayload2, headers: {'content_type': 'application/json'} }, function (e, r, body) { var dns = []; for (var i = 0; i < body.Objects.length; i++) { dns.push({'name': body.Objects[i].Name, 'dn': body.Objects[i].DN}) } callback(null, dns, apikey); }) }, function(dns, apikey, callback){ // console.log(dns) var cb = []; for (var i = 0; i < dns.length; i++) { //Retrieve the credential var jsonpayload3 = {"CredentialPath": dns[i].dn, "Pattern": null, "Recursive": false} console.log(dns[i].dn) request.post({uri: 'http://' + server +'/sdk/credentials/retrieve?apikey=' + apikey, json: jsonpayload3, headers: {'content_type': 'application/json'} }, function (e, r, body) { // console.log(body) cb.push({'cl': body.Classname}) callback(null, cb, apikey); console.log(cb) }); } } ], function (err, result) { // console.log(result) // result now equals 'done' }); Update: I'm building a small application that needs to make multiple HTTP calls to a an external API and amalgamates the results into a single object or array. e.g. Connect to endpoint and get auth key - pass auth key to step 2 Connect to endpoint using auth key and get JSON results - create an object containing summary results and pass to step 3. Iterate over passed object summary results and call API for each item in the object to get detailed information for each summary line Create a single JSON data structure that contains the summary and detail information. The original question below outlines what I've tried so far! Original Question: Will the async.waterfall method support multiple callbacks? i.e. Iterate over an array thats passed from a previous item in the chain, then invoke multiple http requests each of which would have their own callbacks. e.g, sync.waterfall([ function(dns, key, callback){ var cb = []; for (var i = 0; i < dns.length; i++) { //Retrieve the credential var jsonpayload3 = {"Cred": dns[i].DN, "Pattern": null, "Recursive": false} console.log(dns[i].DN) request.post({uri: 'http://' + vedserver +'/api/cred/retrieve?apikey=' + key, json: jsonpayload3, headers: {'content_type': 'application/json'} }, function (e, r, body) { console.log(body) cb.push({'cl': body.Classname}) callback(null, cb, key); }); } }

    Read the article

  • Is there any loose coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals+slots?

    - by Eye of Hell
    Hello. For a large programs, the standard way to chalenge a complexity is to divide a program code into small objects. Most of the actual programming languages offer this functionality via classes, so is Objective-C. But after source code is separated into small object, the second challenge is to somehow connect them with each over. Standard approaches, supported by most languages are compositon (one object is a member field of another), inheritance, templates (generics) and callbacks. More cryptic techniques include method-level delagates (C#) and signals+slots (C++Qt). I like the delegates / signals idea, since while connecting two objects i can connect individual methods with each over, without objects knowing anything of each over. For C#, it will look like this: var object1 = new CObject1(); var object2 = new CObject2(); object1.SomethingHappened += object2.HandleSomething; In this code, is object1 calls it's SomethingHappened delegate (like a normal method call) the HandleSomething method of object2 will be called. For C++Qt, it will look like this: var object1 = new CObject1(); var object2 = new CObject2(); connect( object1, SIGNAL(SomethingHappened()), object2, SLOT(HandleSomething()) ); The result will be exactly the same. This technique has some advantages and disadvantages, but generally i like it more than interfaces since if program code base grows i can change connections and add new ones without creating tons of interfaces. After examination of Objective-C i havn't found any way to use this technique i like :(. It seems that Objective-C supports message passing perfectly well, but it requres for object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C i will be forced to give him pointers to each of the objects it must be connected. So, the question :). Is it any approach in Objective-C programming that will closely resemble delegate / signal+slot types of connection, not a 'give first object an entire pointer to second object so it can pass a message to it'. Method-level connections are a bit more preferable to me than object-level connection ^_^.

    Read the article

  • Java Flow Control Problem

    - by Kyle_Solo
    I am programming a simple 2d game engine. I've decided how I'd like the engine to function: it will be composed of objects containing "events" that my main game loop will trigger when appropriate. A little more about the structure: Every GameObject has an updateEvent method. objectList is a list of all the objects that will receive update events. Only objects on this list have their updateEvent method called by the game loop. I’m trying to implement this method in the GameObject class (This specification is what I’d like the method to achieve): /** * This method removes a GameObject from objectList. The GameObject * should immediately stop executing code, that is, absolutely no more * code inside update events will be executed for the removed game object. * If necessary, control should transfer to the game loop. * @param go The GameObject to be removed */ public void remove(GameObject go) So if an object tries to remove itself inside of an update event, control should transfer back to the game engine: public void updateEvent() { //object's update event remove(this); System.out.println("Should never reach here!"); } Here’s what I have so far. It works, but the more I read about using exceptions for flow control the less I like it, so I want to see if there are alternatives. Remove Method public void remove(GameObject go) { //add to removedList //flag as removed //throw an exception if removing self from inside an updateEvent } Game Loop for(GameObject go : objectList) { try { if (!go.removed) { go.updateEvent(); } else { //object is scheduled to be removed, do nothing } } catch(ObjectRemovedException e) { //control has been transferred back to the game loop //no need to do anything here } } // now remove the objects that are in removedList from objectList 2 questions: Am I correct in assuming that the only way to implement the stop-right-away part of the remove method as described above is by throwing a custom exception and catching it in the game loop? (I know, using exceptions for flow control is like goto, which is bad. I just can’t think of another way to do what I want!) For the removal from the list itself, it is possible for one object to remove one that is farther down on the list. Currently I’m checking a removed flag before executing any code, and at the end of each pass removing the objects to avoid concurrent modification. Is there a better, preferably instant/non-polling way to do this?

    Read the article

  • Hibernate/Spring: getHibernateTemplate().save(...) Freezes/Hangs

    - by ashes999
    I'm using Hibernate and Spring with the DAO pattern (all Hibernate dependencies in a *DAO.java class). I have nine unit tests (JUnit) which create some business objects, save them, and perform operations on them; the objects are in a hash (so I'm reusing the same objects all the time). My JUnit setup method calls my DAO.deleteAllObjects() method which calls getSession().createSQLQuery("DELETE FROM <tablename>").executeUpdate() for my business object table (just one). One of my unit tests (#8/9) freezes. I presumed it was a database deadlock, because the Hibernate log file shows my delete statement last. However, debugging showed that it's simply HibernateTemplate.save(someObject) that's freezing. (Eclipse shows that it's freezing on HibernateTemplate.save(Object), line 694.) Also interesting to note is that running this test by itself (not in the suite of 9 tests) doesn't cause any problems. How on earth do I troubleshoot and fix this? Also, I'm using @Entity annotations, if that matters. Edit: I removed reuse of my business objects (use unique objects in every method) -- didn't make a difference (still freezes). Edit: This started trickling into other tests, too (can't run more than one test class without getting something freezing) Transaction configuration: <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource" /> </bean> <tx:advice id="txAdvice" transaction-manager="txManager"> <!-- the transactional semantics... --> <tx:attributes> <!-- all methods starting with 'get' are read-only --> <tx:method name="get*" read-only="true" /> <tx:method name="find*" read-only="true" /> <!-- other methods use the default transaction settings (see below) --> <tx:method name="*" /> </tx:attributes> </tx:advice> <!-- my bean which is exhibiting the hanging behavior --> <aop:config> <aop:pointcut id="beanNameHere" expression="execution(* com.blah.blah.IMyDAO.*(..))" /> <aop:advisor advice-ref="txAdvice" pointcut-ref="beanNameHere" /> </aop:config>

    Read the article

  • Squid w/ SquidGuard fails w/ "Too few redirector processes are running"

    - by DKNUCKLES
    I'm trying to implement a Squid proxy in a quick and easy fashion and I'm receiving some errors I have been unable to resolve. The box is a pre-made appliance, however it seems to fail on launch.The following is the cache.log file when I attempt to launch the squid service. 2012/11/18 22:14:29| Starting Squid Cache version 3.0.STABLE20-20091201 for i686 -pc-linux-gnu... 2012/11/18 22:14:29| Process ID 12647 2012/11/18 22:14:29| With 1024 file descriptors available 2012/11/18 22:14:29| Performing DNS Tests... 2012/11/18 22:14:29| Successful DNS name lookup tests... 2012/11/18 22:14:29| DNS Socket created at 0.0.0.0, port 40513, FD 8 2012/11/18 22:14:29| Adding nameserver 192.168.0.78 from /etc/resolv.conf 2012/11/18 22:14:29| Adding nameserver 8.8.8.8 from /etc/resolv.conf 2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'bin' processes 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'squid-auth.pl' processes 2012/11/18 22:14:29| User-Agent logging is disabled. 2012/11/18 22:14:29| Referer logging is disabled. 2012/11/18 22:14:29| Unlinkd pipe opened on FD 23 2012/11/18 22:14:29| Swap maxSize 10240000 + 8192 KB, estimated 788322 objects 2012/11/18 22:14:29| Target number of buckets: 39416 2012/11/18 22:14:29| Using 65536 Store buckets 2012/11/18 22:14:29| Max Mem size: 8192 KB 2012/11/18 22:14:29| Max Swap size: 10240000 KB 2012/11/18 22:14:29| Version 1 of swap file with LFS support detected... 2012/11/18 22:14:29| Rebuilding storage in /opt/squid3/var/cache (DIRTY) 2012/11/18 22:14:29| Using Least Load store dir selection 2012/11/18 22:14:29| Set Current Directory to /opt/squid3/var/cache 2012/11/18 22:14:29| Loaded Icons. 2012/11/18 22:14:29| Accepting HTTP connections at 10.0.0.6, port 3128, FD 25. 2012/11/18 22:14:29| Accepting ICP messages at 0.0.0.0, port 3130, FD 26. 2012/11/18 22:14:29| HTCP Disabled. 2012/11/18 22:14:29| Ready to serve requests. 2012/11/18 22:14:29| Done reading /opt/squid3/var/cache swaplog (0 entries) 2012/11/18 22:14:29| Finished rebuilding storage from disk. 2012/11/18 22:14:29| 0 Entries scanned 2012/11/18 22:14:29| 0 Invalid entries. 2012/11/18 22:14:29| 0 With invalid flags. 2012/11/18 22:14:29| 0 Objects loaded. 2012/11/18 22:14:29| 0 Objects expired. 2012/11/18 22:14:29| 0 Objects cancelled. 2012/11/18 22:14:29| 0 Duplicate URLs purged. 2012/11/18 22:14:29| 0 Swapfile clashes avoided. 2012/11/18 22:14:29| Took 0.02 seconds ( 0.00 objects/sec). 2012/11/18 22:14:29| Beginning Validation Procedure 2012/11/18 22:14:29| WARNING: redirector #1 (FD 9) exited 2012/11/18 22:14:29| WARNING: redirector #2 (FD 10) exited 2012/11/18 22:14:29| WARNING: redirector #3 (FD 11) exited 2012/11/18 22:14:29| WARNING: redirector #4 (FD 12) exited 2012/11/18 22:14:29| Too few redirector processes are running FATAL: The redirector helpers are crashing too rapidly, need help! Squid Cache (Version 3.0.STABLE20-20091201): Terminated abnormally. 
CPU Usage: 0.112 seconds = 0.032 user + 0.080 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 Memory usage for squid via mallinfo(): total space in arena: 2944 KB Ordinary blocks: 2857 KB 6 blks Small blocks: 0 KB 0 blks Holding blocks: 1772 KB 8 blks Free Small blocks: 0 KB Free Ordinary blocks: 86 KB Total in use: 4629 KB 157% Total free: 86 KB 3% The "permission denied" area is where I have been focusing my attention with no luck. The following is what I've tried. Chmod'ing the /opt/squidguard/bin folder to 777 Changing the user that squidguard runs under to root / nobody / www-data / squid3 Tried changing ownership of the /opt/squidguard/bin folder to all names listed above after assigning that user to run with squid. Any help with this would be greatly appreciated.

    Read the article

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
    During ASP.NET output caching week in my local blog I wrote about how to measure ASP.NET output cache. As my posting was based on real work and real-life results then I thought that this posting is maybe interesting to you too. So here you can read what I did, how I did and what was the result. Introduction Caching is not effective without measuring it. As MVP Henn Sarv said in one of his sessions then you will get what you measure. And right he is. Lately I measured caching on local Microsoft community portal to make sure that our caching strategy is good enough in environment where this system lives. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on SharePoint Server publishing infrastructure, all those counters have same meaning as similar counters under pure ASP.NET applications. Measured counters I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS): Total number of objects added – how much objects were added to output cache. Total object discards – how much objects were deleted from output cache. Cache hit count – how many times requests were served by cache. Cache hit ratio – percent of requests served from cache. The first three counters are cumulative while last one is coefficient. You can use also other counters to measure the full effect of caching (memory, processor, disk I/O, network load etc before and after caching). Measuring process The measuring I describe here started from freshly restarted web server. I measured application during 12 hours that covered also time ranges when users are most active. The time range does not include late evening hours and night because there is nothing to measure during these hours. During measuring we performed no maintenance or administrative tasks on server. All tasks performed were related to usual daily content management and content monitoring. Also we had no advertisement campaigns or other promotions running at same time. The results You can see the results on following graphic.   Total number of objects added   Total object discards   Cache hit count   Cache hit ratio You can see that adds and discards are growing in same tempo. It is good because cache expires and not so popular items are not kept in memory. If there are more popular content then the these lines may have bigger distance between them. Cache hit count grows faster and this shows that more and more content is served from cache. In current case it shows that cache is filled optimally and we can do even better if we tune caches more. The site contains also pages that are discarded when some subsite changes (page was added/modified/deleted) and one modification may affect about four or five pages. This may also decrease cache hit count because during day the site gets about 5-10 new pages. Cache hit ratio is currently extremely good. The suggested minimum is about 85% but after some tuning and measuring I achieved 98.7% as a result. This is due to the fact that new pages are most often requested and after new pages are added the older ones are requested only sometimes. So they get discarded from cache and only some of these will return sometimes back to cache. Although this may also indicate the need for additional SEO work the result is very well in technical means. Conclusion Measuring ASP.NET output cache is not complex thing to do and you can start by measuring performance of cache as a start. 
Later you can move on and measure caching effect to other counters such as disk I/O, network, processors etc. What you have to achieve is optimal cache that is not full of items asked only couple of times per day (you can avoid this by not using too long cache durations). After some tuning you should be able to boost cache hit ratio up to at least 85%.
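
    If you prefer to collect the same numbers from code instead of Performance Monitor, System.Diagnostics.PerformanceCounter can read them. The category, counter, and instance names below are placeholders - the exact strings differ between plain ASP.NET ("ASP.NET Applications") and the SharePoint publishing cache counters used in this post, so check Performance Monitor for the names your server exposes:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class CacheCounterReader
        {
            static void Main()
            {
                // Placeholder names - replace with the exact category/counter/instance
                // strings listed in Performance Monitor on your web server.
                using (var hitRatio = new PerformanceCounter(
                    "ASP.NET Applications", "Output Cache Hit Ratio", "__Total__", true))
                {
                    hitRatio.NextValue();      // first sample primes ratio/rate counters
                    Thread.Sleep(1000);
                    Console.WriteLine("Output cache hit ratio: {0:F1}%", hitRatio.NextValue());
                }
            }
        }

    Sampling counters like this on a schedule gives the same kind of trend data as the Performance Monitor graphs above, which makes it easy to keep watching the cache after the initial tuning exercise.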

    Read the article
