Search Results

Search found 5124 results on 205 pages for 'clean'.

Page 17/205

  • Output redirection still with colors in PowerShell

    - by stej
    Suppose I run msbuild like this: function Clean-Sln { param($sln) MSBuild.exe $sln /target:Clean } Clean-Sln c:\temp\SO.sln In the Posh console the output is in colors. That's pretty handy - you can spot issues just from the colors in the output. And e.g. unimportant messages are grey. Question: I'd like to add the ability to redirect it somewhere, like this (simplified example): function Clean-Sln { param($sln) MSBuild.exe $sln /target:Clean | Redirect-AccordingToRedirectionVariable } $global:Redirection = 'Console' Clean-Sln c:\temp\SO.sln $global:Redirection = 'TempFile' Clean-Sln c:\temp\Another.sln If I use 'Console', the cmdlet/function Redirect-AccordingToRedirectionVariable should output the msbuild messages in colors, the same way as if the output were not piped. In other words - it should leave the output as it is. If I use 'TempFile', Redirect-AccordingToRedirectionVariable will store the output in a temp file. Is it even possible? I guess it is not :| Or do you have any advice on how to achieve the goal? Possible solution: if ($Redirection -eq 'Console') { MSBuild.exe $sln /target:Clean } else { MSBuild.exe $sln /target:Clean | Out-File c:\temp.txt } But if you imagine there can be many, many msbuild calls, it's not ideal. Don't be shy to tell me any new suggestion on how to cope with it ;) Any background info about redirection/coloring/output is welcome as well. (The problem is not msbuild-specific; it touches any application that writes colored output)

    Read the article

  • How do you clean Core Data generated models and code from a project?

    - by Hazmit
    I'm having an extremely annoying problem with Core Data in the iPhone SDK. I would say Core Data for the most part appears easy to use and nice to implement. I have a sqlite database that is being used as a read-only reference to pull data elements out for an iPhone app. There seem to be really mysterious issues relating to migration of the database to the most recent version of my schema. Why can't you just clean out your stored objects and models and let a project redo all of it when you compile next? You would think if you set up a stored object model there would be a way to just reset it and recompile. I've tried what feels like a thousand 'tips' that have been the results of hours of Google searches and documentation prowling to figure out how to do this. My most recent error during compile time is below. 2010-04-07 18:23:51.891 PE[1962:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Can't merge models with two different entities named 'PElement'' All of this code has been working in the simulator and is only causing me trouble now because I made a change to the schema. I also have the database options for automatic migration set as below. NSMutableDictionary *optionsDictionary = [NSMutableDictionary dictionary]; [optionsDictionary setObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption]; [optionsDictionary setObject:[NSNumber numberWithBool:YES] forKey:NSInferMappingModelAutomaticallyOption];

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&A's here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, outside the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up from a ton of methods for the various things I need to filter my query on. Having a bunch of: IEnumerable<Product> GetProductsSinceDate(DateTime date); IEnumerable<Product> GetProductsByName(string name); IEnumerable<Product> GetProductsByID(int ID); If I were allowing IQueryable to be passed around I could easily have a generic repository that looked like: public interface IRepository<T> where T : class { T GetById(int id); IQueryable<T> GetAll(); void InsertOnSubmit(T entity); void DeleteOnSubmit(T entity); void SubmitChanges(); } However, if you aren't using IQueryable then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
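    For illustration, a minimal sketch of the middle ground the asker is circling (all names hypothetical): keep the IQueryable inside the repository and accept filter expressions, so callers get composed, deferred queries without ever touching the query provider:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Linq.Expressions;

      public class Product
      {
          public int Id { get; set; }
          public string Name { get; set; }
          public DateTime Created { get; set; }
      }

      public class ProductRepository
      {
          private readonly IQueryable<Product> source; // e.g. a LINQ to SQL Table<Product>

          public ProductRepository(IQueryable<Product> source)
          {
              this.source = source;
          }

          // The predicate and row limit are composed into the provider's query,
          // so filtering happens in the store; only concrete results escape.
          public IList<Product> Find(Expression<Func<Product, bool>> predicate, int maxResults)
          {
              return source.Where(predicate).Take(maxResults).ToList();
          }
      }

      // usage: var recent = repo.Find(p => p.Created >= cutoff, 10);

    This keeps the single Find method from multiplying into GetProductsByX variants while still never leaking IQueryable past the repository boundary.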

    Read the article

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I am not sure if I should have split them into 3 questions, but they all seem sort of related: 1) How does .NET handle instances of classes and other things that take up memory? I recently found out about using the factory pattern for certain things like service classes so that they are only instantiated once in the entire application, but when I mentioned it I was told that '.NET handles a lot of that stuff automatically.' 2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and you should just use a session factory and that's it, no need to close it. But I have also read and seen many examples where people close the NHibernate session. 3) How does LINQ's datacontext handle this? Most of the time I never disposed my datacontexts and the app didn't seem to take a performance hit (though I am not running anything super intensively), but it seems like most people recommend disposing of your datacontext after you are done with it. However, I have seen many, many code examples where the dispose method is never called. Also, in general I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the datacontext unless you explicitly also grabbed them in the query. Thanks all. I am loving this site so far, I kind of get lost and spend hours just reading things on here. =)
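    A hedged sketch of the deterministic clean-up this question is really about, for the LINQ to SQL case (NorthwindDataContext, Customer and Orders are placeholders for your own generated types):

      using System;
      using System.Data.Linq;
      using System.Linq;

      public static class CustomerQueries
      {
          public static string GetCustomerCity(string customerId)
          {
              using (var db = new NorthwindDataContext())
              {
                  // to reach child objects after disposal, eager-load them first:
                  // var opts = new DataLoadOptions();
                  // opts.LoadWith<Customer>(c => c.Orders);
                  // db.LoadOptions = opts;
                  var customer = db.Customers.Single(c => c.CustomerID == customerId);
                  return customer.City; // materialized before Dispose()
              } // deferred loading attempted after this point would throw
          }
      }

    The using block is why most examples dispose: the context holds a connection and tracking state, and Dispose releases both deterministically rather than waiting for the garbage collector.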

    Read the article

  • C# : Forcing a clean run in a long running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL db table. Once it has done its bit it then starts again, selecting another 2 columns. I would pull the whole lot in one go but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so) which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column then one of the other columns and running each value in the column through a validation pipeline where the results are stored in another database. My problem is that when the reader hits the end of handling one column I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening for some reason.
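    A hedged sketch of one approach (Validate is a stub for the asker's pipeline, and the column name is assumed trusted): dispose the reader and command between passes, then force a full blocking collection, since a single GC.Collect() does not reclaim objects whose finalizers have not yet run:

      using System;
      using System.Data;
      using System.Data.SqlClient;

      public static class ColumnProcessor
      {
          public static void ProcessColumn(string connectionString, string column)
          {
              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand("SELECT ID, " + column + " FROM BigTable", conn))
              {
                  conn.Open();
                  using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
                  {
                      while (reader.Read())
                      {
                          Validate(reader.GetInt32(0), reader.GetValue(1));
                      }
                  } // reader disposed here, releasing its buffers
              }

              GC.Collect();                  // first pass queues finalizable objects
              GC.WaitForPendingFinalizers(); // let their finalizers run
              GC.Collect();                  // second pass reclaims them
          }

          static void Validate(int id, object value) { /* validation pipeline stub */ }
      }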

    Read the article

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now. I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks: $ fork_bomb <parallel jobs> <number of forks> $ fork_bomb 8 500 This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it. I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of the real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it: use Parallel::ForkManager; use Proc::ProcessTable; my $pm = Parallel::ForkManager->new( $ARGV[0] ); my $alarm_sub = sub { kill 9, map { $_->{pid} } grep { $_->{ppid} == $$ } @{ Proc::ProcessTable->new->table }; die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks: my $alarm_sub = sub { kill 9, -$$; # blocks here die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; setpgrp(0, 0); local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.

    Read the article

  • execvp: ./chk: Permission denied

    - by Hiranth
    I'm trying to install Xen 4.0.1 on Ubuntu 10.10. When I run "make world" it gives the following error at the end: make -C check clean make[4]: Entering directory `/home/hirantha/xen-4.0.1/tools/check' ./chk clean make[4]: execvp: ./chk: Permission denied make[4]: *** [clean] Error 127 make[4]: Leaving directory `/home/hirantha/xen-4.0.1/tools/check' make[3]: *** [subdir-clean-check] Error 2 make[3]: Leaving directory `/home/hirantha/xen-4.0.1/tools' make[2]: *** [subdirs-clean] Error 2 make[2]: Leaving directory `/home/hirantha/xen-4.0.1/tools' make[1]: *** [clean] Error 2 make[1]: Leaving directory `/home/hirantha/xen-4.0.1' make: *** [world] Error 2 Why is that? Please help me to solve this....

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table they have to write multiple insert statements. First of all, this is not only boring, it is also very time consuming. Additionally, one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table. -- Insert Multiple Values into SQL Server CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100)); Method 1: Traditional Method of INSERT…VALUES -- Method 1 - Traditional Insert INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'); INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second'); INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third'); Clean up -- Clean up TRUNCATE TABLE #SQLAuthority; Method 2: INSERT…SELECT -- Method 2 - Select Union Insert INSERT INTO #SQLAuthority (ID, Value) SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third'; Clean up -- Clean up TRUNCATE TABLE #SQLAuthority; Method 3: SQL Server 2008+ Row Construction -- Method 3 - SQL Server 2008+ Row Construction INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third'); Clean up -- Clean up DROP TABLE #SQLAuthority; Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Technique/class to clean invalid XML in AS3/Flex 4?

    - by Quentin
    I'm looking for a way to load invalid (malformed) XML into an AS3 XML object. Do you know a class or a technique to do so? I have to load malformed HTML and parse it as XML. This is a Flex project so I can use Flex specific classes if needed! I thought of using the HTMLLoader since it accepts all kinds of malformed HTML and renders it properly but couldn't get anything to work...

    Read the article

  • Are there clean ways to do these Quartz Triggers?

    - by Ryan Elkins
    I'm using Quartz to schedule some jobs but I have a few scenarios that I'm not sure how to resolve. 1) Let's say I have a job that is scheduled to run every 5 minutes. Generally that works well, but periodically the job takes more than 5 minutes and I don't really want multiple instances of the job running simultaneously. 2) I have a job that can take between 1 and 60 minutes to complete. I want it to run continuously but pause for 10 minutes between runs, regardless of how long it took previously. I like using Quartz for this rather than some sort of loop because if a job crashes Quartz will still spin up a new one based on the schedule. I am using Quartz in Java right now.

    Read the article

  • Is there a clean way to determine RSS version?

    - by Fenick
    Is it possible to determine the feed type and version in a way that makes sure you have the correct version? At the lowest level, so to speak. Namespaces are an obvious approach, but they're not present for a lot of feeds. Any thoughts? (I'm trying to mash up various RSS feeds). Thanks in advance for any help!
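    For illustration, a minimal C# sketch of the lowest-level check: sniff the root element rather than trusting namespaces, since many feeds omit or misuse them:

      using System.Xml;

      public static class FeedSniffer
      {
          public static string DetectFeedType(string url)
          {
              using (XmlReader reader = XmlReader.Create(url))
              {
                  reader.MoveToContent(); // positions the reader on the root element
                  switch (reader.LocalName)
                  {
                      case "rss":  return "RSS " + (reader.GetAttribute("version") ?? "2.0");
                      case "RDF":  return "RSS 1.0"; // RDF root element
                      case "feed": return "Atom";
                      default:     return "Unknown";
                  }
              }
          }
      }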

    Read the article

  • c#: Clean way to fit a collection into a multidimensional array?

    - by Rosarch
    I have an ICollection<MapNode>. Each MapNode has a Position attribute, which is a Point. I want to sort these points first by Y value, then by X value, and put them in a multidimensional array (MapNode[,]). The collection would look something like this: (30, 20) (20, 20) (20, 30) (30, 10) (30, 30) (20, 10) And the final product: (20, 10) (20, 20) (20, 30) (30, 10) (30, 20) (30, 30) Here is the code I have come up with to do it. Is this hideously unreadable? I feel like it's more hacky than it needs to be. private MapNode[,] createWorldPathNodes() { ICollection<MapNode> points = new HashSet<MapNode>(); Rectangle worldBounds = WorldQueryUtils.WorldBounds(); for (float x = worldBounds.Left; x < worldBounds.Right; x += PATH_NODE_CHUNK_SIZE) { for (float y = worldBounds.Y; y > worldBounds.Height; y -= PATH_NODE_CHUNK_SIZE) { // default is that everywhere is navigable; // a different function is responsible for determining the real value points.Add(new MapNode(true, new Point((int)x, (int)y))); } } int distinctXValues = points.Select(node => node.Position.X).Distinct().Count(); int distinctYValues = points.Select(node => node.Position.Y).Distinct().Count(); IList<MapNode[]> mapNodeRowsToAdd = new List<MapNode[]>(); while (points.Count > 0) // every iteration will take a row out of points { // get all the nodes with the greatest Y value currently in the collection int currentMaxY = points.Select(node => node.Position.Y).Max(); ICollection<MapNode> ythRow = points.Where(node => node.Position.Y == currentMaxY).ToList(); // remove these nodes from the pool we're picking from points = points.Where(node => ! ythRow.Contains(node)).ToList(); // ToList() is just so it is still a collection // put the nodes with max y value in the array, sorting by X value mapNodeRowsToAdd.Add(ythRow.OrderByDescending(node => node.Position.X).ToArray()); } MapNode[,] mapNodes = new MapNode[distinctYValues, distinctXValues]; int xValuesAdded = 0; int yValuesAdded = 0; foreach (MapNode[] mapNodeRow in mapNodeRowsToAdd) { xValuesAdded = 0; foreach (MapNode node in mapNodeRow) { // [y, x] may seem backwards, but mapNodes[y] == the yth row mapNodes[yValuesAdded, xValuesAdded] = node; xValuesAdded++; } yValuesAdded++; } return mapNodes; } The above function seems to work pretty well, but it hasn't been subjected to bulletproof testing yet.
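    For comparison, a sketch of a more compact version of the sorting half (it assumes the question's MapNode type and usings, and keeps that code's max-Y-first ordering; use OrderBy instead for the ascending layout shown above):

      private MapNode[,] SortIntoGrid(ICollection<MapNode> points, int xCount, int yCount)
      {
          var rows = points
              .GroupBy(n => n.Position.Y)
              .OrderByDescending(g => g.Key) // one row per distinct Y, max Y first
              .Select(g => g.OrderByDescending(n => n.Position.X).ToList())
              .ToList();

          var grid = new MapNode[yCount, xCount]; // [y, x]: grid row y is the yth row
          for (int y = 0; y < rows.Count; y++)
              for (int x = 0; x < rows[y].Count; x++)
                  grid[y, x] = rows[y][x];
          return grid;
      }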

    Read the article

  • Is this a clean way to manage AsyncResults with Generic Methods?

    - by Michael Stum
    I've contributed async support to a project I'm using, but I introduced a bug which I'm trying to fix. Basically I have this construct: private readonly Dictionary<WaitHandle, object> genericCallbacks = new Dictionary<WaitHandle, object>(); public IAsyncResult BeginExecute<T>(RestRequest request, AsyncCallback callback, object state) where T : new() { var genericCallback = new RequestExecuteCaller<T>(this.Execute<T>); var asyncResult = genericCallback.BeginInvoke(request, callback, state); genericCallbacks[asyncResult.AsyncWaitHandle] = genericCallback; return asyncResult; } public RestResponse<T> EndExecute<T>(IAsyncResult asyncResult) where T : new() { var cb = genericCallbacks[asyncResult.AsyncWaitHandle] as RequestExecuteCaller<T>; genericCallbacks.Remove(asyncResult.AsyncWaitHandle); return cb.EndInvoke(asyncResult); } So I have a generic BeginExecute/EndExecute method pair. Since I need to store the delegate that is called on EndExecute somewhere, I created a dictionary. I'm unsure about using WaitHandles as keys though, but that seems to be the only safe choice. Does this approach make sense? Are WaitHandles unique or could I have two equal ones? Or should I instead use the state (and wrap any user-provided state into my own state value)? Just to add, the class itself is non-generic; only the Begin/EndExecute methods are generic.
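    For illustration, a sketch of the asker's own alternative (wrapping user-provided state), using the question's RestRequest, RestResponse<T> and RequestExecuteCaller<T> types; these two methods would replace the pair above, and the shared dictionary disappears:

      internal class ExecuteState<T>
      {
          public RequestExecuteCaller<T> Caller;
          public object UserState; // the caller's original state, still retrievable
      }

      public IAsyncResult BeginExecute<T>(RestRequest request, AsyncCallback callback, object state) where T : new()
      {
          var caller = new RequestExecuteCaller<T>(this.Execute<T>);
          var wrapped = new ExecuteState<T> { Caller = caller, UserState = state };
          // everything EndExecute needs rides on the IAsyncResult itself
          return caller.BeginInvoke(request, callback, wrapped);
      }

      public RestResponse<T> EndExecute<T>(IAsyncResult asyncResult) where T : new()
      {
          var wrapped = (ExecuteState<T>)asyncResult.AsyncState;
          return wrapped.Caller.EndInvoke(asyncResult);
      }

    This sidesteps the WaitHandle-uniqueness question entirely, and it is also thread-safe, which the unsynchronized dictionary is not.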

    Read the article

  • How can I do a clean Mod_Rewrite that hides the variable numbers passed in the query string but just

    - by Jay Bee
    Hi, I have been developing web applications for a while now. My applications have been faring poorly in search engine results because of the dynamic links that my websites generate. I admire the way some developers do their mod_rewrite to produce something like http://www.mycompany.com/accommodation/europe/ as a substitute for "index.php?category_id=2&country=23". How can I achieve that in my URLs? Warm regards, JB
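    For illustration, the technique being described is usually a front-controller rewrite in .htaccess; a hedged sketch (country_slug and the lookup are hypothetical -- the application itself maps "europe" back to country=23):

      RewriteEngine On
      # /accommodation/europe/ is rewritten to a query string the script understands
      RewriteRule ^accommodation/([a-z]+)/?$ index.php?category_id=2&country_slug=$1 [L,QSA]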

    Read the article

  • Nice, clean, simple way of getting a dataset from ASP.NET to plain HTML jQuery or JavaScript library

    - by David S
    I know this is probably an open-ended question, and I have tried looking around so much over the last year or two... maybe I am looking for a perfect place that doesn't exist! Of course it's all about perception, no less.. Anyway, just to clarify what I am trying to do and why: I want to be able to use (primarily for the moment) ASP.NET or services thereof to get a dataset - whatever the source data, I can obviously get a dataset of rows/columns. I want to be able to, as simply as possible, get that data over to the client via xml/json/whatever, to then use in a "variety" of ways. "Variety" of ways meaning I would like to "easily" bind that data to say a grid, or a combo dropdown, or just simply render to a textbox - BUT by referencing the dataset as I would, say, on the server side. Now I know this all sounds simplistic, and I know there are lots of complications.. so I have tried the following so far over the last year or so: ExtJS - very good, nice solid framework, but I just found it a bit too much to use in everyday basic apps - great if I were building a whole application with it. Yahoo YUI - I haven't looked recently, but I guess some of the concepts were similar to ExtJS? jQuery - of course to get data etc, it was OK, and I guess there are so many 3rd party plugins that a mix and match might work? Adobe Spry - ironically this was the closest to getting a dataset-style structure to the JavaScript client, although it seems to have dropped off/gone quiet..? I may be wrong.. I did have a very cursory play with Tibco GI and another one I cannot remember the name of! But again, it felt like it was great for building a whole app perhaps? Anyway, I am very amazed by all of the technologies coming out, and really not biased one way or the other; I really just want a very simple way of getting data from the server, and having a basic/very flexible way of working with that data in the client without using server technologies.. I need to keep the server flexible as I may need to use PHP or Java technologies, not just .NET. So again, sorry for the rambles, but if anyone out there has had a simple experience, or would like to share some ideas, it would be very welcome!! David.
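    As a minimal sketch of the server half (assuming classic ASP.NET with System.Web.Extensions available): flatten the DataTable into name/value rows, which JavaScriptSerializer can emit as a JSON array any client library can bind:

      using System.Collections.Generic;
      using System.Data;
      using System.Web.Script.Serialization;

      public static class TableJson
      {
          public static string ToJson(DataTable table)
          {
              var rows = new List<Dictionary<string, object>>();
              foreach (DataRow row in table.Rows)
              {
                  var record = new Dictionary<string, object>();
                  foreach (DataColumn col in table.Columns)
                      record[col.ColumnName] = row[col];
                  rows.Add(record);
              }
              return new JavaScriptSerializer().Serialize(rows);
          }
      }

    On the client, $.getJSON (or any library's equivalent) can then bind the resulting array to a grid, dropdown, or textbox.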

    Read the article

  • Double Buffering for Game objects, what's a nice clean generic C++ way?

    - by Gary
    This is in C++. So, I'm starting from scratch writing a game engine for fun and learning from the ground up. One of the ideas I want to implement is to have game object state (a struct) be double-buffered. For instance, I can have subsystems updating the new game object data while a render thread is rendering from the old data, by guaranteeing there is a consistent state stored within the game object (the data from last time). After rendering of the old and updating of the new is finished, I can swap buffers and do it again. Question is, what's a good forward-looking and generic OOP way to expose this to my classes while trying to hide implementation details as much as possible? Would like to know your thoughts and considerations. I was thinking operator overloading could be used, but how do I overload assignment for a templated class's member within my buffer class? For instance, I think this is an example of what I want: doublebuffer<Vector3> data; data.x=5; //would write to the member x within the new buffer int a=data.x; //would read from the old buffer's x member data.x+=1; //I guess this shouldn't be allowed If this is possible, I could choose to enable or disable double-buffering of structs without changing much code. This is what I was considering: template <class T> class doublebuffer{ T T1; T T2; T * current = &T1; T * old = &T2; public: doublebuffer(); ~doublebuffer(); void swap(); operator=()?... }; and a game object would be like this: struct MyObjectData{ int x; float afloat; }; class MyObject: public Node { doublebuffer<MyObjectData> data; functions... } What I have right now is functions that return pointers to the old and new buffers, and I guess any classes that use them have to be aware of this. Is there a better way?

    Read the article

  • What's a clean way to break up a DataTable into chunks of a fixed size with Linq?

    - by Michael Haren
    Update: Here's a similar question Suppose I have a DataTable with a few thousand DataRows in it. I'd like to break up the table into smaller chunks of rows for processing. I thought C#3's improved ability to work with data might help. This is the skeleton I have so far: DataTable Table = GetTonsOfData(); // Chunks should be any IEnumerable<Chunk> type var Chunks = ChunkifyTableIntoSmallerChunksSomehow; // ** help here! ** foreach(var Chunk in Chunks) { // Chunk should be any IEnumerable<DataRow> type ProcessChunk(Chunk); } Any suggestions on what should replace ChunkifyTableIntoSmallerChunksSomehow? I'm really interested in how someone would do this with access to C#3 tools. If attempting to apply these tools is inappropriate, please explain! Update 3 (revised chunking as I really want tables, not IEnumerables; going with an extension method--thanks Jacob): Final implementation: Extension method to handle the chunking: public static class HarenExtensions { public static IEnumerable<DataTable> Chunkify(this DataTable table, int chunkSize) { for (int i = 0; i < table.Rows.Count; i += chunkSize) { DataTable Chunk = table.Clone(); foreach (DataRow Row in table.Select().Skip(i).Take(chunkSize)) { Chunk.ImportRow(Row); } yield return Chunk; } } } Example consumer of that extension method, with sample output from an ad hoc test: class Program { static void Main(string[] args) { DataTable Table = GetTonsOfData(); foreach (DataTable Chunk in Table.Chunkify(100)) { Console.WriteLine("{0} - {1}", Chunk.Rows[0][0], Chunk.Rows[Chunk.Rows.Count - 1][0]); } Console.ReadLine(); } static DataTable GetTonsOfData() { DataTable Table = new DataTable(); Table.Columns.Add(new DataColumn()); for (int i = 0; i < 1000; i++) { DataRow Row = Table.NewRow(); Row[0] = i; Table.Rows.Add(Row); } return Table; } }
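    A compact LINQ-only variation on the accepted approach, for cases where row lists are enough (it trades the cloned DataTables, and their ImportRow copying, for List<DataRow> chunks):

      using System.Collections.Generic;
      using System.Data;
      using System.Linq;

      public static class ChunkExtensions
      {
          public static IEnumerable<List<DataRow>> ChunkRows(this DataTable table, int chunkSize)
          {
              return table.AsEnumerable() // requires System.Data.DataSetExtensions
                          .Select((row, index) => new { row, index })
                          .GroupBy(x => x.index / chunkSize)
                          .Select(g => g.Select(x => x.row).ToList());
          }
      }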

    Read the article

  • I Clean Solution'd my Winform App and now it's broken...

    - by Refracted Paladin
    I have a WinForm app that uses a 3rd-party library of controls, DevExpress. I also created a bunch of controls myself, extending those controls. Everything had been working fine; then all of a sudden I opened VS today and on the design surface all my extended controls were missing. I then tried rebuilding, to no avail. Then I tried Cleaning and Rebuilding and made it even worse. Now I have tons (520) of errors stating things like -- Error 179 The name 'datBirthDate' does not exist in the current context D:\Documents\Visual Studio 2008\Projects\MatrixReloaded\MatrixReloaded\Controls\Member\ucGeneral.cs 339 17 MatrixReloaded Also, if I try to open a form or user control in Design Mode I first get this -- "Could not find type 'MyType'. Please make sure that the assembly that contains this type is referenced." And then, if I click Ignore and Continue, I get this for all forms and controls when I try to look at them in Design Mode -- "Exception of type System.OutOfMemoryException was thrown". Help!?!? When I googled I came across mostly references to VS 2003... I am on 2008 SP1.

    Read the article

  • Clean install of IIS 6 on Windows Server 2003 ignoring 'web.config'?

    - by Vario
    Hi, Any help with this would be really appreciated! As the title suggests, I'm running a brand new install of Windows Server 2003 and IIS 6, and I'm basically attempting to mirror a live web server onto a new internal development server, which runs the same setup. It's an ASP.NET site that relies heavily on URL rewriting (using Intelligencia). ASP.NET is set to run on v2.0.50727 on both servers. I've tried intentionally introducing syntax errors into the web.config and it just appears to ignore them completely; so given IIS 6 isn't reading the web.config, the rest of the site doesn't work at all (I get a 404 error, as 'Default.aspx' doesn't exist and the web.config handles the default page rewriting). Having looked at the Application Mappings, '.config' files are set to use the default 'c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll', which exists. Is there anything else I may be missing? Thanks in advance.

    Read the article

  • What does a WinForm application need to be designed for usability, and be robust, clean, and professional?

    - by msorens
    One of the principal problems impeding productivity in software implementation is the classic conundrum of "reinventing the wheel". Of late I am a .NET developer, and even the wonderful wizardry of .NET and Visual Studio covers only a portion of this challenging issue. Below I present my initial thoughts both on what is available and what should be available from .NET on a WinForm, focusing on good usability. That is, aspects of an application exposed to the user, making the user experience easier and/or better. (I do include a couple items not visible to the user because I feel strongly about them, such as diagnostics.) I invite you to contribute to these lists. LIST A: Components provided by .NET These are substantially complete components provided by .NET, i.e. those requiring at most trivial coding to use. "About" dialog -- add it with a couple clicks then customize. Persist settings across invocations -- .NET has the support; just use a few lines of code to glue them together. Migrate settings with a new version -- a powerful one, available with one line of code (see the sketch after this list). Tooltips (and infotips) -- .NET includes just plain text tooltips; third-party libraries provide richer ones. Diagnostic support -- TraceSources, TraceListeners, and more are built-in. Internationalization -- support for tailoring your app to languages other than your own. LIST B: Components not provided by .NET These are not supplied at all by .NET or supplied only as rudimentary elements requiring substantial work to be realized. Splash screen -- a small window present during program startup with your logo, loading messages, etc. Tip of the day -- a mini-tutorial presented one bit at a time each time the user starts your app. Check for available updates -- facility to query a server to see if the user is running the latest version of your app, then provide a simple way to upgrade if a new version is found. Maximize to multiple monitors -- the canonical window allows you to maximize to a single monitor only; in my apps I allow maximizing across multiple monitors with a click. Taskbar notifier -- flash the taskbar when your backgrounded app has new info for the user. Options dialogs -- multi-page dialogs letting the user customize the app settings to his/her own preferences. Progress indicator -- for long-running operations, give the user feedback on how far there is left to go. Memory gauge -- an indicator (either absolute or percentage) of how much memory is used by your app. LIST C: Stylistic and/or tiny bits of functionality This list includes bits of functionality that are too tiny to merit being called a component, along with stylistic concerns (that admittedly do overlap with the Windows User Experience Interaction Guidelines). Design a form for resizing -- unless you are restricting your form to be a fixed size, use anchors and docking so that it does what is reasonable when enlarged or shrunk by the user. Set tab order on a form -- repeated tab presses by the user should advance from field to field in a logical order rather than the default order in which you added fields. Adjust controls to be aware of operating modes -- when starting a background operation with, for example, a "Go" button, disable that "Go" button until the operation completes. Provide access keys for all menu items (per UXGuide). Provide shortcut keys for commonly used menu items (per UXGuide). Set up some (global or important or common) shortcut keys without associating them with menu items. Allow some menu items to be invoked with or without modifier keys (shift, control, alt) where the modifier key is useful to vary the operation slightly. Hook up Escape and Enter on child forms to do what is reasonable. Decorate any library classes with documentation comments and attributes -- this allows Visual Studio to leverage them for IntelliSense and property descriptions. Spell check your code! What else would you include?
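    For the settings-migration item above, a hedged sketch of the usual one-liner plus its gate (CallUpgrade is an assumed user-scoped bool setting defaulting to true; Settings is the generated Properties.Settings class):

      static void MigrateSettingsIfNeeded()
      {
          if (Settings.Default.CallUpgrade)
          {
              Settings.Default.Upgrade();      // copies values from the previous version
              Settings.Default.CallUpgrade = false;
              Settings.Default.Save();
          }
      }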

    Read the article

  • What's a clean way to have the server return a JavaScript function which would then be invoked?

    - by Khnle
    My application is architected as a host of plug-ins that have not yet been written. There's a long reason for this, but with each new year, the business logic will be different and we don't know what it will be like (Think of TurboTax if that helps). The plug-ins consist of both server and client components. The server components deals with business logic and persisting the data into database tables which will be created at a later time as well. The JavaScript manipulates the DOM for the browsers to render afterward. Each plugin lives in a separate assembly, so that they won't disturb the main application, i.e., we don't want to recompile the main application. Long story short, I am looking for a way to return JavaScript functions to the client from an Ajax get request, and execute these JavaScript functions (which are just returned). Invoking a function in Javascript is easy. The hard part is how to organize or structure so that I won't have to deal with maintenance problem. So concat using StringBuilder to end up with JavaScript code as a result of calling toString() from the string builder object is out of the question. I want to have no difference between writing JavaScript codes normally and writing Javascript codes for this dynamic purpose. An alternative is to manipulate the DOM on the server side, but I doubt that it would be as elegantly as using jQuery on the client side. I am open for a C# library that supports chainable calls like jQuery that also manipulates the DOM too. Do you have any idea or is it too much to ask or are you too confused? Edit1: The point is to avoid recompiling, hence the plug-ins architecture. In some other parts of the program, I already use the concept of dynamically loading Javascript files. That works fine. What I am looking here is somewhere in the middle of the program when an Ajax request is sent to the server. Edit 2: To illustrate my question: Normally, you would see the following code. An Ajax request is sent to the server, a JSON result is returned to the client which then uses jQuery to manipulate the DOM (creating tag and adding to the container in this case). $.ajax({ type: 'get', url: someUrl, data: {'': ''}, success: function(data) { var ul = $('<ul>').appendTo(container); var decoded = $.parseJSON(data); $.each(decoded, function(i, e) { var li = $('<li>').text(e.FIELD1 + ',' + e.FIELD2 + ',' + e.FIELD3); ul.append(li); }); } }); The above is extremely simple. But next year, what the server returns is totally different and how the data to be rendered would also be different. In a way, this is what I want: var container = $('#some-existing-element-on-the-page'); $.ajax({ type: 'get', url: someUrl, data: {'': ''}, success: function(data) { var decoded = $.parseJSON(data); var fx = decoded.fx; var data = decode.data; //fx is the dynamic function that create the DOM from the data and append to the existing container fx(container, data); } }); I don't need to know, at this time what data would be like, but in the future I will, and I can then write fx accordingly.

    Read the article
