Search Results

Search found 1882 results on 76 pages for 'recurring maintenance'.

Page 62 of 76

  • Source Control Manager Backend

    - by Gabriel Parenza
    Hi friends, which do you think is the better approach for a source control manager backend? I am weighing a plain file system against a hosted Subversion service (my company already has another group taking care of the hosting). Advantages of hosted Subversion:
    * zero maintenance on our end
    * automatic backup and recovery
    * reliability through auto-backup and file redundancy
    * built-in file history view, file merge and file diff
    The file system, on the other hand, does not have the features mentioned above, but it is much simpler. Moreover, the files would live on a Linux machine which is backed up, which takes care of file-system crash issues. Subversion will need working copies, which are going to be on this same Linux machine, hence the wish not to have an extra layer. Folks, I am looking for stronger reasons why I should take Subversion instead of keeping things simple and going with the file system. Let me know your opinions. Many thanks in advance, Gabriel. PS: I have explored a few commercial source control managers and have decided to go this route as it better suits our needs.

    Read the article

  • LaTeX: bibliography per chapter.

    - by YuppieNetworking
    Hello all, I am helping a colleague with his PhD thesis and we need to present the bibliography at the end of each chapter. The question is: does anyone have a minimal working example for this case using LaTeX + BibTeX? The current document structure we use is the following:
    * main.tex
    * chap1.tex
    * chap2.tex
    * ...
    * chapn.tex
    * biblio.bib
    main.tex contains the packages, document declarations, macros and an \include for each chapter. biblio.bib is the only BibTeX file (I think it is easier to have all citations in one place). We have searched and tried different LaTeX packages, reading and following their documentation, specifically bibunits and chapterbib. bibunits successfully generates the bu*.aux files, but when running bibtex on each one of them an error occurs, since there is no \bibdata entry in the .aux file. chapterbib also generates an .aux file, but bibtex finishes with an error caused by using multiple \bibliography{file} commands in the .tex files (one per chapter). Some coworkers suggested a separate BibTeX file for each chapter, which could become a maintenance problem in the future when the same publications are cited in different chapters. We would like to keep this document structure if possible. So, if anyone could shed some light on this problem, we would appreciate it. Thanks.
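
    A minimal sketch of the bibunits approach described above, assuming the single biblio.bib layout; the \defaultbibliography declaration is what writes the \bibdata entry into each bu*.aux file, which is the missing piece behind the error mentioned (citation keys here are placeholders):
      % main.tex -- bibunits sketch with the chapters collapsed inline for brevity
      \documentclass{report}
      \usepackage{bibunits}
      \defaultbibliography{biblio}        % every unit cites from biblio.bib
      \defaultbibliographystyle{plain}
      \begin{document}
      \begin{bibunit}
        \chapter{First chapter}
        Some claim~\cite{placeholder-key-1}.
        \putbib                           % per-chapter bibliography goes here
      \end{bibunit}
      \begin{bibunit}
        \chapter{Second chapter}
        Another claim~\cite{placeholder-key-2}.
        \putbib
      \end{bibunit}
      \end{document}
    After latex main, run bibtex bu1, bibtex bu2, and so on (one run per unit), then latex main twice more. With \include'd chapters, the \begin{bibunit}...\end{bibunit} pair would move into each chapN.tex.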

    Read the article

  • ACL architecture for a Software-as-a-Service application in Spring 3.0

    - by geoaxis
    I am building a software-as-a-service application using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate) and I have to come up with a flexible access control list mechanism. I have three different kinds of users:
    * System (can do anything in the system; includes admins and internal daemons)
    * Operations (can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    * End users (they belong to one or more organizations; for each organization the user can have one or more roles, such as organization admin or organization read-only member, and a role like orgadmin can also add users for that organization)
    Now my question is: how should I model the User entity? If I just take the end user, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user. I have the option of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better extensible ACL architecture, please suggest it. Since it is software as a service, one would expect that a lot of different organizations would be part of the same system. I also have a concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn a hundred reports on the system, og2 should not suffer), but that is something more advanced for now and is not directly related to ACL, rather to the physical distribution of data and the setup of services based on those ACLs. This is a community wiki question, please correct anything you wish. Thanks
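
    One common way to model the "user has roles per organization" relationship described above is an explicit membership entity rather than a plain many-to-many. A minimal JPA sketch under that assumption (entity, enum and column names are illustrative, not taken from the question):
      import javax.persistence.*;

      // One row per (user, organization, role); UX in og1 with two roles is simply two rows.
      @Entity
      @Table(uniqueConstraints = @UniqueConstraint(columnNames = {"user_id", "organization_id", "role"}))
      public class Membership {
          @Id @GeneratedValue
          private Long id;

          @ManyToOne(optional = false)
          private User user;

          @ManyToOne(optional = false)
          private Organization organization;

          @Enumerated(EnumType.STRING)
          private OrgRole role;            // e.g. ORG_ADMIN, ORG_READ_ONLY

          // getters and setters omitted
      }

      public enum OrgRole { ORG_ADMIN, ORG_READ_ONLY }
    At login, Spring Security can map the memberships to authorities such as ROLE_ORG_ADMIN_og1, which keeps checks like "is UX an orgadmin of og1?" a simple lookup and leaves room for new role types later.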

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
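
    For what it's worth, a quick way to see how much the instance actually spends on this wait family is the standard waits DMV; a sketch (the counters are cumulative since the last restart or stats clear, so judge the numbers against that window):
      SELECT wait_type,
             waiting_tasks_count,
             wait_time_ms,
             wait_time_ms - signal_wait_time_ms AS resource_wait_ms
      FROM sys.dm_os_wait_stats
      WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD')
      ORDER BY wait_time_ms DESC;
    If BACKUPBUFFER dominates rather than BACKUPIO, the backup is starved for buffers rather than waiting on the destination, and the BUFFERCOUNT/MAXTRANSFERSIZE options of the BACKUP command are the usual knobs to experiment with.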

    Read the article

  • Solutions for working with multiple branches in ASP.Net

    - by Corey McKinnon
    At work, we are often working on multiple branches of our product at one time. For example, right now we have a maintenance branch, a branch with code just going to QA, and a branch for a new major initiative that won't be merged for some time. Our web project is set up to use IIS, so every time we switch to a different branch we have to go into IIS Admin and change the path on the virtual directory, then reset IIS, and sometimes even restart Visual Studio, to avoid getting build errors. Is there any way to simplify this, other than not having our web project set up as a virtual directory? I'm not sure we want to make that change at this point. What do you do to make this easier, assuming you do this? Corey
    Follow-ups:
    * @RedWolves: virtual machines would definitely work, but I'm not sure it would be any simpler, especially for some of the other developers on my team, which is partly why I'm looking for more simplicity.
    * @Dan: we're not able to change source control providers, unfortunately.
    * @pix0r: that's something I'll try when I get back to work. Thanks for the suggestion.
    * @Haacked: I'll have to give that a try too, but I think we have some issues with why that won't work (I can't remember exactly why right now; this application was originally written in .NET 1.1, pre-Cassini, and I can't remember if we tried it when we upgraded to 2.0 or not).
    Thanks all for the responses so far.

    Read the article

  • Delphi dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions listed below. Many thanks in advance for your answers!
    * are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    * are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    * would you recommend to upgrade the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority)
    * can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    * which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house so development and database administration is done internally.
    Update: one problem I found now is that there are two different persistent field types for Unicode and non Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. So we can not just change the database and the connection code page to Unicode. We also have to modify all datamodules to use the new field type. The modified datamodule however will not be backwards compatible.

    Read the article

  • BITS client fails to specify HTTP Range header

    - by user256890
    Our system is designed to deploy to regions with unreliable and/or insufficient network connections. We build our own fault-tolerant data replication services that use BITS. Due to some security and maintenance requirements, we implemented our own ASP.NET file download service on the server side, instead of just letting IIS serve up the files. When the BITS client makes an HTTP download request with a specified range of the file, our ASP.NET page pulls the demanded file segment into memory and serves that up as the HTTP response. That is the theory. ;) This theory fails in artificial lab scenarios, but I would not let the system deploy in real-life scenarios unless we can overcome that. Lab scenario: I have the BITS client and IIS on the same developer machine, so practically I have enormous network "bandwidth", and BITS is intelligent enough to detect that. As the BITS client discovers the unlimited bandwidth, it gets more and more "greedy". At each HTTP request, BITS wants to grasp greater and greater file ranges (we are talking about downloading CD ISO files, videos), demanding 20-40 MB inside a single HTTP request, a size that I am not comfortable pulling into memory on the server side in one go. I can overcome that simply by giving less than demanded. That is OK. However, BITS gets really "confident" and "arrogant", demanding files WITHOUT specifying the download range, i.e., it wants the entire file in a single request, and this is where things go wrong. I do not know how to answer that request in the case of a 600 MB file. If I just provide the starting 1 MB range of the file, the BITS client keeps sending HTTP requests for the same file without a download range, hammering its point that it wants the entire file in one go. Since I am reluctant to provide the entire file, BITS gives up after several tries and reports an error. Any thoughts?
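
    For reference, a rough sketch of the range handling being described, written as an ASP.NET IHttpHandler; the file path and the 4 MB chunk size are placeholders, and the point is that even the "no Range header" case can be streamed in slices instead of buffered in memory:
      using System;
      using System.IO;
      using System.Web;

      // Hypothetical handler: serves byte ranges to BITS without loading a whole segment.
      public class RangeDownloadHandler : IHttpHandler
      {
          private const int ChunkSize = 4 * 1024 * 1024;   // stream in 4 MB slices

          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              string path = context.Server.MapPath("~/App_Data/payload.iso"); // placeholder
              using (FileStream fs = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.Read))
              {
                  long total = fs.Length, start = 0, end = total - 1;

                  string range = context.Request.Headers["Range"];            // e.g. "bytes=0-1048575"
                  if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
                  {
                      string[] parts = range.Substring(6).Split('-');
                      if (parts[0].Length > 0) start = long.Parse(parts[0]);
                      if (parts.Length > 1 && parts[1].Length > 0) end = long.Parse(parts[1]);
                      context.Response.StatusCode = 206;                      // Partial Content
                      context.Response.AddHeader("Content-Range",
                          string.Format("bytes {0}-{1}/{2}", start, end, total));
                  }

                  context.Response.AddHeader("Accept-Ranges", "bytes");
                  context.Response.AddHeader("Content-Length", (end - start + 1).ToString());
                  context.Response.ContentType = "application/octet-stream";

                  fs.Seek(start, SeekOrigin.Begin);
                  byte[] buffer = new byte[ChunkSize];
                  long remaining = end - start + 1;
                  while (remaining > 0 && context.Response.IsClientConnected)
                  {
                      int read = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                      if (read == 0) break;
                      context.Response.OutputStream.Write(buffer, 0, read);
                      context.Response.Flush();
                      remaining -= read;
                  }
              }
          }
      }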

    Read the article

  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, for easier maintenance of the packages across different environments like Dev, QA and Prod. After setting up all the packages with the config files, we tested them from Business Intelligence Development Studio and they worked fine, picking up the values from the configuration file. When we changed the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tried this for two different environments and the packages worked fine, so we deployed to Prod and it was working fine there as well. Yesterday I had to make a functional change to one package, so I made the change (it was just changing a parameter in a SQL procedure execution task, not related to any variables) and tested it in BIDS against two environments, where it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e. without the use of a manifest). The config file which was used by the package previously, and which had been working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA, so I believe the package didn't read from the config file. One possible reason: the .dtsx file usually still holds the last executed values (you can see them by opening the file in a text editor). Normally these are overwritten from the config file when the package executes; I guess that is not happening here. What are the possible reasons for this? We have tested extensively switching between test environments and it does not show this behavior, but we have now encountered it twice in the Prod environment. Has anyone else experienced this, and how did you resolve it?
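
    Unrelated to the root cause, but one way to take the guesswork out of a hand-deployed package is to name the configuration file explicitly when running it, so design-time values cannot win silently; a sketch with placeholder paths (/F and /Conf are documented dtexec switches):
      dtexec /F "D:\Deploy\Packages\MyPackage.dtsx" /Conf "D:\Deploy\Config\Prod.dtsConfig"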

    Read the article

  • How can I make a server log file available via my ASP.NET MVC website?

    - by Gary McGill
    I have an ASP.NET MVC website that works in tandem with a Windows Service that processes file uploads. For easy maintenance of the site, I'd like the log file for the Windows Service to be accessible (to me, only) via the website, so that I can hit http://myserver/logs/myservice to view the contents of the log file. How can I do that? At a guess, I could either have the service write its log file in a "Logs" folder at the top level of the site, or I could leave it where it is and set up a virtual directory to point to it. Which of these is better - or is there another, better way? Wherever the file is stored, I can see that there's going to be another problem. I tried out the first option (Logs folder in my website), but when I try to access the file via HTTP I get an error: The process cannot access the file 'foo' because it is being used by another process. Now, I know from experience that my service keeps the file locked for writing while it's running, but that I can still open the file in Notepad to view the current contents. (I'm surprised that IIS insists on write access, if that's what's happening). How can I get around that? Do I really have to write a handler to read the file and serve it to the browser myself? Or can I fix this with configuration or somesuch? PS. I'm using IIS7 if that helps.
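
    A minimal sketch of the "write a handler myself" option, as an MVC action that opens the log with FileShare.ReadWrite so the service's write lock does not block the read; the route, role name and log path are assumptions:
      using System.IO;
      using System.Web.Mvc;

      [Authorize(Roles = "Admin")]                         // placeholder: however "me only" is enforced
      public class LogsController : Controller
      {
          // GET /logs/myservice (assuming a route that maps to this action)
          public ActionResult MyService()
          {
              string path = @"C:\Logs\MyService.log";      // wherever the Windows Service writes
              // ReadWrite sharing lets us read while the service keeps its writer open,
              // which is the same thing Notepad does when it opens the live file.
              using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
              using (var reader = new StreamReader(fs))
              {
                  return Content(reader.ReadToEnd(), "text/plain");
              }
          }
      }
    Serving it through an action also sidesteps the question of where the file physically lives, since nothing under the site root needs to be exposed as static content.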

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [an updated list, 21st Aug 09]. Help me compile a list of all the advantages & disadvantages of building an application on Google App Engine.
    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.
    Cons:
    1) Locked into Google App Engine??
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure-Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (the JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.
    Known issues: http://code.google.com/p/googleappengine/issues/list
    Hard limits:
    * Apps per developer - 10
    * Time per request - 30 sec
    * Files per app - 3,000
    * HTTP response size - 10 MB
    * Datastore item size - 1 MB
    * Application code size - 150 MB
    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • Castle MonoRail: unit testing RenderText

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in MonoRail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not set up properly in PrepareController(), or is closed in RenderText().
    Example action:
      public void DeleteFoo( int id )
      {
          var success = false;
          var foo = Service.Get<Foo>( id );
          if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) )
          {
              Service.Delete<Foo>( id );
              success = true;
          }
          CancelView();
          RenderText( "{ success: " + success + " }" );
      }
    Example test (using Moq):
      [Test]
      public void DeleteFoo()
      {
          var controller = new MyController();
          PrepareController( controller );
          var foo = new Foo { Id = 123 };
          var mockService = new Mock<Service>();
          mockService.Setup( s => s.Get<Foo>( foo.Id ) ).Returns( foo );
          controller.Service = mockService.Object;
          controller.DeleteFoo( foo.Id );
          mockService.Verify( s => s.Delete<Foo>( foo.Id ) );
          Assert.AreEqual( "{success:true}", GetResponse( Response ) );
      }

      // response.OutputStream.Seek throws a "System.ObjectDisposedException: Cannot access a closed Stream." exception
      private static string GetResponse( IResponse response )
      {
          response.OutputStream.Seek( 0, SeekOrigin.Begin );
          var buffer = new byte[response.OutputStream.Length];
          response.OutputStream.Read( buffer, 0, buffer.Length );
          return Encoding.ASCII.GetString( buffer );
      }

    Read the article

  • Hopping from a C++ to a Perl/Unix job

    - by rocknroll
    Hi all, I have been a C++/Linux developer till now and I am adept in this stack. Of late I have been getting opportunities that require Perl/Unix expertise (with knowledge of C++ and shell scripting). Organizations are showing interest even though I don't have much scripting experience to boast of. The role is more of a support and maintenance project involving SQL as well. Lately I have been in a fix over whether to forgo these offers or not. I don't know the dynamics of an IT organization, so on one hand I fear that my C++ experience will be nullified, while on the positive side I would get to work on a new technology stack which would only add to my skill set. I am sure most of you have encountered such dilemmas at some point and made a decision. I want you to share your perspectives on a scenario where a person is required to change his/her technology stack when changing jobs. What are the merits and demerits of going with either choice? Also, I know that C++ isn't going anywhere in the near future. What about Perl? I have no clue what the future holds for a Perl developer, or whether there are enough opportunities for Perl developers. I am asking this question here because most of my fellow programmers face this career-choice dilemma. Thanks.

    Read the article

  • Selectively suppress XML comments?

    - by Mike Post
    We deliver a number of assemblies to external customers, but not all of the public APIs are officially supported. For example, due to less-than-optimal design choices, sometimes a type must be publicly exposed from an assembly for the rest of our code to work, but we don't want customers to use that type. One part of communicating the lack of support is to not provide any IntelliSense in the form of XML comments. Is there a way to selectively suppress XML comments? I'm looking for something other than ignoring warning 1591, since that is a long-term maintenance issue. Example: I have an assembly with public classes A and B. A is officially supported and should have XML documentation. B is not intended for external use and should not be documented. I could turn on XML documentation and then suppress warning 1591, but when I later add the officially supported class C, I want the compiler to tell me that I've screwed up and failed to add the XML documentation. This wouldn't happen if I had suppressed 1591 at the project level. I suppose I could #pragma across entire classes, but it seems like there should be a better way to do this.
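
    The per-class #pragma mentioned at the end looks roughly like this (class names follow the A/B example above); warning 1591 stays active everywhere else, so a future undocumented class C is still flagged:
      /// <summary>Officially supported; documented as usual.</summary>
      public class A
      {
          /// <summary>Does supported things.</summary>
          public void DoWork() { }
      }

      #pragma warning disable 1591  // B is public only for internal plumbing; undocumented on purpose
      public class B
      {
          public void InternalPlumbing() { }
      }
      #pragma warning restore 1591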

    Read the article

  • How to handle missing files on MVC

    - by kaivalya
    What is your preferred way to handle hits to files that do not exist in your MVC app? I have a couple of web apps running on MVC, and they are constantly getting hits for files, folders, etc. that do not exist in the app structure. The apps throw the exception: "The controller for path could not be found or it does not implement IController". I am trying to find the best way to handle this. I have three global routes in my global.asax file (see below), and at this point I am happy with that simple definition. I know that if I added a route definition for every controller, I could then add a definition to ignore the rest and handle these hits; but if it is possible to solve this problem without that, I would prefer not to add route definitions for each controller, which I believe would flood the route definitions and add a layer of maintenance I don't like.
      //Aggregates 2nd level
      routes.MapRoute(
          "AggregateLevel2",
          "{controller}/{action}/{id}/{childid}/{childidlevel2}",
          new { controller = "Home", action = "Index", id = "", childid = "", childidlevel2 = "" }
      );
      //Aggregates 1st level
      routes.MapRoute(
          "AggregateLevel1",
          "{controller}/{action}/{id}/{childid}",
          new { controller = "Home", action = "Index", id = "", childid = "" }
      );
      routes.MapRoute(
          "Default",
          "{controller}/{action}/{id}",
          new { controller = "Home", action = "Index", id = "" }
      );
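
    One way to keep that traffic out of MVC routing entirely, without enumerating controllers, is an ignore rule keyed on file extensions, registered before the MapRoute calls above; a sketch based on the usual catch-all-with-constraint pattern (the extension list is an example, tune it to what actually shows up in the logs):
      routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
      routes.IgnoreRoute("favicon.ico");
      // Anything that looks like a static file request never reaches a controller;
      // IIS then returns its normal 404 for paths that really do not exist.
      routes.IgnoreRoute("{*staticfile}",
          new { staticfile = @".*\.(gif|jpg|jpeg|png|css|js|ico|txt|xml)(/.*)?" });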

    Read the article

  • Windows with Plesk Panel installs ActiveState Python 2.5.0 - any thoughts?

    - by B Mahoney
    I expect to run Pylons on Windows Server 2003 and IIS 6 on a Virtual Private Server (VPS). Most work with the VPS is done through the Plesk 8.6 panel. The Plesk panel has a lot of maintenance advantages for us. However, this Plesk configuration installs ActiveState Python 2.5.0. The Parallels Plesk documents for 8.6 and version 9 insist that only this configuration should be installed. I'm not eager to settle for the baseline 2.5.0, but I don't see any safe upgrade path. How has ActiveState Python 2.5.0 been for other users? Can you replace the Parallels\Plesk\Additional\Python with another distribution? I don't want to break Plesk, please. Previously, I followed the instructions in "Serving a Pylons app with IIS" from the Pylons Cookbook. Using the default web site IP address, I had Python 2.6.3 installed with the ISAPI-WSGI DLL in IIS, and successfully ran Pylons in a virtualenv through IIS using the IP address, no domain name. I would be very happy if I could run this working configuration for my domains while I must use Plesk. Tips and solutions appreciated.

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!
    * Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    * Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    * Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    * Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    * Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.
    Update: one problem I have found is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. It looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), we would of course prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • Can I copy/clone a function in JavaScript?

    - by Craig Stuntz
    I'm using jQuery with the validators plugin. I would like to replace the "required" validator with one of my own. This is easy:
      jQuery.validator.addMethod("required", function(value, element, param) {
          return myRequired(value, element, param);
      }, jQuery.validator.messages.required);
    So far, so good. This works just fine. But what I really want to do is call my function in some cases, and the default validator for the rest. Unfortunately, this turns out to be recursive:
      jQuery.validator.addMethod("required", function(value, element, param) {
          // handle comboboxes with empty guids
          if (someTest(element)) {
              return myRequired(value, element, param);
          }
          return jQuery.validator.methods.required(value, element, param);
      }, jQuery.validator.messages.required);
    I looked at the source code for the validators, and the default implementation of "required" is defined as an anonymous method, so there is no other (non-anonymous) reference to the function that I can use. Storing a reference to the function externally before calling addMethod and calling the default validator via that reference makes no difference. What I really need to do is to be able to copy the default required validator function by value instead of by reference. But after quite a bit of searching, I can't figure out how to do that. Is it possible? If it's impossible, then I can copy the source for the original function. But that creates a maintenance problem, and I would rather not do that unless there is no "better way."
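
    For what it's worth, the usual pattern here is not a by-value copy (functions are always reference values in JavaScript) but capturing the built-in once and re-invoking it with .call so that `this` stays bound to the validator instance, which is the part a bare reference call drops; a sketch reusing the names from the question:
      // Capture the built-in before addMethod overwrites jQuery.validator.methods.required.
      var defaultRequired = jQuery.validator.methods.required;

      jQuery.validator.addMethod("required", function (value, element, param) {
          // handle comboboxes with empty guids
          if (someTest(element)) {
              return myRequired(value, element, param);
          }
          // Delegate to the saved built-in; .call preserves `this` (the validator).
          return defaultRequired.call(this, value, element, param);
      }, jQuery.validator.messages.required);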

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions listed below. Many thanks in advance for your answers!
    * are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    * are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    * would you recommend to upgrade the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority)
    * can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
    * which strategy would you recommend to work on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house so development and database administration is done internally.

    Read the article

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance? I've been one of the maintainers of the perlfaq (Github), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository. I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files in the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate. Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files from the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the two repos, but that's something I could change, I think. What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice-versa, so the history and commit ids correspond in each.
    * Creating patches works, but it's also a lot of work to manage.
    * Git submodules seem to only work to pull in the entire external repo.
    * I haven't found something like svn's file externals, but that would work in both directions anyway.
    * I'd love to just fetch objects from one and cherry-pick them in the other.
    What's a good way to manage this?
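
    A sketch of the last idea (fetch objects from one repo and cherry-pick them in the other), assuming the two clones sit side by side and, as noted above, that the shared files are given the same paths in both repos; -x records the source commit id so the histories stay traceable:
      # one-time setup inside the perl clone
      git remote add perlfaq ../perlfaq
      git fetch perlfaq

      # bring a specific perlfaq commit over, keeping a pointer to the original sha
      git cherry-pick -x <sha-from-perlfaq>

      # the reverse direction works the same from inside the perlfaq clone
    The commit ids will not be literally identical in both repos (cherry-picked commits get new shas), but the -x trailer gives an explicit mapping between them.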

    Read the article

  • VS2010 and CSS: What is the best strategy to individually position form controls

    - by George
    OK, I have a ton of controls on my page that I need to individually place. I need to set a margin here, a padding there, etc. None of these particular styles that I want to apply will be applied to more than one control. What is the best practice for determining at which level the style is placed? My choices are:
    1) External CSS file.
    1A) Using ClientIDMode = Auto (the default), I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.
    1B) Using ClientIDMode = Predictable, I could determine what the ID will be for the controls of interest and create an ID selector (#ControlID{Style}). However, I fear maintenance issues from including/removing parent containers that would cause the ID to change.
    1C) Using ClientIDMode = Static, I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.
    2) I could place the style right on the control. The only disadvantage here, as I see it, is that the style info is brought down each time instead of being cached, which is what I'd get using an external CSS file.
    If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "it's nice to have all the CSS in one place"?
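
    For what option 1C looks like in practice in ASP.NET 4, a tiny sketch (control name, rule and values are made up); with ClientIDMode="Static" the rendered id is exactly the markup ID, so the stylesheet never has to chase auto-generated ctl00_... prefixes:
      <%-- in the .aspx --%>
      <asp:TextBox ID="StaticID_SearchBox" runat="server" ClientIDMode="Static" />

      /* in the external stylesheet */
      #StaticID_SearchBox { margin-left: 12px; padding: 2px 4px; }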

    Read the article

  • How can I extend this SQL query to find the k nearest neighbors?

    - by Smigs
    I have a database full of two-dimensional data - points on a map. Each record has a field of the geometry type. What I need to be able to do is pass a point to a stored procedure which returns the k nearest points (k would also be passed to the sproc, but that's easy). I've found a query at http://blogs.msdn.com/isaac/archive/2008/10/23/nearest-neighbors.aspx which gets the single nearest neighbour, but I can't figure how to extend it to find the k nearest neighbours. This is the current query - T is the table, g is the geometry field, @x is the point to search around, Numbers is a table with integers 1 to n:
      DECLARE @start FLOAT = 1000;
      WITH NearestPoints AS
      (
          SELECT TOP(1) WITH TIES *, T.g.STDistance(@x) AS dist
          FROM Numbers JOIN T WITH(INDEX(spatial_index))
          ON T.g.STDistance(@x) < @start*POWER(2,Numbers.n)
          ORDER BY n
      )
      SELECT TOP(1) * FROM NearestPoints
      ORDER BY n, dist
    The inner query selects the nearest non-empty region and the outer query then selects the top result from that region; the outer query can easily be changed to (e.g.) SELECT TOP(20), but if the nearest region only contains one result, you're stuck with that. I figure I probably need to recursively search for the first region containing k records, but without using a table variable (which would cause maintenance problems as you have to create the table structure and it's liable to change - there're lots of fields), I can't see how.
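
    If an upper bound on the search distance is acceptable, a simpler variant drops the region-doubling trick and lets TOP(@k) do the work; a sketch, where @k and @maxdist would be parameters of the sproc and @x is the input point as above (the WHERE clause is what allows the spatial index to prune):
      DECLARE @k INT = 20, @maxdist FLOAT = 10000;   -- example values

      SELECT TOP(@k)
             T.*,
             T.g.STDistance(@x) AS dist
      FROM   T WITH(INDEX(spatial_index))
      WHERE  T.g.STDistance(@x) <= @maxdist
      ORDER BY dist;
    Without such a bound, the Numbers-table query above would instead need its inner step changed to keep expanding until the chosen region holds at least @k rows before taking TOP(@k) from it.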

    Read the article

  • DAL layer: EF 4.0 or a plain data access layer with stored procedures?

    - by Harryboy
    Hello experts, I am working on a mid-to-large-size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure; patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:
    1) Speed of development with above-average performance
    2) Maintenance
    3) Future support and stability of the technology
    4) Performance
    Limitations:
    1) As we need to strictly go with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
    2) We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis.
    Questions: I have read many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or whatever you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck on a missing EF 4.0 feature or hit a serious performance issue; and in the reverse case, if we go ahead and choose ADO.NET, how long it would take to switch to EF 4.0. Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and please guide me on the above questions.

    Read the article

  • Troubleshooting failover cluster problem in W2K8 / SQL05

    - by paulland
    I have an active/passive W2K8 (64) cluster pair, running SQL05 Standard. Shared storage is on a HP EVA SAN (FC). I recently expanded the filesystem on the active node for a database, adding a drive designation. The shared storage drives are designated as F:, I:, J:, L: and X:, with SQL filesystems on the first 4 and X: used for a backup destination. Last night, as part of a validation process (the passive node had been offline for maintenance), I moved the SQL instance to the other cluster node. The database in question immediately moved to Suspect status. Review of the system logs showed that the database would not load because the file "K:\SQLDATA\whatever.ndf" could not be found. (Note that we do not have a K: drive designation.) A review of the J: storage drive showed zero contents -- nothing -- this is where "whatever.ndf" should have been. Hmm, I thought. Problem with the server. I'll just move SQL back to the other server and figure out what's wrong. Still no database. Suspect. Uh-oh. "Whatever.ndf" had gone into the bit bucket. I finally decided to just restore from the backup (which had been taken immediately before the validation test), so nothing was lost but a few hours of sleep. The questions:
    (1) Why did the passive node think the whatever.ndf files were supposed to go to drive "K:", when this drive didn't exist as a resource on the active node?
    (2) How can I get the cluster nodes "re-syncd" so that failover can be accomplished?
    I don't know that there wasn't a "K:" drive as a cluster resource at some time in the past, but I do know that this drive did not exist on the original cluster at the time of the resource move.

    Read the article

  • Is it safe to reuse javax.xml.ws.Service objects

    - by Noel Ang
    I have JAX-WS style web service client that was auto-generated with the NetBeans IDE. The generated proxy factory (extends javax.xml.ws.Service) delegates proxy creation to the various Service.getPort methods. The application that I am maintaining instantiates the factory and obtains a proxy each time it calls the targetted service. Creating the new proxy factory instances repeatedly has been shown to be expensive, given that the WSDL documentation supplied to the factory constructor, an HTTP URI, is re-retrieved for each instantiation. We had success in improving the performance by caching the WSDL. But this has ugly maintenance and packaging implications for us. I would like to explore the suitability of caching the proxy factory itself. Is it safe, e.g., can two different client classes, executing on the same JVM and targetting the same web service, safely use the same factory to obtain distinct proxy objects (or a shared, reentrant one)? I've been unable to find guidance from either the JAX-WS specification nor the javax.xml.ws API documentation. The factory-proxy multiplicity is unclear to me. Having Service.getPort rather than Service.createPort does not inspire confidence.
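
    A sketch of the caching arrangement being weighed here: build the Service once (that is the step that fetches and parses the WSDL) and hand each caller its own port, so nothing shared has to be reentrant; the WSDL URL, QName and FooPort interface are placeholders for the generated artifacts:
      import java.net.URL;
      import javax.xml.namespace.QName;
      import javax.xml.ws.Service;

      public final class FooServiceClient {
          // Built once per JVM; this is the expensive part the question wants to avoid repeating.
          private static final Service SERVICE;
          static {
              try {
                  SERVICE = Service.create(
                      new URL("http://example.com/foo?wsdl"),              // placeholder WSDL location
                      new QName("http://example.com/foo", "FooService"));  // placeholder service QName
              } catch (Exception e) {
                  throw new ExceptionInInitializerError(e);
              }
          }

          // Cheap relative to Service creation; each caller gets a distinct proxy.
          public static FooPort newPort() {
              return SERVICE.getPort(FooPort.class);
          }
      }
    Whether the Service object itself is thread-safe is left to the JAX-WS implementation, which is exactly the gap in the spec and API docs the question points at, so treating getPort as the only shared call (and verifying against the runtime in use) is the cautious reading.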

    Read the article

  • Finding usage of jQuery UI in a big ugly codebase

    - by Daniel Magliola
    I've recently inherited the maintenance of a big, ugly codebase for a production website. Poke-your-eyes-out ugly. And though it's big, it's mostly PHP code; it doesn't have much JS besides a few "ajaxy" things in the UI. Our main current problem is that the site is just too heavy. The homepage weighs in at 1.6 MB currently, so I'm trying to clean some stuff out. One of the main wasters is that every single page includes the jQuery UI library, but I don't think it's used at all. It's definitely not being used on the homepage or on most pages, so I want to include it only where necessary. I'm not really experienced with jQuery, I'm more of a Prototype guy, so I'm wondering: is there anything I could search for that would let me know where jQuery UI is being used? What I'm looking for is "common strings", component names, etc. For example, if this were Scriptaculous, I'd look for things like "Draggable", "Effect", etc. Any suggestions for jQuery UI? (Of course, if you can think of a more robust way of removing the tag from pages that don't use it without breaking everything, I'd love to hear about it.) Thanks!! Daniel
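
    A blunt but effective first pass is to grep the templates and scripts for the widget entry points jQuery UI exposes; a sketch (the method list is from memory and not exhaustive, so add to it if the hits come up empty):
      grep -rnE "\.(draggable|droppable|sortable|resizable|selectable|accordion|dialog|slider|tabs|datepicker|progressbar)\(" \
          --include='*.php' --include='*.js' --include='*.html' .
      # direct namespace use, e.g. $.ui.version checks or custom widgets
      grep -rnF '$.ui.' --include='*.php' --include='*.js' .
    If both come back empty for a page's templates, that page is a good candidate for dropping the script tag.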

    Read the article
