Search Results

Search found 7902 results on 317 pages for 'structure'.


  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx
    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/; however, I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of: my 64-bit development PC running Windows 8 Update with 8 GB of RAM, Visual Studio 2013 Ultimate with SP2, ReSharper, and GhostDoc Pro. My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. Being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the item of the C5 collection that was being tested. So what was the condition of the code at the start? Besides the sprawl of C# code not written to the StyleCop standard, there was: 1) placing of many classes within one physical file; 2) namespaces within namespaces that did not follow the project structure; 3) as already mentioned, duplication of class names across namespaces; 4) a copyright notice that sprawled but had to be preserved; 5) project sub-folders that were all lower case instead of initial-letter capitalised. The first step was to add a StyleCop header, plus the original header contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, without letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and to style-cop each file. ReSharper provides a very useful feature for doing this which also fixes missing "this." and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed. When all was done, one issue remained which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.
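    As an illustration that is not from the original post, a file header of the kind described above (a new StyleCop-style header followed by the original notice preserved in a region) might look like the following; the file name, company and namespace are placeholders:

        // <copyright file="HashBagTests.cs" company="PlaceholderCompany">
        //   New StyleCop-compliant header; the original header is preserved below.
        // </copyright>

        #region Original header
        // (original C5 copyright and licence text retained verbatim here)
        #endregion

        namespace C5.Tests.HashBag
        {
            /// <summary>Placeholder test class illustrating the header layout only.</summary>
            public class HashBagTests
            {
            }
        }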

    Read the article

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is "how do I find X?" Well, of course no one is really looking for "X", but typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM that is related to a couple of key words they have in mind. Here are three quick tips I give people: 1. Open up one of the OUM Views, click "Expand All", and then use your browser's search function to locate a key word. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a key word, e.g. Architecture. This is a fast and easy option to use, but it only searches the current OUM page. 2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your browser's settings, the PDF file will either open up in a new window or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for key words across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, e.g. Word, Excel. 3. Use your operating system's file index to search for key words. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is to make sure your local OUM folder structure is included in the Windows index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the index, e.g. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your key words, e.g. Unit Testing. The reason I use this option the most is because the search takes place across the entire content of the indexed folders, including linked files. Happy searching!

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Through the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest of the projects I've worked on and explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with I'm going to run through a little of the background of the problem and the motivations to re-design from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system. Motivations for Re-designing a Large Information System The system is one that's been in place for a number of years and was originally designed to do something significantly different from what it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems this one can be defined in four main areas of functionality: Input – adding information to the system; Storage – persisting information in an efficient, searchable structure; Output – delivering the information to the client; Control – management of the process. There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as: overall system reliability, system response time, failure isolation and recovery, maintainability of code and information, general extensibility to solve future problems, separation of business and product concerns, and new or improved features. The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign. Business Drivers Typically all software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine there must be real value delivered within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break down the project into more manageable chunks which allow more frequent deliverables and also value within a shorter timeframe. As the only thing of concern was the method for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are understood, is key to finding the best solutions.

    Read the article

  • Do functional generics exist and what is the correct name for them if they do?

    - by voroninp
    Consider the following generic class:

        public class EntityChangeInfo<EntityType, TEntityKeyType>
        {
            ChangeTypeEnum ChangeType { get; }
            TEntityKeyType EntityKey { get; }
        }

    Here EntityType unambiguously defines TEntityKeyType. So it would be nice to have some kind of types' map:

        public class EntityChangeInfo<EntityType, TEntityKeyType>
            with map
            <
                [ EntityType : Person -> TEntityKeyType : int ]
                [ EntityType : Car    -> TEntityKeyType : CarIdType ]
            >
        {
            ChangeTypeEnum ChangeType { get; }
            TEntityKeyType EntityKey { get; }
        }

    Another example is:

        public class Foo<TIn>
            with map
            <
                [ TIn : Person -> TOut1 : string, TOut2 : int, ..., TOutN : double ]
                [ TIn : Car    -> TOut1 : int,    TOut2 : int, ..., TOutN : Price ]
            >
        {
            TOut1 Prop1 { get; set; }
            TOut2 Prop2 { get; set; }
            ...
            TOutN PropN { get; set; }
        }

    The reasonable question: how can this be interpreted by the compiler? Well, for me it is just a shortcut for two structurally similar classes:

        public sealed class Foo<Person>
        {
            string Prop1 { get; set; }
            int Prop2 { get; set; }
            ...
            double PropN { get; set; }
        }

        public sealed class Foo<Car>
        {
            int Prop1 { get; set; }
            int Prop2 { get; set; }
            ...
            Price PropN { get; set; }
        }

    But besides this we could imagine some update of the Foo<>:

        public class Foo<TIn>
            with map
            <
                [ TIn : Person -> TOut1 : string, TOut2 : int, ..., TOutN : double ]
                [ TIn : Car    -> TOut1 : int,    TOut2 : int, ..., TOutN : Price ]
            >
        {
            TOut1 Prop1 { get; set; }
            TOut2 Prop2 { get; set; }
            ...
            TOutN PropN { get; set; }

            public override string ToString()
            {
                return string.Format("prop1={0}, prop2={1}, ..., propN={N-1}", Prop1, Prop2, ..., PropN);
            }
        }

    This can all seem quite superficial, but the idea came when I was designing the messages for our system. The very first class above is an example: many messages with the same structure should be discriminated by the EntityType. So the question is whether such a construct exists in any programming language?
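    As an aside that is not part of the original question: in C#, the intended pairing can be approximated by letting each entity type declare its key type through a generic interface and constraining the change-info class on it, so the compiler rejects mismatched combinations. A minimal sketch under that assumption (all type names below are illustrative):

        public enum ChangeTypeEnum { Created, Updated, Deleted }

        public interface IKeyed<TKey>
        {
            TKey Key { get; }
        }

        public sealed class CarIdType { }

        public class Person : IKeyed<int>
        {
            public int Key { get; set; }
        }

        public class Car : IKeyed<CarIdType>
        {
            public CarIdType Key { get; set; }
        }

        // TEntity fixes which TKey is legal: EntityChangeInfo<Person, int> compiles,
        // while EntityChangeInfo<Person, CarIdType> does not.
        public class EntityChangeInfo<TEntity, TKey> where TEntity : IKeyed<TKey>
        {
            public ChangeTypeEnum ChangeType { get; set; }
            public TKey EntityKey { get; set; }
        }

    This does not remove the second type parameter the way the hypothetical "map" syntax would, but it does make the compiler enforce the association.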

    Read the article

  • How to configure a longer version number in Artifactory

    - by claudine
    The version numbers for our jars have to be longer than x.x.x. We would rather need x.x.x.x to integrate an old-fashioned self-made mechanism. This is because we tag our software with x.x.x, and as soon as we have a delivery to a customer, one specific jar has to be built at exactly that point in time to fit another backend, which communicates with our program. For that reason this one jar has the version 2.3.4.1 when generated, and in the next delivery of the same version it is built and named 2.3.4.2. Now Artifactory cannot handle this and in some cases doesn't save more than x.x.x. So we thought of maybe editing the regular expression in the Maven repository layout (see attached screenshot), because testing the path in the field below shows that it cannot handle the version number. Of course, x.x.x still has to work for the rest of our jars. For example, here is the maven-metadata.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>com.firm</groupId>
          <artifactId>someid</artifactId>
          <version>1.5.1</version>
          <versioning>
            <latest>1.5.1</latest>
            <release>1.5.1</release>
            <versions>
              <version>1.4.62</version>
            </versions>
            <lastUpdated>20120926073942</lastUpdated>
          </versioning>
        </metadata>

    The folder structure looks like:

        someid
          1.4.62
          1.4.62.1
          1.4.62.2
          1.4.62.3

    If we deploy a new artifact version (1.4.62.1), the maven-metadata.xml contains the 1.4.62.1 version. But Artifactory overrides the version number (1.4.62.x) to (1.4.62) after an unspecified time. It seems that Artifactory only supports major, minor and revision numbers, and deletes the build number. We are now looking for a solution to disable this behavior. We use JFrog Artifactory version 2.5.0 (rev. 13086). Any ideas, maybe? Thanks in advance

    Read the article

  • Recommended learning path?

    - by stairmast0r
    First, my current standing: I know C++ at an.. advanced beginner level? I've gone through a book, I know the syntax well enough, I know a fair amount of standard library functions, and I've programmed some simple console stuff with it. I'd probably be able to program more with it if I knew how to structure a program, but I just can't seem to wrap my head around the whole concept of structuring something remotely complex. I've messed around with Java for a day or two, and the syntax was extremely easy to get the hang of, except that I didn't really know any functions. I'm plenty willing to learn, and to work hard to do so, but I don't really know where to go from here. Now, at the risk of sounding cliche, what I'd like to become is someone like the great three of id; Carmack, Romero, and Abrash. To be considered a genius. I believe anything can be learned, and nothing mentally limits anyone except lack of desire to learn. But I don't know how to learn this. They learned by doing, and making do with what resources they had. On the other hand, I have access to almost any books I want, access to the internet, and access to a more than capable computer and software. Should I learn more languages? Assembly? LISP? BASIC? Haskell? Should I dive straight into advanced topics like OpenGL? Or should I wait until I feel I've come closer to mastering the simpler things, like console programs, first? Should I follow tutorials? Should I follow books? Should I just dive into writing something and follow a reference manual as I go? What order should I do all this in? How should I do it? I want to completely master this; to be considered a genius. The most perfect career I can imagine is to start the next id. I have the drive to do it, I just don't know where to begin...

    Read the article

  • How odd is this function? It works in a test project but goes wrong in my project (Windows Socket) [closed]

    - by user67449
        int SockSend(DataPack &dataPack, SOCKET &sock, char *sockBuf)
        {
            int bytesLeft = 0, bytesSend = 0;
            int idx = 0;
            bytesLeft = sizeof(dataPack);
            // Copy the DataPack into sockBuf
            memcpy(sockBuf, &dataPack, sizeof(dataPack));
            while (bytesLeft > 0)
            {
                memset(sockBuf, 0, sizeof(sockBuf));
                bytesSend = send(sock, &sockBuf[idx], bytesLeft, 0);
                cout << "send() returned, bytesSend: " << bytesSend << endl;
                if (bytesSend == SOCKET_ERROR)
                {
                    cout << "Error at send()." << endl;
                    cout << "Error # " << WSAGetLastError() << " happened." << endl;
                    return 1;
                }
                bytesLeft -= bytesSend;
                idx += bytesSend;
            }
            cout << "DataPack sent." << endl;
            return 0;
        }

    This is the function I defined, which is used to send a user-defined structure DataPack. My code in the test project is as follows:

        char sendBuf[100000];
        int res = SockSend(dataPack, sockConn, sendBuf);
        if (res == 1)
        {
            cout << "SockSend() failed." << endl;
        }
        else
        {
            cout << "SockSend() succeeded." << endl;
        }

    My code in my current project is:

        err = SockSend(dataPackSend, sockConn, sockBuf);
        if (err == 1)
        {
            cout << "SockSend() failed." << endl;
            exit(0);
        }
        else
        {
            cout << "Sent DataPack number " << dataPackSend.packNum << endl;
        }

    Can you tell me where this function goes wrong? I would appreciate your answer.

    Read the article

  • Differences between a retail and wholesale website, and how difficult would it be to integrate them? [closed]

    - by kmy
    I was told to come here for some guidance. I know that my questions can be quite broad, but I just really want some kind of direction to go towards (links, articles, etc). So here it goes: for the past month I have been planning and started implementing a wholesale website, but plans got changed on me and I now need to allow both retail and wholesale orders, which will also depend on whether the account is a regular customer or also has a seller account. There are various types of products, and it has been decided that I will use subcategories as tables to organize them and allow various product-specific item fields. So here are some of my specific questions: How different is the database structure for each item to be available for both wholesale and retail? Will it just be adding a quantity column and having two different prices, or is it much more complicated? I am unaware if there are any price tables, but would that be more difficult? They use QuickBooks POS software; how difficult/inefficient will it be to update inventory quantities if they have around 2000-4000 products? And what would be the best way to extract the information and update the system? I know it exports an Excel spreadsheet, so maybe a suggested PHP plugin for it? How difficult is this project in your professional opinion, and how big should a team usually be? (At the moment it is just me.) What is the projected time for planning, implementation, and quality assurance in accordance with team size? I am an entry-level developer and I know that I do not have enough knowledge to direct myself on this website... What kind of web developer skills will I need to find to help me? (My company is willing to hire people to get this website done as fast as possible.) Also, what would be some good questions to ask the product stakeholder about what he wants from the website? (He has made it clear he is neither computer nor internet savvy...) Sorry for the amount of questions; I'm an entry-level web developer and do not have a senior to look up to for guidance. I have knowledge of HTML, CSS, JavaScript, jQuery, PHP, MySQL, and phpMyAdmin. I have never used any frameworks like Zend, Magento, etc., and am doing this all from scratch. So far the website is built in an object-oriented way, with MVC architecture to the best of my abilities, but I have many doubts because I really want to have this done right. If I have been unclear on some parts, please tell me and I'll add more detail. I'm sure I'll have additional questions later on; if anyone is open to that, please tell me. Thanks in advance!

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly to an ASP.NET application where the data is not accessed via SOA, meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 SP1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad. The EF concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can; the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance. Bypassing inheritance with Interfaces The first thing to know when trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } EntityTypes EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model EntityTypes EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model.
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that is what I would have thought as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault(); } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. They are great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitations, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery A Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 && dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other function downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), you invalidate the compiled query and force it to be re-parsed. So, the rule of thumb is to return a List<> of objects instead.
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time, you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or if you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = (from dog in YourContext.DogSet where dog.ID == id select dog).FirstOrDefault(); } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == false || dog.Age == myParams.age) && (myParams.checkName == false || dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal real-world search. - No one likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results: protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data and then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name. So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries (a small sketch follows below). - Use a fully-defined pre-compiled Linq query when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
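As a small illustration of the first option above (lambda-based query building without pre-compilation), here is a sketch that is not part of the original article; it reuses the article's YourContext/DogSet names and simply composes Where clauses at run time, so each distinct query is parsed by EF when it executes:

    // Assumes the same Entities model used throughout this article and a using System.Linq directive.
    protected List<Dog> GetSomeDogs(string name, int age, string city)
    {
        // Start from the full set and add filters only for the criteria supplied.
        IQueryable<Dog> dogs = YourContext.DogSet;

        if (!String.IsNullOrEmpty(city))
            dogs = dogs.Where(it => it.Owner.Address.City == city);
        if (!String.IsNullOrEmpty(name))
            dogs = dogs.Where(it => it.Name == name);
        if (age > 0)
            dogs = dogs.Where(it => it.Age == age);

        // Nothing here is pre-compiled, so there is no cached plan to invalidate,
        // but also no pre-compilation benefit.
        return dogs.ToList();
    }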
Singleton access The best way to deal with your context and entities across all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { YourEntities entity = new YourEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner whose ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Person owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issues, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and that will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail".
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return Get(id, ""); } static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +
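For reference, the IncludeMany(string) extension mentioned earlier is not shown in this excerpt; a minimal sketch consistent with its description (a comma-separated list of association paths, each chained through Include) could look like this — the exact name and behaviour are assumptions, not the author's original code:

    using System;
    using System.Data.Objects;

    public static class ObjectQueryExtensions
    {
        // Apply Include() once per comma-separated path, e.g. "Owner,FavoriteFood".
        public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string paths)
        {
            if (String.IsNullOrEmpty(paths))
                return query;

            foreach (string path in paths.Split(','))
            {
                string trimmed = path.Trim();
                if (trimmed.Length > 0)
                    query = query.Include(trimmed);
            }
            return query;
        }
    }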

    Read the article

  • "file not found" error while commiting

    - by AntonAL
    I have a working copy, checked out from an SVN repository. When I try to commit, I get the following error: svn: File not found: revision 57, path '/trunk/path/to/my/file/logo-mini.jpg' I've found this file in the repo and noticed that it has only one revision: 58. I don't understand why SVN complains about this file when it is present, and why it points to revision 57 instead of 58. I've also renamed the grand-grand-grand-parent folder of this file. Possibly this is the issue ... Update: the detailed error description that I've got from the Cornerstone app (Mac OS X): Description : Could not find the specified file. Suggestion : Check that the path you have specified is correct. Technical Information ===================== Error : V4FileNotFoundError Exception : ZSVNNoSuchEntryException Causal Information ================== Description : Commit failed (details follow): Status : 160013 File : subversion/libsvn_client/commit.c, 867 Description : File not found: revision 57, path '/trunk/assets/themes/base/article-content/images/logo-mini.jpg' Status : 160013 File : subversion/libsvn_fs_fs/tree.c, 663 So, I've renamed the "/trunk/assets/themes" directory to "/trunk/assets/skins" while improving the project structure. I've tried the following: updating the /trunk/assets/themes directory; cleaning; deleting it from the filesystem and checking out again; reverting the entire /trunk/assets/themes directory to the HEAD revision. Even this doesn't help. I'm still getting the same error and have got no results.

    Read the article

  • System State Backups using NTbackup fail with error 0x800423f4 (relating to volume shadow copy)

    - by Paul Zimmerman
    We have a Windows Server 2003 R2 running Service Pack 2. It is a domain controller (Global Catalog) and our main internal DNS server. We run a System State backup of the machine to back up Active Directory information and save the backup to a different server. This server has a single drive (C:), and we do have Shadow Copies enabled for the volume (which are completing successfully). The System State Backup is now failing with the following listed in the backup logs: Volume shadow copy creation: Attempt 1. "Event Log Writer" has reported an error 0x800423f4. This is part of System State. The backup cannot continue. Error returned while creating the volume shadow copy:800423f4 Aborting Backup. The operation did not successfully complete. When doing a vssadmin list writers, we sometimes get the following reported for the Event Log Writer (other times it says that it is in the state of "[1] Stable" with "No error"): Writer name: 'Event Log Writer' Writer Id: {eee8c692-67ed-4250-8d86-390603070d00} Writer Instance Id: {c7194e96-868a-49e5-ba99-89b61977753c} State: [8] Failed Last error: Retryable error We have tried disabling the event log service via the registry, rebooting, deleting the event log files from the drive, then re-enabling the service via the registry and rebooting, but this didn't seem to solve the issue. We also get an error message when in the event viewer when trying to open the log for the "File Replication Service" of "Unable to complete the operation on 'File Replication Service'. The security descriptor structure is invalid." I have searched the error via Google and tried a number of different things, but nothing has seemed to help. Any suggestions on what we might try to get the Event Log Writer to behave would be greatly appreciated!

    Read the article

  • Installing ikiwiki on nginx - fastcgi/fcgi wrapper

    - by meder
    My ultimate goal is to set up ikiwiki; my current goal is to get an fcgi wrapper working for nginx so I can move on to the next step... The ikiwiki page points to this page as an example of an fcgi wrapper: http://technotes.1000lines.net/?p=23 So far I've installed the ikiwiki and libfcgi-perl modules through aptitude: aptitude install libfcgi-perl, aptitude install ikiwiki. It installed those packages as well as some minimal dependency packages. So, as the next step following the guide at technotes, I grabbed http://technotes.1000lines.net/fastcgi-wrapper.pl but I'm not sure where to actually place this file... do I run it as a service? The script makes a socket file in /var/run/nginx but that directory does not exist... do I manually create it? So in addition to the .pl file for the cgi wrapper, I also need to define a separate cgi file for parameters. My conf looks like this:

        server {
            listen 80;
            server_name notes.domain.org;
            access_log /www/notes/public_html/notes.domain.org/log/access.log;
            error_log /www/notes/public_html/notes.domain.org/log/error.log;
            location / {
                root /www/notes/public_html/notes.domain.org/public/;
                index index.html;
            }
        }

    I don't have a cgi-bin directory: where exactly should I create it within my structure? Regarding that, I'd obviously have to update the block below before I include it in my conf, but I'm just not exactly sure how this would work out.

        # /cgi-bin configuration
        location ~ ^/cgi-bin/.*\.cgi$ {
            gzip off;
            fastcgi_pass unix:/var/run/nginx/perl_cgi-dispatch.sock;
            fastcgi_param SCRIPT_FILENAME /www/blah.com$fastcgi_script_name;
            include fastcgi_params;
        }

    Also, since the user is www-data and /var/run is root-owned, what's the proper way of giving it access? Any tips appreciated.

    Read the article

  • SQL Server Express 2008 R2 installation error on Windows 7

    - by Shai Sherman
Hello, I created an install script that installs SQL Server 2008 R2 on Windows XP SP3, Windows Vista, and Windows 7. One of the commands used in the installation performs a silent install of SQL Server 2008 R2. When I install it on Windows XP everything works just fine, but when I try to install it on Windows 7 I get an error. What am I doing wrong? Here is the command line that I use:

Setup.exe /ConfigurationFile=Mysetup.ini

Mysetup.ini file:

------------------------------------- Start of ini file ---------------------------------
;SQL SERVER 2008 R2 Configuration File
;Version 1.0, 5 May 2010
;
[SQLSERVER2008]
; Specify the Instance ID for the SQL Server features you have specified. SQL Server directory structure, registry structure, and service names will reflect the instance ID of the SQL Server instance.
INSTANCEID="MSSQLSERVER"
; Specifies a Setup work flow, like INSTALL, UNINSTALL, or UPGRADE. This is a required parameter.
ACTION="Install"
; Specifies features to install, uninstall, or upgrade. The list of top-level features include SQL, AS, RS, IS, and Tools. The SQL feature will install the database engine, replication, and full-text. The Tools feature will install Management Tools, Books online, Business Intelligence Development Studio, and other shared components.
FEATURES=SQLENGINE
; Displays the command line parameters usage
HELP="False"
; Specifies that the detailed Setup log should be piped to the console.
INDICATEPROGRESS="False"
; Setup will not display any user interface.
QUIET="False"
; Setup will display progress only without any user interaction.
QUIETSIMPLE="True"
; Specifies that Setup should install into WOW64. This command line argument is not supported on an IA64 or a 32-bit system.
;X86="False"
; Specifies the path to the installation media folder where setup.exe is located.
;MEDIASOURCE="z:\"
; Detailed help for command line argument ENU has not been defined yet.
ENU="True"
; Parameter that controls the user interface behavior. Valid values are Normal for the full UI, and AutoAdvance for a simplified UI.
; UIMODE="Normal"
; Specify if errors can be reported to Microsoft to improve future SQL Server releases. Specify 1 or True to enable and 0 or False to disable this feature.
ERRORREPORTING="False"
; Specify the root installation directory for native shared components.
;INSTALLSHAREDDIR="D:\Program Files\Microsoft SQL Server"
; Specify the root installation directory for the WOW64 shared components.
;INSTALLSHAREDWOWDIR="D:\Program Files (x86)\Microsoft SQL Server"
; Specify the installation directory.
;INSTANCEDIR="D:\Program Files\Microsoft SQL Server"
; Specify that SQL Server feature usage data can be collected and sent to Microsoft. Specify 1 or True to enable and 0 or False to disable this feature.
SQMREPORTING="False"
; Specify a default or named instance. MSSQLSERVER is the default instance for non-Express editions and SQLExpress for Express editions. This parameter is required when installing the SQL Server Database Engine (SQL), Analysis Services (AS), or Reporting Services (RS).
INSTANCENAME="SQLEXPRESS"
SECURITYMODE=SQL
SAPWD=SystemAdmin
; Agent account name
AGTSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE"
; Auto-start service after installation.
AGTSVCSTARTUPTYPE="Manual"
; Startup type for Integration Services.
;ISSVCSTARTUPTYPE="Automatic"
; Account for Integration Services: Domain\User or system account.
;ISSVCACCOUNT="NT AUTHORITY\NetworkService"
; Controls the service startup type setting after the service has been created.
;ASSVCSTARTUPTYPE="Automatic"
; The collation to be used by Analysis Services.
;ASCOLLATION="Latin1_General_CI_AS"
; The location for the Analysis Services data files.
;ASDATADIR="Data"
; The location for the Analysis Services log files.
;ASLOGDIR="Log"
; The location for the Analysis Services backup files.
;ASBACKUPDIR="Backup"
; The location for the Analysis Services temporary files.
;ASTEMPDIR="Temp"
; The location for the Analysis Services configuration files.
;ASCONFIGDIR="Config"
; Specifies whether or not the MSOLAP provider is allowed to run in process.
;ASPROVIDERMSOLAP="1"
; A port number used to connect to the SharePoint Central Administration web application.
;FARMADMINPORT="0"
; Startup type for the SQL Server service.
SQLSVCSTARTUPTYPE="Automatic"
; Level to enable FILESTREAM feature at (0, 1, 2 or 3).
FILESTREAMLEVEL="0"
; Set to "1" to enable RANU for SQL Server Express.
ENABLERANU="1"
; Specifies a Windows collation or an SQL collation to use for the Database Engine.
SQLCOLLATION="SQL_Latin1_General_CP1_CI_AS"
; Account for SQL Server service: Domain\User or system account.
SQLSVCACCOUNT="NT Authority\System"
; Default directory for the Database Engine user databases.
;SQLUSERDBDIR="K:\Microsoft SQL Server\MSSQL\Data"
; Default directory for the Database Engine user database logs.
;SQLUSERDBLOGDIR="L:\Microsoft SQL Server\MSSQL\Data\Logs"
; Directory for Database Engine TempDB files.
;SQLTEMPDBDIR="T:\Microsoft SQL Server\MSSQL\Data"
; Directory for the Database Engine TempDB log files.
;SQLTEMPDBLOGDIR="T:\Microsoft SQL Server\MSSQL\Data\Logs"
; Provision current user as a Database Engine system administrator for SQL Server 2008 R2 Express.
ADDCURRENTUSERASSQLADMIN="True"
; Specify 0 to disable or 1 to enable the TCP/IP protocol.
TCPENABLED="1"
; Specify 0 to disable or 1 to enable the Named Pipes protocol.
NPENABLED="0"
; Startup type for Browser Service.
BROWSERSVCSTARTUPTYPE="Automatic"
; Specifies how the startup mode of the report server NT service. When
; Manual - Service startup is manual mode (default).
; Automatic - Service startup is automatic mode.
; Disabled - Service is disabled
;RSSVCSTARTUPTYPE="Automatic"
; Specifies which mode report server is installed in.
; Default value: "FilesOnly"
;RSINSTALLMODE="FilesOnlyMode"
; Accept SQL Server 2008 R2 license terms
IACCEPTSQLSERVERLICENSETERMS="TRUE"
;setup.exe /CONFIGURATIONFILE=Mysetup.ini /INDICATEPROGRESS
--------------------------- End of ini file -------------------------------------

And I get this error:

2010-08-31 18:05:53 Slp: Error result: -2068119551
2010-08-31 18:05:53 Slp: Result facility code: 1211
2010-08-31 18:05:53 Slp: Result error code: 1
2010-08-31 18:05:53 Slp: Sco: Attempting to create base registry key HKEY_LOCAL_MACHINE, machine
2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey
2010-08-31 18:05:53 Slp: Sco: Attempting to open registry subkey Software\Microsoft\PCHealth\ErrorReporting\DW\Installed
2010-08-31 18:05:53 Slp: Sco: Attempting to get registry value DW0200
2010-08-31 18:05:53 Slp: Submitted 1 of 1 failures to the Watson data repository

What is the meaning of this? What do I need to do to fix this problem? Here is the summary file:

Overall summary:
  Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
  Exit code (Decimal): -2068119551
  Exit facility code: 1211
  Exit error code: 1
  Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
  Start time: 2010-08-31 18:03:44
  End time: 2010-08-31 18:05:51
  Requested action: Install
  Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt
  Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.1600.1%26EvtType%3d0x6121810A%400xC24842DB

Machine Properties:
  Machine name: NVR
  Machine processor count: 2
  OS version: Windows 7
  OS service pack:
  OS region: United States
  OS language: English (United States)
  OS architecture: x86
  Process architecture: 32 Bit
  OS clustered: No

Product features discovered:
  Product  Instance  Instance ID  Feature  Language  Edition  Version  Clustered

Package properties:
  Description: SQL Server Database Services 2008 R2
  ProductName: SQL Server 2008 R2
  Type: RTM
  Version: 10
  SPLevel: 0
  Installation location: C:\Disk1\setupsql\x86\setup\
  Installation edition: EXPRESS

User Input Settings:
  ACTION: Install
  ADDCURRENTUSERASSQLADMIN: True
  AGTSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
  AGTSVCPASSWORD: *
  AGTSVCSTARTUPTYPE: Disabled
  ASBACKUPDIR: Backup
  ASCOLLATION: Latin1_General_CI_AS
  ASCONFIGDIR: Config
  ASDATADIR: Data
  ASDOMAINGROUP:
  ASLOGDIR: Log
  ASPROVIDERMSOLAP: 1
  ASSVCACCOUNT:
  ASSVCPASSWORD: *
  ASSVCSTARTUPTYPE: Automatic
  ASSYSADMINACCOUNTS:
  ASTEMPDIR: Temp
  BROWSERSVCSTARTUPTYPE: Automatic
  CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini
  CUSOURCE:
  ENABLERANU: True
  ENU: True
  ERRORREPORTING: False
  FARMACCOUNT:
  FARMADMINPORT: 0
  FARMPASSWORD: *
  FEATURES: SQLENGINE
  FILESTREAMLEVEL: 0
  FILESTREAMSHARENAME:
  FTSVCACCOUNT:
  FTSVCPASSWORD: *
  HELP: False
  IACCEPTSQLSERVERLICENSETERMS: True
  INDICATEPROGRESS: False
  INSTALLSHAREDDIR: C:\Program Files\Microsoft SQL Server\
  INSTALLSHAREDWOWDIR: C:\Program Files\Microsoft SQL Server\
  INSTALLSQLDATADIR:
  INSTANCEDIR: C:\Program Files\Microsoft SQL Server\
  INSTANCEID: MSSQLSERVER
  INSTANCENAME: SQLEXPRESS
  ISSVCACCOUNT: NT AUTHORITY\NetworkService
  ISSVCPASSWORD: *
  ISSVCSTARTUPTYPE: Automatic
  NPENABLED: 0
  PASSPHRASE: *
  PCUSOURCE:
  PID: *
  QUIET: False
  QUIETSIMPLE: True
  ROLE: AllFeatures_WithDefaults
  RSINSTALLMODE: FilesOnlyMode
  RSSVCACCOUNT: NT AUTHORITY\NETWORK SERVICE
  RSSVCPASSWORD: *
  RSSVCSTARTUPTYPE: Automatic
  SAPWD: *
  SECURITYMODE: SQL
  SQLBACKUPDIR:
  SQLCOLLATION: SQL_Latin1_General_CP1_CI_AS
  SQLSVCACCOUNT: NT Authority\System
  SQLSVCPASSWORD: *
  SQLSVCSTARTUPTYPE: Automatic
  SQLSYSADMINACCOUNTS:
  SQLTEMPDBDIR:
  SQLTEMPDBLOGDIR:
  SQLUSERDBDIR:
  SQLUSERDBLOGDIR:
  SQMREPORTING: False
  TCPENABLED: 1
  UIMODE: AutoAdvance
  X86: False

Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\ConfigurationFile.ini

Detailed results:
  Feature: Database Engine Services
  Status: Failed: see logs for details
  MSI status: Passed
  Configuration status: Failed: see details below
  Configuration error code: 0x0A2FBD17@1211@1
  Configuration error description: The process cannot access the file because it is being used by another process.
  Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\Detail.txt

Rules with failures:
Global rules:
Scenario specific rules:
Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20100831_180236\SystemConfigurationCheck_Report.htm

What should I do and why does this problem occur?
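For reference, this is roughly how I plan to retry it on the Windows 7 box, from an elevated command prompt (right-click, "Run as administrator"), in case UAC is a factor. This is only a sketch - the folder is a placeholder for wherever the installation media actually sits, and the switches are the same ones already used above:

rem run from the folder that contains Setup.exe, in an elevated prompt
cd /d C:\Disk1\setupsql
Setup.exe /ConfigurationFile=Mysetup.ini /INDICATEPROGRESS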
Thanks, Shai.

    Read the article

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
The certificate store on my Win7 box is constantly hanging. Observe:

C:\>1.cmd

C:\>certutil -? | findstr /i ping
  -ping             -- Ping Active Directory Certificate Services Request interface
  -pingadmin        -- Ping Active Directory Certificate Services Admin interface

C:\>set PROMPT=$P($t)$G

C:\(13:04:28.57)>certutil -ping
CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
CertUtil: The system cannot find the file specified.

C:\(13:04:58.68)>certutil -pingadmin
CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
CertUtil: The system cannot find the file specified.

C:\(13:05:28.79)>set PROMPT=$P$G

C:\>

Explanations:
- The first command shows that certutil has -ping and -pingadmin parameters.
- Trying either ping parameter fails with a 30-second timeout (the current time is shown in the prompt).

This is a serious problem. It breaks all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks.

P.S. 1.cmd is simply a batch file of these commands:

certutil -? | findstr /i ping
set PROMPT=$P($t)$G
certutil -ping
certutil -pingadmin
set PROMPT=$P$G

EDIT 1: I have succeeded in pinning down the single Windows API call that causes the problem - DsGetDcName. According to WinDbg, certutil -ping invokes it like so:

PDOMAIN_CONTROLLER_INFO pdci;
DWORD ret = ::DsGetDcName(NULL, NULL, NULL, NULL, DS_DIRECTORY_SERVICE_PREFERRED, &pdci);

On my workstation it times out after 30 seconds and then returns error code 1355, which is ERROR_NO_SUCH_DOMAIN - "No domain controller is available for the specified domain or the domain does not exist." On another machine, which happens to be a Windows Server 2003, it returns almost immediately with the correct domain controller name inside the returned DOMAIN_CONTROLLER_INFO structure. Now the question is: what is missing on my workstation for that API to find the correct domain controller?
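In the meantime, a quick way to exercise the same DC lookup outside of certutil is the nltest helper (a sketch only - MYDOMAIN is a placeholder for the real domain name):

rem same discovery that DsGetDcName performs, via the in-box nltest tool
nltest /dsgetdc:MYDOMAIN
rem /force bypasses the DC locator cache and re-runs discovery
nltest /dsgetdc:MYDOMAIN /force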

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We run the task from the origin server, connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same 100 Mbps network switch. The process sometimes completes all the way through without error. More often than not, however, at some point during the process (seemingly at random), robocopy fails with the error "The specified network name is no longer available." It waits 30 seconds, tries the file again, and eventually gives up after a number of retries. The process repeats at the next scheduled interval and may complete... or not. When this occurs I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. Nothing else on the network uses this share. My question is: what does this error mean specifically? Why does the share "drop off" and become inaccessible? Is there a way to prevent it and make the file mirroring more stable?
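For reference, the mirroring command is along these lines (a sketch - the source and destination paths are placeholders, and the restartable/retry switches are the ones we are experimenting with, not necessarily what the task runs today):

rem /MIR mirrors the tree, /Z copies in restartable mode, /R and /W shorten the default retry behaviour
robocopy D:\SourceFolder \\DESTSERVER\MirrorShare /MIR /Z /R:5 /W:30 /NP /LOG+:C:\Logs\robocopy-mirror.log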

    Read the article

  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
We've been seeing strange errors with Volume Shadow Copy services on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mount point in the C:\WINDOWS\Temp\ folder, which I believe is used by VSS to mount a writeable image file. To summarize:

- The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
- The Virtual Server log reports errors during the Post Backup phase
- VSS reports errors backing up a mount point of unknown origin
- The mount point causes NTFS and ftdisk errors

The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks.

Here is the writer state:

Writer name: 'Microsoft Virtual Server 2005 Writer'
Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
State: [11] Failed
Last error: Retryable error

From the Virtual Server log:

Virtual Server - Vss Writer - Event ID 1035: The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

From the Application log:

VSS - None - Event ID 12290: Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

From the System log:

Ntfs - Disk - Event ID 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49...

My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs and no way to turn off online backups. Has anyone seen this kind of thing before, or can you point me in the right direction for getting more information about that mount point and its origins? I haven't been able to trace what application is creating it.
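For the record, the basic checks I am running look roughly like this (a sketch using the standard in-box tools; nothing here is specific to Virtual Server):

rem confirm the state of the Virtual Server writer and any lingering shadow copies
vssadmin list writers
vssadmin list shadows
rem list volume GUIDs and their current mount points, to spot the stray mount under C:\WINDOWS\Temp
mountvol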

    Read the article

  • Add Network Printer drivers in Windows 7/Server 2008 R2?

    - by Matias Nino
I'm running a 64-bit Windows 7 / Windows 2008 R2 workstation that I just installed. I need to add a printer that is shared on the network from a 32-bit Windows 2000 print server. It is an HP LaserJet 5Si, the drivers for which HP tells me are built into Windows 7/R2. However, whenever I connect to the printer or try to add it, I get a screen that, after I click OK, asks me to locate the driver. How can I possibly locate a driver that is SUPPOSED TO BE NATIVELY SUPPORTED on Windows 7/R2? The tough part is that this printer is one of many shared on a server and does not have a direct IP address. Even worse: I have no access to the print server, so I cannot put the 64-bit drivers on there. Any ideas?

UPDATE: HP doesn't make a Vista driver either. HP claims the printer is natively supported by Vista and 7, which is true, because I am able to create a local printer on a fake TCP/IP port and Windows lets me pick the proper driver. However, when adding from the network, Windows does not let me select a driver and demands an INF. I tried searching the entire sub-structure of the C:\Windows directory and could not find any INF files that contain HP information. The INF might be located somewhere on the Windows installation DVD, but all the files on the DVD are compressed and unrecognizable.

UPDATE #2: I installed the proper printer driver as a local printer (with no printer attached) and it installed. However, this did not change the fact that it STILL asks me to provide drivers when connecting to the networked printer.
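One workaround I am testing is pre-staging the x64 driver in the local driver store with printui before connecting. This is only a sketch - the INF path is a placeholder for wherever a 64-bit driver package might actually live, and the /h and /v values are my assumption for an x64 Type 3 user-mode driver:

rem stage the 64-bit driver locally so the network connection no longer prompts for an INF
rundll32 printui.dll,PrintUIEntry /ia /m "HP LaserJet 5Si" /h "x64" /v "Type 3 - User Mode" /f C:\Drivers\HP5Si\driver.inf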

    Read the article

  • IPCop Packet Mangling

    - by Zenham
I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21), with the Zerina OpenVPN addon installed.

What I need to do: there are three network interfaces, currently set up as RED (WAN), GREEN (LAN, 192.168.20.0/24) and ORANGE (remote network, 10.1.20.0/24). The ORANGE interface is a direct fiber link to another organization.

Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing resources on the 10.1.20.x network, need to be mangled so they appear to be coming from the 10.1.20.0/24 network (with return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets destined for those IPs so they end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125).

I need to insert whatever rules I come up with into the IPCop ruleset through /etc/rc.local; I'm just not sure how I should structure this. There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently consist of a single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting them to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have the rules which do the IP matching and mangling.

Am I approaching this right? If so, I'm not very familiar with mangle and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA!

Edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets, so I guess I could alternatively put the rules in there.
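The shape I have in mind, if the NETMAP target is available in IPCop's kernel, is something like this (a sketch only - eth2 stands in for whatever the ORANGE interface really is, and mapping the whole /24 rather than just the 150 specific hosts is my simplification):

# 1:1 map LAN source addresses onto the orange range on the way out (NETMAP keeps the host part)
iptables -t nat -A POSTROUTING -o eth2 -s 192.168.20.0/24 -j NETMAP --to 10.1.20.0/24
# replies on established connections are translated back automatically by conntrack;
# connections initiated from the orange side would need a matching PREROUTING NETMAP rule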

    Read the article

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, though not before the RAID partitions had already gotten messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

# fdisk -l /dev/sda
fdisk: device has more than 2^32 sectors, can't use all of them
Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
255 heads, 63 sectors/track, 267349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      267350  2147483647+  ee  EFI GPT

# parted /dev/sda print
Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system     Name  Flags
 1      131kB   2550MB  2550MB  ext4                  raid
 2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
 5      4840MB  3001GB  2996GB                        raid

I replaced the failed drive and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[1]
      2097088 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
      2490176 blocks [2/1] [_U]
unused devices: <none>

It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
Current partition structure:
 Partition                  Start        End    Size in sectors
 1 P Linux Raid                 256    4980735      4980480 [md0]
 2 P Linux Raid             4980736    9175039      4194304 [md1]
Invalid RAID superblock
 5 P Linux Raid             9453280 5860519007   5851065728
 5 P Linux Raid             9453280 5860519007   5851065728

# After a quick search
Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
 Partition                  Start        End    Size in sectors
 D MS Data                      256    4980607      4980352 [1.41.12-2197]
 D Linux Raid                   256    4980735      4980480 [md0]
 D Linux Swap               4980736    9174895      4194160
 D Linux Raid               4980736    9175039      4194304 [md1]
>P MS Data                  9481056 5858437983   5848956928 [1.41.12-2228]

And listing the files on the last partition in the list shows all of my files intact. What should I do?
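What I am considering next, before writing anything to the disks, is roughly this (a sketch - it assumes the large *5 partitions are the data half of the old md2, and keeps everything read-only until that is confirmed):

# inspect the RAID superblocks on the large partitions
mdadm --examine /dev/sda5 /dev/sdb5
# try to bring the array up read-only from one member, even though it is degraded
mdadm --assemble --run --readonly /dev/md2 /dev/sdb5
# mount the ext4 filesystem read-only to check the data
mount -o ro /dev/md2 /mnt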

    Read the article

  • Is it possible to extract contents of a ThinApp container?

    - by rsk82
Is it possible to extract the contents of such packages when they were made by somebody else? If there is no general way of extracting these archives - which can be one huge exe, or a tiny starter exe plus a huge packed *.bin file holding the main files of the app that is to be run in a portable way - is there an option in the compilation *.ini file (or some other way) to make such a package extractable?

I remember reading somewhere that somebody created a tiny program (it was mentioned on the VMware forums as far as I can remember; it was a crude thing coded for private use and I never managed to download it) that sits alongside the main portable application. If the application has an open/save file dialog, you can navigate to that helper program, which virtually sits in Program Files next to the main app, and run it from within the app. It would scan all the files it can see, somehow distinguish real files from virtual ones, and save all the virtual files in a structure similar to the initial compilation folder from which the portable app was created.

I know that this is a very round-about way of doing things, but maybe it is the only feasible one. Nevertheless, any news on this front? Can antiviruses somehow unpack these things? Maybe they have to buy code or a license for it from VMware?

Edit: I found http://communities.vmware.com/thread/257433?tstart=600 and am still trying to make sense of it. Wrong - that thread was about moving an old version of ThinApp to Win7.

    Read the article

  • restrict access to IIS virtual directory from root website

    - by Senthil
I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting server with IIS7. One of the domains is designated the "primary domain" by my hosting provider (GoDaddy) and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this:

root/
--Default.aspx
--SomeFile.aspx
--domain2folder/
----Default.aspx
----Domain2SomeFile.aspx

So, if I type domain1.com, I see the regular Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what an IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder, I see domain2's website! I don't want that to be shown when the folder is reached through domain1; only visitors using domain2.com should be able to see those contents. How can I do that? Hope I am making sense. Thanks.
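One direction I am experimenting with is request filtering on the root site, so that the /domain2folder segment is hidden when it is reached through domain1.com. This is only a sketch - "Domain1Site" is a placeholder for whatever the root site is actually called in IIS, and I am not yet sure how this interacts with the domain2.com binding:

rem hide the domain2folder segment from requests handled by the root site
%windir%\system32\inetsrv\appcmd.exe set config "Domain1Site" -section:system.webServer/security/requestFiltering /+"hiddenSegments.[segment='domain2folder']"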

    Read the article

  • Online Storage and security concerns

    - by Megge
I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250 GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

Requirements summarized: storage: 2 TB; users: ~15; file sizes: < 100 GB; it should be easily reachable (mountable as a network drive, or at least with solid "communication" software).

My first question: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is overkill, as I know of no hoster which allows 2 TB on a small 2 GHz dual-core machine with 2 GB RAM (that would be more than enough by far - I just need a lot of space), and connecting it via NFS or FTP over the Internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question will be irrelevant, but the second one is the more important one anyway; you don't have to crawl for hours through hosting services, just recommend some based on experience.) Livedrive, for example, offers 5 TB for 17 €/month; I'd be happy with 2 TB for 20 €. The caveat is that it doesn't allow multiple users, which leads me to my second question: where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has their own space plus a public space" thing), secure, and fast? I'd tend towards (S)FTP. The problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and a single user leads me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?

    Read the article

  • smartctl not actually running self tests?

    - by canzar
I want to run the smartctl self-tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using

sudo smartctl /dev/sda -d megaraid,0 -a

and I see that SMART is available and enabled on all the drives. I have tried to run self-tests using

sudo smartctl /dev/sda -d megaraid,0 -t short

and

sudo smartctl /dev/sda -d megaraid,0 -t long

I have also tried them on all of the drives 0-5. No matter what I try, when I run

sudo smartctl /dev/sda -d megaraid,0 -l selftest

I always get the same result, which reports that I have never run a self-test:

/dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat'
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

From what I read, I should have no problem running the short and long self-tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i RAID array who could lend some insight into what is causing the problem? (smartmontools release 5.40, dated 2009-12-09 at 21:00:32 UTC)
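For completeness, this is the condensed form of what I have been running against all six members (a sketch: queue the short test on each disk first, then read the logs back some minutes later):

# queue a short self-test on every member of the array
for i in 0 1 2 3 4 5; do sudo smartctl /dev/sda -d megaraid,$i -t short; done
# later, read back the self-test log for each member
for i in 0 1 2 3 4 5; do sudo smartctl /dev/sda -d megaraid,$i -l selftest; done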

    Read the article

  • How do I keep a table up to date across 4 db's to be used in SQL Replication Filtering?

    - by Refracted Paladin
I have a WinForms data-entry application that uses four separate databases. It is an occasionally connected app that uses merge replication (SQL 2005) to stay in sync, and that part is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see the code below), but I had to create a mapping table in order to do so. This mapping table consists of three columns: a PrimaryID (Guid), WorkerName (varchar), and ClientID (int).

The problem is that I need this table present in all FOUR databases in order to use it in the filter since, to my knowledge, views and cross-database queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in one database and then use triggers to keep it updated in the other three. Since it is part of the filter, I also have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?

SELECT <published_columns> FROM [dbo].[tblPlan]
WHERE [ClientID] IN
    (SELECT ClientID FROM [dbo].[tblWorkerOwnership] WHERE WorkerID = SUSER_SNAME())

Filters can also be chained together; this next one sits below the first one, so it only pulls from the first one's filtered set:

SELECT <published_columns> FROM [dbo].[tblPlan]
INNER JOIN [dbo].[tblHealthAssessmentReview]
    ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]

P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app."

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd using my shell script. After that I restarted the Gluster daemon and the volume. Then I checked whether all the peers are connected:

root@GlusterNode1a:~# gluster peer status
Number of Peers: 3

Hostname: gluster-1b
Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
State: Peer in Cluster (Connected)

Hostname: gluster-2b
Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
State: Peer in Cluster (Connected)

Hostname: gluster-2a
Uuid: 72405811-15a0-456b-86bb-1589058ff89b
State: Peer in Cluster (Connected)

I can see the mounted volume's size change on all the nodes when I run df, so new data is coming in. But recently I noticed error messages in the application log:

copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
readfile(/storage/1438227/dat): failed to open stream: Input/output error
unlink(/storage/189457/23/dat): No such file or directory

Finally, I found out that some bricks are offline:

root@GlusterNode1a:~# gluster volume status
Status of volume: storage
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster-1a:/storage/1a                            24009   Y       1326
Brick gluster-1b:/storage/1b                            24009   N       N/A
Brick gluster-2a:/storage/2a                            24009   N       N/A
Brick gluster-2b:/storage/2b                            24009   N       N/A
Brick gluster-1a:/storage/3a                            24011   Y       1332
Brick gluster-1b:/storage/3b                            24011   N       N/A
Brick gluster-2a:/storage/4a                            24011   N       N/A
Brick gluster-2b:/storage/4b                            24011   N       N/A
NFS Server on localhost                                 38467   Y       24670
Self-heal Daemon on localhost                           N/A     Y       24676
NFS Server on gluster-2b                                38467   Y       4339
Self-heal Daemon on gluster-2b                          N/A     Y       4345
NFS Server on gluster-2a                                38467   Y       1392
Self-heal Daemon on gluster-2a                          N/A     Y       1402
NFS Server on gluster-1b                                38467   Y       2435
Self-heal Daemon on gluster-1b                          N/A     Y       2441

What can I do about that? I need to fix it.

Note: CPU and network usage on all four nodes is about the same.
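What I am thinking of trying next is roughly this (a sketch - it assumes the brick processes simply did not come back after the rename, and that my GlusterFS version supports these subcommands):

# restart any brick processes that are down for the volume
gluster volume start storage force
# confirm every brick now reports Online = Y
gluster volume status storage
# then let self-heal bring the previously offline bricks back in sync
gluster volume heal storage full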

    Read the article

< Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >