Search Results

Search found 7338 results on 294 pages for 'useful'.


  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx

    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/; however, I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of: my 64-bit development PC running Windows 8 Update with 8 GB of RAM, Visual Studio 2013 Ultimate with SP2, ReSharper, and GhostDoc Pro. My first attempt had to be abandoned due to a collision of class names that broke one of the unit tests. Being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the item of the C5 collection that was being tested. So what was the condition of the code at the start? Besides the sprawl of C# code not written to the StyleCop standard, there was: 1) the placing of many classes within one physical file; 2) namespaces within namespaces that did not follow the project structure; 3) as already mentioned, duplication of class names across namespaces; 4) a copyright notice that sprawled but had to be preserved; 5) project sub-folders that were all lower case instead of initial-letter capitalised. The first step was to add a StyleCop heading, plus the original heading contained within a region, to every file. The next step was to run GhostDoc Pro using its “Document File” option on every file, but not letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file, collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and to style-cop each file. ReSharper provides a very useful feature for doing this, which also fixes missing “this.” and moves using statements inside the namespace. Some classes required minimal work, whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed. When all was done, one issue remained, which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.
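
    As a rough illustration of the header treatment described above (the file name and wording are hypothetical, not taken from the actual C5 solution), each file ended up with a StyleCop-compliant heading followed by the preserved original notice inside a region:

    ```csharp
    // <copyright file="HashDictionaryTests.cs" company="C5 Project">
    //   See the original C5 copyright notice in the region below.
    // </copyright>

    #region Original copyright notice
    /*
       Copyright (c) the C5 contributors.
       ... (original licence text preserved unchanged) ...
    */
    #endregion

    namespace C5.Tests.HashDictionary
    {
        // StyleCop-compliant code, one class per file.
    }
    ```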

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw & update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. However, I'm not sure what the best way to chain animations / sound effects / other timed general events is. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded like so: Update(){ Animation anim1 = new Animation(.....); Animation anim2 = new Animation(.....); anim1.Run(); if(anim1.State == AnimationState.Complete) anim2.Run(); if(anim2.State == AnimationState.Complete) // Load 'PlayScreen' screen } But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!'. But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it? To signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could still communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad. I've also tried using events - call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion, the first animation raises an event, which sets animation 2's bool flag to true (and so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just the paradigm shift from web to game development that is making me break out in a serious case of the stupids.
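
    One common middle ground between the hardcoded version and a full AnimationManager is a small sequence runner owned by the screen itself, so no back-references are needed. The sketch below assumes the poster's own Animation type with Run(), Update(GameTime) and a State property (names taken from the question; an extra NotStarted state is assumed); the rest is illustrative:

    ```csharp
    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework; // for GameTime (XNA, as implied by the question)

    // Runs queued animations one at a time; each starts only when the
    // previous one completes, and an event fires when the whole chain ends.
    public class AnimationSequence
    {
        private readonly Queue<Animation> steps = new Queue<Animation>();

        public event Action Completed; // e.g. switch to the real PlayScreen here

        public void Enqueue(Animation animation) { steps.Enqueue(animation); }

        public void Update(GameTime gameTime)
        {
            if (steps.Count == 0) return;

            Animation current = steps.Peek();
            if (current.State == AnimationState.NotStarted)
                current.Run();

            current.Update(gameTime);

            if (current.State == AnimationState.Complete)
            {
                steps.Dequeue();
                if (steps.Count == 0 && Completed != null)
                    Completed();
            }
        }
    }
    ```

    The screen builds the sequence once (in its constructor or LoadContent), calls sequence.Update(gameTime) from its own Update, and subscribes to Completed to hand control to the player - no manager, and no Active flags spread across the animations.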

    Read the article

  • The Minimalist Approach to Content Governance - Manage Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Most people would probably agree that creating content is the enjoyable part of the content life cycle. Management, on the other hand, is generally not. This is why we thankfully have an opportunity to leverage metadata, security and other settings that have been applied or inherited in the prior parts of our governance process. In the interests of keeping this process pragmatic, there is little day-to-day activity that needs to happen here. Most of the activity that happens post-creation will occur in the final "Retire" phase, in which content may be archived or removed. The Manage phase will focus on updating content and the metadata associated with it - specifically around ownership. Oftentimes the largest issues with content ownership occur when a content creator leaves an organization or changes roles within an organization.   1. Update Content Ownership (as needed and on a quarterly basis) Why - Without updating content to reflect ownership changes, it will be impossible to continue to meaningfully govern the content. How - Run reports against the metadata (creator, and department to which the creator belongs) on content items for a particular user that may have left the organization or changed job roles. If the content is without an owner, connect with the department marked as responsible. The content's ownership should be reassigned, or, if the content is no longer needed by that department for some reason, it can be archived and/or deleted. With a minimal investment it is possible to create reports that use an LDAP or Active Directory system to contrast all noted content owners against the users that exist in the directory; the delta will indicate which content needs new ownership assigned (a minimal sketch of this comparison follows below). Impact - This implicitly keeps your repository and search collection clean. A very large percentage of content that ends up no longer being useful to an organization falls into this category. This management phase (automated if possible) should be completed every quarter, or as needed. The impact of actually following through with this phase is substantial and will provide end users with a better experience when browsing and searching for content.
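
    For illustration only, the owner-versus-directory comparison described above boils down to a set difference. GetContentOwners() and GetDirectoryUsers() are hypothetical stand-ins for the repository metadata report and the LDAP/Active Directory query:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class OwnershipDeltaReport
    {
        static void Main()
        {
            // Hypothetical data sources: owners recorded in content metadata,
            // and accounts that still exist in the directory.
            HashSet<string> contentOwners = GetContentOwners();
            HashSet<string> directoryUsers = GetDirectoryUsers();

            // The delta: owners who no longer exist in the directory.
            foreach (string orphaned in contentOwners.Except(directoryUsers))
                Console.WriteLine("Content owned by '{0}' needs reassignment or archiving.", orphaned);
        }

        static HashSet<string> GetContentOwners()
        {
            // Placeholder: in practice, a report over the repository's metadata.
            return new HashSet<string> { "jsmith", "mjones", "kdoe" };
        }

        static HashSet<string> GetDirectoryUsers()
        {
            // Placeholder: in practice, an LDAP / Active Directory query.
            return new HashSet<string> { "jsmith", "kdoe" };
        }
    }
    ```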

    Read the article

  • Is the Observer pattern adequate for this kind of scenario?

    - by Omega
    I'm creating a simple game development framework with Ruby. There is a node system. A node is a game entity, and it has position. It can have children nodes (and one parent node). Children are always drawn relative to their parent. Nodes have a @position field. Anyone can modify it. When the position is modified, the node must update its children accordingly to properly draw them relative to it. @position contains a Point instance (a class with x and y properties, plus some other useful methods). I need to know when a node's @position's state changes, so I can tell the node to update its children. This is easy if the programmer does something like this: @node.position = Point.new(300,300) Because it is equivalent to calling this: # Code in the Node class def position=(newValue) @position = newValue update_my_children # <--- I know that the position changed end But, I'm lost when this happens: @node.position.x = 300 The only one that knows that the position changed is the Point instance stored in the @position property of the node. But I need the node to be notified! It was at this point that I considered the Observer pattern. Basically, Point is now observable. When a node's position property is given a new Point instance (through the assignment operator), it will stop observing the previous Point it had (if any), and start observing the new one. When a Point instance gets a state change, all observers (the node owning it) will be notified, so now my node can update its children when the position changes. A problem is when this happens: @someNode.position = @anotherNode.position This means that two nodes are observing the same point. If I changed one node's position, the other's would change as well. To fix this, when a position is assigned, I plan to create a new Point instance, copy the passed argument's x and y, and store my newly created point instead of storing the passed one. Another problem I fear is this: somePoint = @node.position somePoint.x = 500 This would, technically, modify @node's position. I'm not sure if anyone would be expecting that behavior. I'm under the impression that people see Point as some kind of primitive rather than an actual object. Is this approach even reasonable? Reasons I'm feeling skeptical: I've heard that the Observer pattern should be used with, well, many observers. Technically, in this scenario there is only one observer at a time. When assigning a node's position as another's (@someNode.position = @anotherNode.position), where I create a whole new instance rather than storing the passed point, it feels hackish, or even inefficient.
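
    For what it's worth, the copy-on-assign plus single-observer idea can be sketched quite compactly. The version below is a transliteration into C# (the language used elsewhere on this page - the poster's code is Ruby), with an event standing in for Ruby's observer wiring; all names are illustrative:

    ```csharp
    using System;

    public class Point
    {
        private double x, y;

        // Observer hook: raised whenever X or Y changes.
        public event Action Changed;

        public Point(double x, double y) { this.x = x; this.y = y; }

        public double X { get { return x; } set { x = value; RaiseChanged(); } }
        public double Y { get { return y; } set { y = value; RaiseChanged(); } }

        private void RaiseChanged() { if (Changed != null) Changed(); }
    }

    public class Node
    {
        private Point position;

        public Node()
        {
            position = new Point(0, 0);
            position.Changed += UpdateChildren;
        }

        public Point Position
        {
            get { return position; }
            set
            {
                position.Changed -= UpdateChildren;       // stop observing the old point
                position = new Point(value.X, value.Y);   // defensive copy, so two nodes
                position.Changed += UpdateChildren;       // never share one Point
                UpdateChildren();
            }
        }

        private void UpdateChildren()
        {
            // Reposition children relative to this node...
        }
    }
    ```

    Note that the aliasing case from the question (grabbing node.Position and mutating it) still notifies the node here, since the getter hands out the observed instance - whether that is desirable is exactly the judgement call the poster raises.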

    Read the article

  • Help yourself... if you like

    - by rachelp
    At Red Gate we enjoy talking to our customers. Really! If you've read recent blog posts by members of some of our customer-facing teams, you'll have spotted the pleasure they take in their work. In case you missed those posts, here they are: From our Finance team: Finance: Friends, not foes! From our reception desk: The Front line of Communication However, we recognise that sometimes our customers would like to be able to solve their problems or answer their questions without talking to us - they're in a hurry, it's outside office hours... or perhaps they just prefer not to pick up the phone and call.   Self-service customer care So we've begun a programme of work to enable more self-service; whether it's finding the answer to a "how do I...?" question or getting access to a record of what product licenses they own, we want to make it much easier for our customers to get hold of this information for themselves. If they want to.   Phase 1: make it easier to find information We decided to start by tackling findability. We've got loads of useful information on our website, but it's sometimes difficult to find, so we've been working on improving our site search. Step 1 has been to replace the search engine, clean up the search UI, and make it consistent across the site. We're nearly there! The idea is that if we improve the site search it will be easier - and much more pleasant - for people to find the information they need. The new search will go live some time in April, and then we'll be gathering feedback, looking at web analytics (more about this in an earlier article), and working out what improvements we still need to make. We'd love to hear what you think, so do give your feedback or drop us a line. Or pick up the phone and call, if you like.   What do you think? While I've got your attention, I'd love to hear what people think about self-service customer care. Do you like to call, email, live chat... or do you prefer to dig around and find out answers yourself? Who's getting it right: what self-service sites do you like? P.S. Watch this space for news of phase 2.

    Read the article

  • Visual Studio 2010/2012 Context Menus and a Keyboard

    - by SergeyPopov
    As a software developer, I spend a lot of time using Visual Studio. I have to say that I am completely satisfied with Visual Studio in general. Nevertheless, sometimes Visual Studio starts annoying me. One issue which poisoned my existence for a long time is that the context menu behavior in VS2010 is a little different than it was in VS2005/2008. Unfortunately, in VS2012 this behavior remains the same as in VS2010. So, what is the issue? Working with Visual Studio, I use the keyboard in most cases. I also use the Apps key on the keyboard to open context menus in the code editor. Moreover, I got used to certain key sequences a long time ago, and I press the keys without even thinking. In VS2008, the mouse pointer position didn't affect context menu navigation if I used the keyboard. Every time I opened a context menu, I was sure that, for example, the "Apps, Down, Down, Enter, Up, Enter" key sequence would always invoke the "Organize Usings > Remove and Sort" function. But in VS2010, this behavior has been changed. If the mouse pointer is located over an open context menu, the menu item under the mouse pointer becomes selected immediately! So, now the "Apps, Down, Down, Enter, Up, Enter" key sequence will not always lead to the expected result. In some cases, the result may be a little scary. If you are using the VisualSVN extension, this key sequence may invoke the "Revert whole file" function. Of course, this is not a fatal problem, because the "Undo" function restores all the changes, but this behavior strongly annoys me. In Visual Studio 2012, the context menu behavior is a little different than in VS2010, but the mouse pointer position still affects keyboard navigation in the context menu, and this behavior is still annoying. I tried to find a way to change this behavior, but I didn't manage to find the answer quickly. So I decided to go the direct route and wrote a small utility which fixes this issue. This utility watches for the Apps key, and if the key is pressed in Visual Studio, the utility moves the mouse pointer to the top of the screen before opening the context menu. You can find binaries and the source code of this utility here: http://code.google.com/p/vs-ctx-menu-fix/downloads/list This utility works fine in Windows 7 and Windows 8 x64. I wrote the first version in January, 2011; now I just added Visual Studio 2012 support. I hope you will find this utility useful! :)
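
    The core trick the author describes can be sketched in a few dozen lines. This is a hypothetical reconstruction of the idea (not the utility's actual source, which is at the link above): a low-level Win32 keyboard hook watches for the Apps key and parks the pointer at the top of the screen so it cannot pre-select a menu item:

    ```csharp
    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms; // Application.Run provides the message loop the hook needs

    static class AppsKeyFix
    {
        const int WH_KEYBOARD_LL = 13;
        const int WM_KEYDOWN = 0x0100;
        const int VK_APPS = 0x5D;

        delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

        [DllImport("user32.dll")]
        static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

        [DllImport("user32.dll")]
        static extern bool SetCursorPos(int x, int y);

        [DllImport("kernel32.dll")]
        static extern IntPtr GetModuleHandle(string lpModuleName);

        static LowLevelKeyboardProc hookProc = HookCallback; // keep a reference so the delegate is not collected

        static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
        {
            // The first DWORD of KBDLLHOOKSTRUCT is the virtual-key code.
            if (nCode >= 0 && (int)wParam == WM_KEYDOWN && Marshal.ReadInt32(lParam) == VK_APPS)
            {
                // A real version would first check that Visual Studio is the foreground window.
                SetCursorPos(0, 0);
            }
            return CallNextHookEx(IntPtr.Zero, nCode, wParam, lParam);
        }

        [STAThread]
        static void Main()
        {
            SetWindowsHookEx(WH_KEYBOARD_LL, hookProc, GetModuleHandle(null), 0);
            Application.Run(); // pump messages so the hook keeps firing
        }
    }
    ```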

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly to an ASP.NET application where the data is not accessed via SOA - meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0, shipped with Visual Studio 2008 SP1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF, and life is not bad. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes, persisted in 2 ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by the EF, and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance, or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then, when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. OK, this is bad. The EF concept is to allow you to create your object structure without (or with as little as possible) consideration of the actual database implementation of your tables. It completely fails at this. So, recommendations? Avoid inheritance if you can; the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it. And there are ways to implement mechanisms that are similar to inheritance. Bypassing inheritance with Interfaces The first thing to know when trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class as a base class. Don't even try it; it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define Associations between EF entities when you don't know at design time what the type of the entity will be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } EntityTypes EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model public EntityTypes EntityType { get { return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model.
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some helper functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2Sql. There are ways to improve it, however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you will save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.) An aside here in case you don't know what EntitySQL is: it is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained elsewhere. OK, so here is how to do it using ObjectQuery: string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...
Includes Let's say you want to have the data for the dog owner returned by the query, to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy, and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load (a sketch of one possible implementation appears after the parametrized-queries discussion below). If we try to do it with CompiledQuery, however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); because, as mentioned above, Include() only wants to see a single path in the string, and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again; this time it is because the EF cannot parse IncludeMany: it is not one of the functions that it recognizes, it is an extension. OK, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 Includes, and pass each separated string in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. OK, then, but wait a minute. Can't we return an ObjectQuery<Dog> with CompiledQuery? Then set the includes on that?
Well, that is what I would have thought as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault(); } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query, because it is an entirely new one now! So the expression tree needs to be reparsed, and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. They are great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery: Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 && dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries, as they are not compiled the same way as with the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some function downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), you will invalidate the compiled query and force it to re-parse. So, the rule of thumb is to return a List<T> of objects instead.
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query, and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query - practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time, you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. A dramatic difference, and it is up to you to decide if it is OK for your first user to take a hit, or if you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = (from dog in YourContext.DogSet where dog.ID == id select dog).FirstOrDefault(); } No, ad-hoc Linq queries are not cached, and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (!myParams.checkAge || dog.Age == myParams.age) && (!myParams.checkName || dog.Name == myParams.name) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benefits of a pre-compiled query.
The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name = @Name "; if( age > 0 ) query = query + " and dog.Age = @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation; - each different combination of parameters generates a different query, which will need to be pre-compiled when it is first run (in this case there are only 4 different possible queries - no params, age-only, name-only and both params - but you can see that there can be way more with a real-world search); - no one likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City = @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data, then allow for filtering. Problems: - it could lead to serious data transfer if you are not careful about your subset; - you can only filter on the data that you returned, which means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name. So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - use lambda-based query building when you don't care about pre-compiling your queries; - use a fully-defined pre-compiled Linq query when your object structure is not too complex; - use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits); - use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db).
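
As for the IncludeMany(string) extension referenced earlier: its implementation is not shown in this excerpt, but one plausible shape (an assumption, not the original post's code) is simply to split the comma-separated list and chain the built-in single-path Include() call:

```csharp
using System;
using System.Data.Objects; // ObjectQuery<T> in Entity Framework 1.0

public static class ObjectQueryExtensions
{
    // Splits "Owner,FavoriteFood" into individual association paths and
    // applies the standard single-path Include() for each of them.
    public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string paths)
    {
        if (String.IsNullOrEmpty(paths))
            return query;

        foreach (string path in paths.Split(','))
            query = query.Include(path.Trim()); // Include returns a new ObjectQuery

        return query;
    }
}
```
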
Singleton access The best way to deal with your context and entities across all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "YourModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { YourEntities entity = new YourEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together when further queries are made to the database, which is what is of interest here. For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Person owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail".
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object public class Dog { static public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need, any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return Get(id, ""); } static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • chkdsk "An unspecified error occurred (696e647863686b2e e19)"

    - by Ex Umbris
    System is Win7x64 Pro on Core i7-920, 12GB. I'm experiencing some system flakiness and am trying to pin down the cause. SMART shows zero bad sectors, zero pending reallocations on all drives. Memory tests show no problems. Chkdsk fails in various different ways: When run from a normal command line (no /f option), it gets to 63% and then hangs. When run on boot (autocheck), it hangs immediately on starting - actually, the countdown timer (Press any key to skip chkdsk) gets to 1 second and the system hangs. When run from the F8 "Repair System" option (the Win7 "recovery console"), with /f, it runs to about 63% (end of stage 2) and then fails as follows:

      Volume label is OS.
      CHKDSK is verifying files (stage 1 of 3)...
      5068288 file records processed.
      File verification completed.
      308 large file records processed.
      0 bad file records processed.
      2 EA records processed.
      77 reparse records processed.
      CHKDSK is verifying indexes (stage 2 of 3)...
      63 percent complete. (6078872 of 7562028 index entries processed)
      An unspecified error occurred (696e647863686b2e e19).
      Unable to obtain a handle to the event log.

    Googling and searching on Technet for the error code and "Unable to obtain a handle to the event log" both turn up nothing useful. Anybody have any info on what the problem is?

    Read the article

  • SQL Server Express service is not starting

    - by Mahdi Ghiasi
    I bought my first VPS yesterday, and I installed Microsoft SQL Server 2012 Express on it. Then I restarted my VPS, but the SQL Server service didn't start. I tried to start it manually, but it can't start. What is the problem? How do I solve it? P.S.: This is my first time managing a server, and I'm a newbie; if you need any further details about this, please leave a comment and I'll update the question. Update 1: Here are some log details from Event Viewer that I thought might be useful for this problem: FCB::Open failed: Could not open file e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\MSDBData.mdf for file number 1. OS error: 3(The system cannot find the path specified.). The resource database build version is 11.00.3000. This is an informational message only. No user action is required. FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\MSDBLog.ldf'. Diagnose and correct the operating system error, and retry the operation. Starting up database 'model'. FCB::Open failed: Could not open file e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\model.mdf for file number 1. OS error: 3(The system cannot find the path specified.). FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\modellog.ldf'. Diagnose and correct the operating system error, and retry the operation. I'm confused about these e:\ paths; my VPS has just one C:\ drive, so what is e:\?

    Read the article

  • What exactly is an X-YMailISG header?

    - by iainH
    Finally... our emails are being seen by Yahoo! as not junk anymore. Hurray! However, I notice that the Yahoo! receiving MTA adds in an X-YMailISG header. It's very large... 2**10 bits? Now that I've invested too large a chunk of my waking life in crafting our email headers, I'm curious to know what an X-YMailISG header is. Can anybody tell me? Does it pose any security / authenticity issues? There's very little intelligible from Google results. Background: After many days tweaking TXT records in our domain's DNS zone file for SPF and DKIM, I have at last succeeded in generating email from our Drupal site that Yahoo! no longer marks as X-YahooFilteredBulk, and the excellent service [email protected] returns results that show the emails passing SPF, DKIM and Sender-ID checks and appearing to SpamAssassin as ham. Yahoo! even adds a Received-SPF: pass header. Useful links: http://www.goldfisch.at/knowwiki/howtos/dkim-filter http://old.openspf.org/wizard.html Strangely enough, the SPF TXT record needed/allowed a blank key/name field in our registrar's DNS management panel, whereas the DKIM record needed {selector}._domainkey as the key/name of the DKIM strings.
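
    For readers unfamiliar with the shape of those records, here is an illustrative pair of zone-file entries (values are placeholders, not the poster's real records); the SPF TXT sits at the zone apex, which is why the name field is blank, while the DKIM public key lives under {selector}._domainkey:

    ```
    ; SPF: published as a TXT record at the zone apex (blank/@ name field)
    @                 IN TXT "v=spf1 a mx ip4:203.0.113.10 ~all"

    ; DKIM: published at <selector>._domainkey, carrying the public key
    mail._domainkey   IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB...IDAQAB"
    ```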

    Read the article

  • Problems with cross forest authentication in SQL Reporting

    - by chunkyb2002
    We're currently running a SQL 2008 R2 cluster with Reporting Services running, all for use with System Center Operations Manager 2007 R2 (RU3). Our users are on different domains to the SCOM and SQL servers (we have two domains, as we are in the process of a domain migration). We have no problems at all with users accessing reports via the SCOM console or the web interface if they are on the new domain, which runs at the 2008 R2 functional level. However, users on the old domain (which runs at a 2003 functional level) cannot access reports in SCOM or via the web interface (http://sqlserver/reports). The error we get is: An error occurred when invoking the authorization extension. (rsAuthorizationExtensionError) For more information about this error navigate to the report server on the local server machine, or enable remote errors Taking the error's advice, we logged on to the SQL server as a user on the old domain (which works fine!) and then tried to authenticate with reporting via the web interface, which produces this most useful of errors: An error occurred when invoking the authorization extension. (rsAuthorizationExtensionError) The creator of this fault did not specify a Reason. Things we've tried: Recreating the trust between domains. Ensuring the SQL Reporting service account was a member of the Windows Authorization Access Group on the 2003 domain. Adding users on the 2003 domain explicitly to the Reporting Users group on the SQL server. Has anyone come across this issue before, perhaps in a different scenario? If so, how was it resolved? Thanks in advance for any help.

    Read the article

  • High CPU usage by 'svchost.exe' and 'coreServiceShell.exe'

    - by kush.impetus
    I have a laptop running Windows 7 Ultimate 32-bit. For the past few days, my laptop has been facing a serious problem. Whenever I connect to the Internet, either svchost.exe or coreServiceShell.exe or both hog the CPU. coreServiceShell.exe consumes a lot of RAM also. Going into the details, I found that the high CPU usage of svchost.exe is caused by the Network Location Awareness service, and the high CPU usage of coreServiceShell.exe is caused by Trend Micro Titanium Internet Security 2012. That kind'a makes me think that Trend Micro may be the root of the problem. After further testing, I found that if I use IE or Firefox to browse the Internet, things are normal immediately after connecting. But if I use Google Chrome, coreServiceShell.exe hogs both CPU and RAM. At this point, if I disconnect the Internet, the CPU and RAM usage by coreServiceShell.exe continues to be high until I close Chrome. Also, when I close Chrome while the Internet is connected, svchost.exe continues to hog the CPU but coreServiceShell.exe leaves the race. That makes me think that Chrome is the root of the problem, but again, tracing coreServiceShell.exe takes me back to Trend Micro Internet Security. Stopping protection in Trend Micro Internet Security doesn't help either (I am not able to stop its services, though). I have updated Chrome, but no help. I just can't figure out who the culprit is. I can't do without Google Chrome (except, of course, by not using it) because of its immensely useful and indispensable features, both during browsing and development. Secondly, I can't uninstall the Trend Micro Internet Security suite, since it still has a few months before it expires and is providing me reliable protection. What could be the cause of the problem, and what can I do to resolve it? Thanks in advance

    Read the article

  • Problems with repositories on CentOS 3.9

    - by rodnower
    Hello, I have CentOS 3.9 for i386. When I try to install something with yum, e.g.: yum install firefox or yum install firefox* or yum list firefox and so on, I get: +++++++++++++++++++ yum info firefox Gathering header information file(s) from server(s) Server: CentOS-3 - Addons Server: CentOS-3 - Base Server: CentOS-3 - Extras Server: CentOS-3 - Updates Server: Jason's Utter Ramblings Repo Finding updated packages Downloading needed headers Looking in Available Packages: Looking in Installed Packages: +++++++++++++++++++ Some time ago I had CentOS 5, and I had a similar problem (except there it affected all packages, not just firefox), and I spent a lot of time finding different repositories and so on. Now I have CentOS 3, and there is nothing I can install with yum. This is the yum.conf content: +++++++++++++++++++ [main] cachedir=/var/cache/yum debuglevel=2 logfile=/var/log/yum.log pkgpolicy=newest distroverpkg=redhat-release installonlypkgs=kernel kernel-smp kernel-hugemem kernel-enterprise kernel-debug kernel-unsupported kernel-smp-unsupported kernel-hugemem-unsupported tolerant=1 exactarch=1 [utterramblings] name=Jason's Utter Ramblings Repo baseurl=http://www.jasonlitka.com/media/EL4/i386/ [base] name=CentOS-$releasever - Base baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ #released updates [update] name=CentOS-$releasever - Updates baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ #packages used/produced in the build but not released [addons] name=CentOS-$releasever - Addons baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/ #additional packages that may be useful [extras] name=CentOS-$releasever - Extras baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ #[centosplus] #name=CentOS-$releasever - Plus #baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/ #[testing] #name=CentOS-$releasever - Testing #baseurl=http://mirror.centos.org/centos/$releasever/testing/$basearch/ #[fasttrack] #name=CentOS-$releasever - Fasttrack #baseurl=http://mirror.centos.org/centos/$releasever/fasttrack/$basearch/ +++++++++++++++++++ The file is long, so I lightly edited it. So my question is: is there some "normal" repository that has all the basic things like firefox, which I can insert into this file so that it all works fine? Thank you very much in advance.

    Read the article

  • error reading keytab file krb5.keytab

    - by Banjer
    I've noticed these Kerberos keytab error messages on both SLES 11.2 and CentOS 6.3: sshd[31442]: pam_krb5[31442]: error reading keytab 'FILE: / etc/ krb5. keytab' /etc/krb5.keytab does not exist on our hosts, and from what I understand of the keytab file, we don't need it. Per this Kerberos keytab introduction: A keytab is a file containing pairs of Kerberos principals and encrypted keys (these are derived from the Kerberos password). You can use this file to log into Kerberos without being prompted for a password. The most common personal use of keytab files is to allow scripts to authenticate to Kerberos without human interaction, or store a password in a plaintext file. This sounds like something we do not need, and which is perhaps better, security-wise, not to have. How can I keep this error from popping up in our system logs? Here is my krb5.conf, if it's useful: banjer@myhost:~> cat /etc/krb5.conf # This file managed by Puppet # [libdefaults] default_tkt_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC default_tgs_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC preferred_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC default_realm = FOO.EXAMPLE.COM dns_lookup_kdc = true clockskew = 300 [logging] default = SYSLOG:NOTICE:DAEMON kdc = FILE:/var/log/kdc.log kadmind = FILE:/var/log/kadmind.log [appdefaults] pam = { ticket_lifetime = 1d renew_lifetime = 1d forwardable = true proxiable = false retain_after_close = false minimum_uid = 0 debug = false banner = "Enter your current" } Let me know if you need to see any other configs. Thanks. EDIT This message shows up in /var/log/secure whenever a non-root user logs in via SSH or the console. It seems to only occur with password-based authentication. If I do a key-based ssh to a server, I don't see the error. If I log in as root, I do not see the error. Our Linux servers authenticate against Active Directory, so it's a hearty mix of PAM, Samba, Kerberos, and winbind that is used to authenticate a user.

    Read the article

  • Android SDK emulator freezes on a Mac running OS X 10.6 Snow Leopard

    - by Donald Burr
    I'm having trouble running the Android SDK on both of my Macs running OS X 10.6.2 Snow Leopard. This appears to be a 64 bit vs. 32 bit issue, as Snow Leopard now defaults to 64-bit everything, including the Java virtual machine. I found this webpage with instructions on how to get the Android tools to run in the 32-bit Java VM, and I am now able to run the Android GUI tool to download SDK files, create AVM's, etc. However, when I try the Hello World tutorial and get to the point where I run my application under the Android emulator, everything goes south. The emulator appears to start but it hangs (spinning beachball of death cursor) without displaying anything. (This only hangs the emulator; the rest of the system still works fine.) If I follow the exact same steps (minus the 32-bit java hack) in a Windows virtual machine, everything works fine. Googling didn't yield anything useful (except for the 32-bit java hack I spoke of earlier). This occurs on both my Mac Pro tower and 13" MacBook Pro. Does anyone have any suggestions?

    Read the article

  • Have I bricked my Sun V20z?

    - by David Mackintosh
    I have a small pile of Sun V20z computers. I was trying to update the SP and BIOS firmwares in order to bring them all up to the same standard - mostly to get the updated (i.e. actually useful) SP functionality - and figured that I would just do the BIOS while I was at it. For three of the four computers, it worked perfectly. However, after the BIOS update, the fourth system won't boot. I did this: batch05-mgmt $ sp get mounts Local Remote /mnt 10.16.0.8:/export/v20z batch05-mgmt $ platform set os state update-bios /mnt/sw_images/platform/firmware/bios/V1.35.3.2/bios.sp This command may take several minutes. Please be patient. Bios started Bios Flash Transmit Started Bios Flash Transmit Complete Bios Flash update Progress: 7 Bios Flash update Progress: 6 Bios Flash update Progress: 5 Bios Flash update Progress: 4 Bios Flash update Progress: 3 Bios Flash update Progress: 2 Bios Flash update Progress: 1 Bios Flash update complete batch05-mgmt $ platform set power state on This command may take several minutes. Please be patient. After an hour of waiting, it still won't start. The chassis powers on, but beyond the fans spinning up and the hardware POST of the drives, nothing appears to happen. So when I try to re-flash the BIOS (on the theory that maybe something went wrong): batch05-mgmt $ platform set os state update-bios /mnt/sw_images/platform/firmware/bios/V1.35.3.2/bios.sp This command may take several minutes. Please be patient. Bios started Error. The operation timed out. Have I bricked it?

    Read the article

  • Why is the System process listening on Port 80?

    - by Seth Spearman
    I am running Windows 7 RC1. I have multiple issues getting IIS to work on my system, and today, when I installed a new application and tried to load it at http://localhost/MyApplication, I got absolutely no errors and no page load -- just a pretty, white, blank page. I did some digging and found something about another process listening on port 80, so I did a scan using netstat -aon | findstr 0.0:80 and discovered that PID 4 was listening on that port. PID 4 does not show in Task Manager, so I fired up Process Explorer, which showed me that PID 4 is the System process. (Multiple Google searches seem to indicate that System always uses PID 4.) Since then I am basically stuck. I have no idea why System needs port 80 or what to do about it. If you google the following strings you will find two helpful Experts-Exchange articles at the top of the search results, and you can read them for some helpful information. (If I gave the direct URL to the pages then Experts-Exchange would ask you to pay... but when you click on the results from a Google search you can scroll all the way to the bottom to read the exchanges.) Here are the Google searches:

    "System Process is listening on port 80 (Vista)"
    "SYSTEM Process is listening on Port 80 and Preventing IIS Default Website from Running"

    The last entry from the first result showed how to do a trace of http.sys at the following URL: http://blogs.msdn.com/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx

    The trace showed nothing useful. Any thoughts?
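    PID 4 on port 80 usually means the kernel-mode http.sys driver holds the port on behalf of some service. A quick way to see what is registered with it (a sketch -- commands I'd expect to work on Windows 7, worth verifying on RC1) from an elevated prompt:

    rem Show URL reservations and active registrations held by http.sys:
    netsh http show servicestate
    rem If something expendable owns the binding, stopping the HTTP driver
    rem frees port 80 (this also stops services that depend on it):
    net stop http /y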

    Read the article

  • 32-bit Ubuntu or 64-bit w/Intel Atom D510 w/4GB RAM?

    - by T.J. Crowder
    (I've seen this question and some related ones, and perhaps this is a duplicate, although part of my question is specific to the Atom D510.) I'm going to be installing Ubuntu on a new silent desktop as my latest (and hopefully last) attempt to switch from Windows to Linux for at least most everyday tasks. The new machine is entirely passively cooled, but as a consequence, not astonishingly powerful -- an Atom D510 (dual-core, 1.6GHz, HT) on Intel's D510MO board. That's fine; I won't use it for gaming, (much) video editing, etc. It's a 64-bit processor and I'm maxing the board out at 4GB of RAM (hey, that 1.6GHz CPU needs all the help it can get), which naturally raises the question of whether to install Ubuntu 64-bit or 32-bit (and if the latter, either live with the missing RAM or do the PAE kernel dance). Although I've used Linux on servers for years, I'm very nearly a Linux desktop newbie and am not currently in the mood to fight driver wars and such. So if I'm setting myself up for failure with 64-bit, I'll live with the missing ~0.8GB or fiddle with PAE. But if 64-bit is entirely "ready," great, I'm there. So: Do most mainstream apps (now) play nicely with 64-bit Linux? I can't help but notice the "AMD" in the ISO image filename ubuntu-10.04-desktop-amd64.iso, and I know AMD led the way on this stuff -- does Ubuntu 64-bit play nicely with Intel processors? Just generally, would you recommend one or the other? (And if anyone has any experience with Ubuntu specifically on the D510 [32-bit or 64-bit] which might lead me one way or t'other, that would be useful.) Thanks in advance.
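    As a sanity check before picking an image, you can confirm the CPU reports the 64-bit "lm" (long mode) flag -- a quick sketch runnable from any live CD:

    # "lm" in the /proc/cpuinfo flags means the CPU can run a 64-bit kernel.
    grep -qw lm /proc/cpuinfo && echo "64-bit capable" || echo "32-bit only"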

    Read the article

  • Windows 7 won't boot from any boot loader except 'Windows Boot Manager' after partition resize

    - by user2468327
    I have a triple-boot system on a single SSD: OS X, Windows 7, and Ubuntu. I use Chimera (basically another version of Chameleon) as my boot loader. Usually I can boot all three without any issue, but after using GParted to make my Ubuntu partition 2GB larger, Windows 7 throws me an error when I try to boot it from either Chimera or GRUB. The error is consistently 0xc000000e, "can't find \Boot\BCD" (slightly paraphrased). However, I can still get into Windows by selecting "Windows Boot Manager" from the boot options in my BIOS. I've already tried several known fixes for similar issues, including bootrec /rebuildbcd (and variations), bootrec /fixmbr, and bootrec /fixboot. I've also tried chkdsk. At best this has made it so Windows 7 boots on its own by default (making me have to reinstall Chimera and change back my boot settings in the BIOS). At worst it made it so Windows won't boot, period. Now I'm back full circle where I started. A detail that might be useful is that bootrec /rebuildbcd says that the number of found Windows installations is 0. How do I get it back so I can boot Win7 through another boot loader, so I don't have to select it manually in the BIOS? Preferably without a reinstall.
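    One more avenue worth a look (a sketch, assuming the Windows partition appears as C: inside the recovery environment -- drive letters often differ there) is bcdboot, which rebuilds the BCD store from the installed Windows files rather than searching for installations the way bootrec does:

    rem Recreate the boot files and BCD on C: from the existing installation:
    bcdboot C:\Windows /s C: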

    Read the article

  • Ubuntu 12.04 crashed while trying to install Ubuntu 14.04, no longer have access

    - by FACE
    Posting from my laptop, as my Compaq desktop crashed. Not very computer savvy, and not sure what info I should provide... but here's a start: a 9-year-old Compaq SR 1650NX desktop, AMD Athlon 3500+, probably 1 GB RAM (duh....). Was running 12.04 and attempting to upgrade to 14.04; not sure if I interrupted it. Pretty much stuck on (constantly redirected to) a page which says "GNU GRUB version 1.99-2"; it offers several choices (as written):

    Ubuntu, with Linux 3.2.0-67-generic
    Ubuntu, with Linux 3.2.0-67-generic (recovery mode)
    Previous Linux versions
    Memory Test (memtest86+)
    Memory Test (memtest86+, serial console 115200)

    But none of the selections seem to get me anywhere (i.e., I pick an entry, but can't seem to run any commands -- not that I know anything useful); I escape from those pages by repeatedly hitting Ctrl-Alt-Del. Any help would be appreciated; will provide additional info when requested. Thanx (hopefully), in advance. (If I don't respond immediately, it just means I had to attend to other concerns... will leave the page open and check back in as often as possible.) FACE
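    If the upgrade was interrupted partway through, one common rescue path (a sketch -- it assumes the root filesystem is intact) is the recovery-mode entry in that GRUB menu: choose it, drop to the root shell, and let dpkg finish what the upgrade started:

    # The recovery shell mounts / read-only; make it writable first.
    mount -o remount,rw /
    # Finish configuring any half-installed packages:
    dpkg --configure -a
    # Then pull in anything the interrupted upgrade left missing or broken
    # (may require enabling networking from the recovery menu first):
    apt-get -f install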

    Read the article

  • Amazon EC2: Problem Setting up FTP Server

    - by Muntasir
    After setting up my vsftpd server on EC2, I am facing a problem. My client is FileZilla, and I am getting this error:

    Response: 230 Login successful.
    Command: OPTS UTF8 ON
    Response: 200 Always in UTF8 mode.
    Status: Connected
    Status: Retrieving directory listing...
    Command: PWD
    Response: 257 "/"
    Command: TYPE I
    Response: 200 Switching to Binary mode.
    Command: PASV
    Response: 500 OOPS: invalid pasv_address
    Command: PORT 10,130,8,44,240,50
    Response: 500 OOPS: priv_sock_get_cmd
    Error: Failed to retrieve directory listing
    Error: Connection closed by server

    This is the current setting in my vsftpd.conf:

    #nopriv_user=ftpsecure
    #async_abor_enable=YES
    # ASCII mangling is a horrible feature of the protocol.
    #ascii_upload_enable=YES
    #ascii_download_enable=YES
    # You may specify a file of disallowed anonymous e-mail addresses. Apparently
    # useful for combatting certain DoS attacks.
    #deny_email_enable=YES
    # (default follows)
    #banned_email_file=/etc/vsftpd/banned_emails
    #
    chroot_local_user=YES
    #chroot_list_enable=YES
    # (default follows)
    #chroot_list_file=/etc/vsftpd/chroot_list
    #
    #ls_recurse_enable=YES
    #
    # When "listen" directive is enabled, vsftpd runs in standalone mode and
    # listens on IPv4 sockets. This directive cannot be used in conjunction
    # with the listen_ipv6 directive.
    listen=YES
    #
    # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
    # sockets, you must run two copies of vsftpd with two configuration files.
    # Make sure that one of the listen options is commented!!
    #listen_ipv6=YES
    pam_service_name=vsftpd
    userlist_enable=YES
    tcp_wrappers=YES
    pasv_enable=YES
    pasv_min_port=2345
    pasv_max_port=2355
    listen_port=1024
    pasv_address=ec2-xxxxxxx.compute-1.amazonaws.com
    pasv_promiscuous=YES

    Note: I have already opened those ports in the security group -- I mean the listen port and the passive min/max range. If someone shows me how to fix this I will be very grateful. Thanks.
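    The "500 OOPS: invalid pasv_address" response suggests vsftpd wants a numeric IP in pasv_address rather than a hostname. A hedged sketch of two possible fixes (the IP below is a documentation placeholder, and pasv_addr_resolve requires vsftpd 2.1 or newer -- check your version):

    # Option 1: give pasv_address the instance's public/Elastic IP directly:
    pasv_address=203.0.113.10
    # Option 2: keep the hostname but tell vsftpd to resolve it at startup:
    pasv_addr_resolve=YES
    pasv_address=ec2-xxxxxxx.compute-1.amazonaws.com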

    Read the article

  • Accessing network shares on Windows 7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot seem to access shared filesystems; Windows is refusing to do discovery on the VPN network. I suspect part of the problem is that Windows persistently considers the VPN connection to be a 'public network'. Normally you can open the Network and Sharing Center and modify this setting, but it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck. I can access the server directly by putting \\192.168.1.240 in the Explorer address bar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share. You do not have permission to access ..."; it never asks for a domain password. I also tried Windows 7's native VPN functionality -- it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there. What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency to not give me much useful information that might let me understand what it is trying to do and what is going wrong.
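    Since it never prompts for credentials, one thing worth trying (a sketch -- the domain, username, and drive letter are placeholders) is forcing explicit domain credentials when mapping the share instead of relying on pass-through authentication:

    rem Map the share with explicit AD credentials; the trailing "*" prompts
    rem for the password interactively:
    net use Z: \\192.168.1.240\share /user:MYDOMAIN\jlloyd *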

    Read the article

  • Upgrading Ubuntu to 9.04 breaks ATI video card driver, VESA and ati/radeon drivers

    - by Neil
    I upgraded my Ubuntu 8.10 to 9.04, and it not only broke the ATI proprietary fglrx driver, but also the ability to use the VESA or open-source ati/radeon drivers. I have an ATI RV610, which is an ATI Radeon HD 2400 XT, and Linux kernels 2.6.27-14-generic and 2.6.28-13-generic. With fglrx, vesa, ati, and radeon, the X server hangs the machine as soon as it starts via invoking X or startx -- observable in that Caps Lock stops working. There's nothing useful in /var/log/Xorg.0.log, no errors at all. This happens with either kernel. When I download a new proprietary driver from ATI, I can install it successfully on kernel 2.6.27, and it doesn't hang when X starts up, but it just shows a blank screen and does nothing. I also can't Ctrl+Alt+Backspace out of X at this point. In all the years I've used ATI's Linux drivers, this has happened almost every time I've upgraded my kernel, but it's been fixable with much effort. This time I'm really stuck. Does anyone know how to fix these problems?
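    To take driver autodetection out of the picture while debugging, you can pin a driver explicitly -- a minimal /etc/X11/xorg.conf sketch (swap "vesa" for "radeon" or "fglrx" to test each in turn; this only rules out misdetection, it won't fix a genuinely broken driver):

    Section "Device"
        Identifier "Card0"
        Driver     "vesa"
    EndSection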

    Read the article

  • "Could not claim interface on camera: -6" when trying to connect usb camera (Kinect)

    - by rzetterberg
    I have installed the freenect library from openkinect.org. That library ships a demo application which you can run from the terminal to test the Kinect. However, when I run it I get the following output:

    richard@behemoth:~$ sudo freenect-glview
    Kinect camera test
    Number of devices found: 1
    Could not claim interface on camera: -6
    Could not open device

    This particular error is thrown by the libusb function libusb_claim_interface, and the error -6 corresponds to LIBUSB_ERROR_BUSY. So my guess is that it has something to do with how the USB device is claimed, rather than specifically with the freenect library or the Kinect itself. So my question is: how can I find out what resource is using this interface, and how can I free it so that I can access the device?

    Edit: What I have tried so far (just to be sure): rebooted; unplugged and replugged the device; tried different USB ports; restarted udev.

    Additional information that might be useful:

    /etc/fstab:
    # /etc/fstab: static file system information.
    #
    # Use 'blkid -o value -s UUID' to print the universally unique identifier
    # for a device; this may be used with UUID= as a more robust way to name
    # devices that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sda1 during installation
    UUID=1c73f217-ac8d-451b-8390-7a680628a856 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=bb49bd29-07ec-45a0-bbab-46fb8362b06b none swap sw 0 0

    uname -a:
    Linux behemoth 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux

    cat /etc/lsb-release:
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=11.10
    DISTRIB_CODENAME=oneiric
    DISTRIB_DESCRIPTION="Ubuntu 11.10"
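    On 3.0-series kernels, one known claimant of the Kinect's camera interface is the gspca_kinect webcam driver, which LIBUSB_ERROR_BUSY is consistent with. A sketch worth trying (assuming the module is actually loaded -- the lsmod check will tell you):

    # See whether the kernel webcam driver has grabbed the Kinect:
    lsmod | grep gspca
    # If gspca_kinect shows up, unload it and retry the demo:
    sudo modprobe -r gspca_kinect
    sudo freenect-glview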

    Read the article

  • Tasks not appearing in Mac Outlook 2011

    - by Tama
    My current workplace uses Macs and my old workplaces used Windows. In my old workplaces I heavily used Outlook's task functionality to manage my workload. I understand that the task functionality in Outlook 2011 for Mac is heavily limited, so I was very pleased to find this useful "how-to" on making the most of Tasks. My problem is that my tasks don't appear in the Tasks folder, or anywhere else for that matter. Even if I search for the title of a task I've recently created, I still can't find it. After some Googling I found this forum thread that suggests it may be a problem with the Outlook database, which points to a Microsoft KB article. So I went through all of the recommended steps for rebuilding/adding a new identity using the Microsoft Database Utility, the theory being that if I create a new identity, I can test task creation with a "blank slate" identity. But even after I change the default identity to my newly created one using the Microsoft Database Utility (which requires restarting the computer), task creation still doesn't work. Any ideas appreciated; I really miss the task functionality in Outlook 2010 for Windows.

    Read the article

< Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >