Search Results

Search found 8634 results on 346 pages for 'base'.


  • Force calling the derived class implementation within a generic function in C#?

    - by Adam Hardy
    Ok, so I'm currently working with a set of classes that I don't have control over, in some pretty generic functions that use these objects. Instead of writing literally tens of functions that essentially do the same thing for each class, I decided to use a generic function instead. Now the classes I'm dealing with are a little weird, in that the derived classes share many of the same properties but the base class they derive from doesn't. One such property example is .Parent, which exists on a huge number of derived classes but not on the base class, and it is this property that I need to use. For ease of understanding I've created a small example as follows:

        class StandardBaseClass {} // These are simulating the SMO objects

        class StandardDerivedClass : StandardBaseClass
        {
            public object Parent { get; set; }
        }

        static class Extensions
        {
            public static object GetParent(this StandardDerivedClass sdc)
            {
                return sdc.Parent;
            }

            public static object GetParent(this StandardBaseClass sbc)
            {
                throw new NotImplementedException("StandardBaseClass does not contain a property Parent");
            }

            // This is the generic function I'm trying to write; it needs the Parent property.
            public static void DoSomething<T>(T foo) where T : StandardBaseClass
            {
                object Parent = ((T)foo).GetParent();
            }
        }

    In the above example, calling DoSomething() will throw the NotImplementedException from the base class's implementation of GetParent(), even though I'm forcing the cast to T, which is a StandardDerivedClass. This is contrary to other casting behaviour, whereby downcasting forces the use of the base class's implementation. I see this behaviour as a bug. Has anyone else out there encountered this?
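    For context when reading this question: a minimal sketch (my addition, reusing the question's own classes) showing that extension-method overloads are bound by the compiler from the static type of the receiver. Inside DoSomething<T>, all the compiler knows about T is its constraint, so the StandardBaseClass overload is chosen at compile time:

        static class ResolutionDemo
        {
            static void Main()
            {
                var sdc = new StandardDerivedClass();

                // Static type is the derived class: the compiler binds the
                // StandardDerivedClass overload, so this returns sdc.Parent.
                object p1 = sdc.GetParent();

                // Same object, but the static type is now the base class: the
                // compiler binds the StandardBaseClass overload, which throws.
                StandardBaseClass sbc = sdc;
                // object p2 = sbc.GetParent(); // NotImplementedException

                // DoSomething<T> behaves like the second call: T is only known
                // to be a StandardBaseClass when the generic method is compiled.
            }
        }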

    Read the article

  • has_many :through default values

    - by David Lyod
    I have a need to design a system to track users' memberships to groups with varying roles (currently three).

        class Group < ActiveRecord::Base
          has_many :memberships
          has_many :users, :through => :memberships
        end

        class Role < ActiveRecord::Base
          has_many :memberships
          has_many :users, :through => :memberships
        end

        class Membership < ActiveRecord::Base
          belongs_to :user
          belongs_to :role
          belongs_to :group
        end

        class User < ActiveRecord::Base
          has_many :memberships
          has_many :groups, :through => :memberships
        end

    Ideally what I want is to simply set @group.users << @user and have the membership get the correct role. I can use :conditions to select data that has been manually inserted, as such:

        :conditions => ["memberships.grouprole_id = ? ", Grouprole.find_by_name('user')]

    But when creating the membership to the group, the grouprole_id is not being set. Is there a way to do this, as at present I have a somewhat repetitive piece of code for each user role in my Group model.
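    One common way to attack this (a sketch, not from the question: the add_user helper and the Role lookup are my assumptions, and if the join column really is grouprole_id as the :conditions snippet suggests, substitute the Grouprole model accordingly) is to create the membership explicitly, so the role is assigned in one place:

        class Group < ActiveRecord::Base
          has_many :memberships
          has_many :users, :through => :memberships

          # Hypothetical helper: build the join record yourself so the role
          # is set in a single place instead of once per role.
          def add_user(user, role_name = 'user')
            memberships.create(:user => user, :role => Role.find_by_name(role_name))
          end
        end

    Usage would then be @group.add_user(@user) or @group.add_user(@user, 'admin') instead of @group.users << @user.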

    Read the article

  • Rails active record association problem

    - by Harm de Wit
    Hello, I'm new at active record association in Rails, so I don't know how to solve the following problem: I have tables called 'meetings' and 'users'. I have correctly associated these two together by making a table 'participants' and setting the following association statements:

        class Meeting < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :users, :through => :participants
        end

    and

        class Participant < ActiveRecord::Base
          belongs_to :meeting
          belongs_to :user
        end

    and the last model

        class User < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
        end

    At this point all is going well, and I can access the user values of attending participants of a specific meeting by calling @meeting.users in the normal meeting show.html.erb view. Now I want to make connections between these participants. Therefore I made a model called 'connections' and created the columns 'meeting_id', 'user_id' and 'connected_user_id'. So these connections are kind of like friendships within a certain meeting. My question is: how can I set the model associations so I can easily control these connections? I would like to see a solution where I could use

        @meeting.users.each do |user|
          user.connections.each do |c|
            <do something>
          end
        end

    I tried this by changing the model of meetings to this:

        class Meeting < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :users, :through => :participants
          has_many :connections, :dependent => :destroy
          has_many :participating_user_connections, :through => :connections, :source => :user
        end

    Please, does anyone have a solution/tip how to solve this the Rails way?
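    A sketch of one plausible set of associations (my addition; the :connected_user name is an assumption based on the connected_user_id column):

        class Connection < ActiveRecord::Base
          belongs_to :meeting
          belongs_to :user
          belongs_to :connected_user, :class_name => 'User'
        end

        class User < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :connections, :dependent => :destroy
        end

    With this in place, the @meeting.users.each { |user| user.connections.each { |c| ... } } loop from the question works, though connections would still span all meetings; scoping them to one meeting needs a condition such as user.connections.find_all_by_meeting_id(@meeting.id).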

    Read the article

  • Why should I install Python packages into `~/.local`?

    - by Matthew Rankin
    Background

    I don't develop using OS X's system-provided Python versions (on OS X 10.6 that's Python 2.5.4 and 2.6.1). I don't install anything in the site-packages directory for the OS-provided versions of Python. (The only exception is Mercurial installed from a binary package, which installs two packages in the Python 2.6.1 site-packages directory.) I installed three versions of Python, all using the Mac OS X installer disk image:

      • Python 2.6.6
      • Python 2.7
      • Python 3.1.2

    I don't like polluting the site-packages directory for my Python installations, so I only install the following five base packages in the site-packages directory (for the actual method/commands used to install these, see SO Question 4324558):

      • setuptools/ez_setup
      • distribute
      • pip
      • virtualenv
      • virtualenvwrapper

    All other packages are installed in virtualenvs. I am the only user of this MacBook.

    Questions

    Given the above background, why should I install the five base packages in ~/.local? Since I'm installing these base packages into the site-packages directories of Python distributions that I've installed, I'm isolated from OS X's Python distributions. Using this method, should I be concerned about Glyph's comment that other things could potentially break (see his comment below)? Again, I'm only interested in where to install those five base packages.

    Related Questions/Info

    I'm asking because of Glyph's comment to my answer to SO question 4314376, which stated:

        NO. NEVER EVER do sudo python setup.py install whatever. Write a ~/.pydistutils.cfg that puts your pip installation into ~/.local or something. Especially files named ez_setup.py tend to suck down newer versions of things like setuptools and easy_install, which can potentially break other things on your operating system.

    Previously, I asked "What's the proper way to install pip, virtualenv, and distribute for Python?". However, no one answered the "why" of using ~/.local.
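    For reference, a minimal ~/.pydistutils.cfg along the lines Glyph describes might look like this (the exact paths are assumptions; adjust the Python version to match your interpreter):

        [install]
        install_lib = ~/.local/lib/python2.7/site-packages
        install_scripts = ~/.local/bin

    distutils-based installers (including python setup.py install) read this file, so installs land under ~/.local instead of the interpreter's global site-packages.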

    Read the article

  • Variable from block is put into a calculation but throws off wrong reading

    - by user2926620
    I am having trouble retrieving a double variable that is established outside a block and assigned inside it; I want the value of that variable back so I can apply it to a calculation. The variable I want returned is:

        double quarter = 0;

    but when my first else/if statement uses quarter, it sees 0 and not the value assigned in my switch block. What can I do to retrieve the value?

        double quarter = 0;

        // Date entry will be calculated by how much KW user enters
        switch (input) {
            case "2/15/13":
                quarter = kwUsed * 0.10;
                break;
            case "4/15/13":
                quarter = kwUsed * 0.12;
                break;
            case "8/15/13":
                quarter = kwUsed * 0.15;
                break;
            case "11/15/13":
                quarter = kwUsed * 0.15;
                break;
            default:
                System.out.println("Invalid date");
        }

        // Declaring variables for calculations
        double base = 0;
        double over = 0;
        double excess = 0;
        double math1 = 0;
        double math2 = 0;

        // KW Calculations
        if (kwUsed <= 350) {
            base = quarter;
        } else if (kwUsed <= 500) {
            math1 = ((kwUsed - 350) * quarter);
            base = ((kwUsed * quarter) - math1);
            over = ((math1 * 0.1) + math1);
        } else if (kwUsed > 500) {
            math2 = ((kwUsed - 350) * 0.1);
            base = ((kwUsed * 0.1) - math2);
            math2 = ((kwUsed - 350) - 50);
            over = ((math2 * 0.1) + (15 * 0.1));
            double math3 = ((kwUsed - 500) * 0.1);
            excess = ((math3 * 0.25) + math3);
        }

    Edited to clarify question.
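    A sketch of one likely failure mode (my reading; the question doesn't show how input is read): if input doesn't exactly match a case label, the default branch runs and quarter keeps its initial 0, which would produce exactly the symptom described. Trimming the input and checking for the fall-through makes this visible:

        import java.util.Scanner;

        public class QuarterCheck {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                double kwUsed = in.nextDouble();
                String input = in.next().trim();   // stray whitespace would break the switch match

                double quarter = 0;
                switch (input) {
                    case "2/15/13":  quarter = kwUsed * 0.10; break;
                    case "4/15/13":  quarter = kwUsed * 0.12; break;
                    case "8/15/13":
                    case "11/15/13": quarter = kwUsed * 0.15; break;
                    default: System.out.println("Invalid date: '" + input + "'");
                }

                if (quarter == 0) {
                    // Reaching here explains the 0 seen later in the if/else chain.
                    System.out.println("quarter was never assigned by the switch");
                }
                System.out.println("quarter = " + quarter);
            }
        }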

    Read the article

  • Why is this giving me an infinite loop?

    - by Chase Yuan
    I was going through code used to calculate an investment until it has doubled, and I ran into an infinite loop that I can't seem to solve. Can anyone figure out why this is giving me an infinite loop? I've gone through it myself but I can't seem to find the problem. The "period" referred to is how many times per year the interest is compounded.

        double account = 0; // declares the variables to be used
        double base = 0;
        double interest = 0;
        double rate = 0;
        double result = 0;
        double times = 0;
        int years = 0;
        int j;

        System.out.println("This is a program that calculates interest.");
        Scanner kbReader = new Scanner(System.in); // enters in all data
        System.out.print("Enter account balance: ");
        account = kbReader.nextDouble();
        System.out.print("Enter interest rate (as decimal): ");
        rate = kbReader.nextDouble();
        System.out.println("    " + "Years to double" + "    " + "Ending balance");
        base = account;
        result = account;
        for (j = 0; j < 3; j++) {
            System.out.print("Enter period: ");
            times = kbReader.nextDouble();
            while (account < base * 2) {
                interest = account * rate / times;
                account = interest + base;
                years++;
            }
            account = (((int) (account * 100)) / 100.0); // results
            System.out.print("    " + i + "    " + account + "\n");
            account = result;
        }

    The code should ask for three "periods", or three different times the entered data is compounded per year (e.g. annually, monthly, daily, etc.). Thanks a lot!
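    For readers puzzling over this one, a sketch of the likely culprit (my reading, not the asker's conclusion): inside the while loop, account = interest + base throws away the accumulated balance, so the sequence converges toward base / (1 - rate / times), which stays below base * 2 whenever rate / times < 0.5, and the loop never exits. (The years counter is also never reset between periods.) Compounding onto the running balance terminates:

        // A possible corrected core (assumption: the intent is compound interest).
        static int periodsToDouble(double base, double rate, double times) {
            double account = base;
            int periods = 0;
            while (account < base * 2) {
                double interest = account * rate / times; // interest for one compounding period
                account += interest;                      // was: account = interest + base;
                periods++;
            }
            return periods; // divide by times to get years
        }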

    Read the article

  • Mutate an object into an instance of one its subclasses

    - by Gohu
    Hi, is it possible to mutate an object into an instance of a derived class of the initial object's class? Something like:

        class Base():
            def __init__(self):
                self.a = 1
            def mutate(self):
                self = Derived()

        class Derived(Base):
            def __init__(self):
                self.b = 2

    But that doesn't work.

        >>> obj = Base()
        >>> obj.mutate()
        >>> obj.a
        1
        >>> obj.b
        AttributeError...

    If this isn't possible, how should I do it otherwise? My problem is the following: my Base class is like a "summary", and the Derived class is the "whole thing". Of course getting the "whole thing" is a bit expensive, so working on summaries as long as possible is the point of having these two classes. But you should be able to get it if you want, and then there's no point in having the summary anymore, so every reference to the summary should now be (or contain, at least) the whole thing. I guess I would have to create a class that can hold both, right?

        class Thing():
            def __init__(self):
                self.summary = Summary()
                self.whole = None
            def get_whole_thing(self):
                self.whole = Whole()
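    For readers landing here: CPython does allow rebinding an instance's class, which gives roughly the effect the question sketches. A minimal sketch (my addition, not from the question; note that Derived.__init__ is not called automatically, so mutate must set up the new attributes itself, and the trick only works for ordinary Python classes with compatible layouts):

        class Base:
            def __init__(self):
                self.a = 1

            def mutate(self):
                # Rebind the instance's class in place; every existing
                # reference to this object now sees a Derived.
                self.__class__ = Derived
                self.b = 2  # do Derived's initialization by hand


        class Derived(Base):
            def __init__(self):
                super().__init__()
                self.b = 2


        obj = Base()
        obj.mutate()
        print(obj.a, obj.b)              # -> 1 2
        print(isinstance(obj, Derived))  # -> True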

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I'm excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command:

    The New jQuery Extender Base Class

    This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx

    Why are we rewriting Ajax Control Toolkit controls with jQuery?

    There are very few developers actively working with the Microsoft Ajax Library, while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features.

    Instantiating Controls with data-* Attributes

    We took advantage of the new jQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit control to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script:

        <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" />
        <script type="text/javascript">
        //<![CDATA[
        Sys.Application.add_init(function() {
            $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check",
                "CheckedImageUrl":"ToggleButton_Checked.gif",
                "ImageHeight":19,
                "ImageWidth":19,
                "UncheckedImageAlternateText":"UnCheck",
                "UncheckedImageUrl":"ToggleButton_Unchecked.gif",
                "id":"ctl00_SampleContent_ToggleButtonExtender1"},
                null, null, $get("ctl00_SampleContent_CheckBox1"));
        });
        //]]>
        </script>

    Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible!
    The jQuery version of the ToggleButton injects the following HTML into the page:

        <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked"
            data-act-togglebuttonextender="imageWidth:19, imageHeight:19,
                uncheckedImageUrl:'ToggleButton_Unchecked.gif',
                checkedImageUrl:'ToggleButton_Checked.gif',
                uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET',
                checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" />

    Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (you don't need to feel embarrassed when selecting View Source in your browser). (See the sketch at the end of this entry for how such attributes can be picked up on the client.)

    Ajax Control Toolkit versus jQuery

    So in a jQuery world, why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world:

      • Ajax Control Toolkit controls run on both the server and client. jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server (think of the AjaxFileUpload control) then you can't use a pure jQuery solution.

      • Ajax Control Toolkit controls provide a better Visual Studio experience. You don't get any design-time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties.

      • Ajax Control Toolkit controls shield you from working with JavaScript. I like writing code in JavaScript. However, not all developers like JavaScript, and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript.

    Better ToolkitScriptManager Documentation

    With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx

    Other Fixes

    This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API.
    To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx

    Summary

    We've made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.
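    As a footnote to the data-* instantiation described above, here is a rough sketch (my illustration, not the toolkit's actual implementation) of how a jQuery-based extender could find elements carrying the data-act-togglebuttonextender attribute and turn the option string shown earlier into an object. The parseOptions helper is hypothetical:

        // Hypothetical sketch: the attribute value is the body of a JavaScript
        // object literal, so wrap it in braces and evaluate it.
        function parseOptions(text) {
            return new Function("return {" + text + "};")();
        }

        $(function () {
            $("[data-act-togglebuttonextender]").each(function () {
                var options = parseOptions($(this).attr("data-act-togglebuttonextender"));
                // e.g. options.imageWidth === 19,
                //      options.checkedImageUrl === 'ToggleButton_Checked.gif'
                console.log(options);
            });
        });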

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCON London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for silverlight objects that depend on the silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:

      • your custom control running inside the silverlight runtime in the browser
      • your plug-in running inside an IDE
      • your activity running inside a windows workflow
      • your code running inside a java EE bean
      • your code inheriting from a COM+ (enterprise services) component
      • etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like silverlight controls is the way they depend on the silverlight runtime – they don't implement some silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the silverlight DependencyObject.

    Wrapping it up?

    An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters

    A perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods, etc.) that are actually replaced for that test or for all the tests.

    Role based perimeters

    In the case of creating a wrapper around an object, one really creates a "role based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven't wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role based wrapper – you get a role based perimeter anywhere your code interacts with the system. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).
    Ad-Hoc Isolation perimeters

    The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters

    Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile time) interception of method calls on the system. One way to get such abilities is by using Aspect oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:

      • disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
      • faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()
      • replacing an async mechanism with a synchronous one (for example, replacing all timers with your own timer behavior that always ticks immediately upon calls to "start()", on the same caller thread)
      • replacing event mechanisms with your own event mechanism (to allow "firing" system events)
      • changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all Dependency Property set and get calls with calls to an in-memory value store instead of using the one built into silverlight, which threw exceptions without a browser)

    Several questions could jump in:

      • How do you know what to fake? (how do you discover the perimeter?)
      • How do you fake it?
      • Wouldn't this be problematic – to fake something you don't own? It might change in the future.

    How do you discover the perimeter to fake?

    To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens:

      • Can I even create an instance of this object without getting an exception?
      • Can I invoke this method on that instance without getting an exception?
      • Can I verify that some call into the system happened?

    You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain specific test framework after you're done creating the perimeter you need.
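    To make the wishful-invocation loop concrete, here is a minimal sketch in NUnit-style C# (my illustration, not from the talk; MyCustomControl is a placeholder for whatever framework-inheriting type you are bringing under test, and the actual faking would be done with a tool like Typemock or CThru, whose calls are omitted):

        [Test]
        public void WishfulInvocation_DiscoverPerimeter()
        {
            try
            {
                // Step 1: wishfully invoke what you'd like a real unit test to do.
                var control = new MyCustomControl();   // placeholder type under test
                control.DoSomething();                 // placeholder method
            }
            catch (Exception ex)
            {
                // Step 2: the exception points at a dependency; pick a frame in
                // the stack trace that belongs to the host system, disable or
                // fake it with your isolation framework, then run again.
                Console.WriteLine(ex.StackTrace);
                throw;
            }
            // Step 3: when the invocation stops throwing, the set of faked
            // calls is your (subterranean) perimeter for this invocation.
        }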

    Read the article

  • Handling HTTP 404 Error in ASP.NET Web API

    - by imran_ku07
    Introduction:

    Building modern HTTP/RESTful/RPC services has become very easy with the new ASP.NET Web API framework. Using the ASP.NET Web API framework, you can create HTTP services which can be accessed from browsers, machines, mobile devices and other clients. Developing HTTP services has now become easier for ASP.NET MVC developers because ASP.NET Web API is now included in ASP.NET MVC. In addition to developing HTTP services, it is also important to return a meaningful response to the client if a resource (URI) is not found (HTTP 404) for some reason (for example, a mistyped resource URI). It is also important to centralize this response so you can configure all 'HTTP 404 Not Found' handling in one place. In this article, I will show you how to handle 'HTTP 404 Not Found' in one place.

    Description:

    Let's say that you are developing an HTTP RESTful application using the ASP.NET Web API framework. In this application you need to handle HTTP 404 errors in a centralized location. From the ASP.NET Web API point of view, you need to handle these situations:

      • No route matched.
      • A route is matched but no {controller} has been found on the route.
      • No type with the {controller} name has been found.
      • No matching action method was found in the selected controller, because no action method starts with the request's HTTP method verb, no action method with an IActionHttpMethodProviderRoute implemented attribute was found, no method with the {action} name was found, or no method with a matching {action} name was found.

    Now, let's create an ErrorController with a Handle404 action method. This action method will be used in all of the above cases for sending an HTTP 404 response message to the client.

        public class ErrorController : ApiController
        {
            [HttpGet, HttpPost, HttpPut, HttpDelete, HttpHead, HttpOptions, AcceptVerbs("PATCH")]
            public HttpResponseMessage Handle404()
            {
                var responseMessage = new HttpResponseMessage(HttpStatusCode.NotFound);
                responseMessage.ReasonPhrase = "The requested resource is not found";
                return responseMessage;
            }
        }

    You can easily change the above action method to send some other specific HTTP 404 error response. If a client of your HTTP service sends a request to a resource (URI) and no route matches it on the server, then you can route the request to the above Handle404 method using a custom route. Put this route at the very bottom of your route configuration:

        routes.MapHttpRoute(
            name: "Error404",
            routeTemplate: "{*url}",
            defaults: new { controller = "Error", action = "Handle404" }
        );

    Now you need to handle the case when there is no {controller} in the matching route, or when no type with the {controller} name is found. You can easily handle this case and route the request to the above Handle404 method using a custom IHttpControllerSelector.
    Here is the definition of a custom IHttpControllerSelector:

        public class HttpNotFoundAwareDefaultHttpControllerSelector : DefaultHttpControllerSelector
        {
            public HttpNotFoundAwareDefaultHttpControllerSelector(HttpConfiguration configuration)
                : base(configuration)
            {
            }

            public override HttpControllerDescriptor SelectController(HttpRequestMessage request)
            {
                HttpControllerDescriptor decriptor = null;
                try
                {
                    decriptor = base.SelectController(request);
                }
                catch (HttpResponseException ex)
                {
                    var code = ex.Response.StatusCode;
                    if (code != HttpStatusCode.NotFound)
                        throw;
                    var routeValues = request.GetRouteData().Values;
                    routeValues["controller"] = "Error";
                    routeValues["action"] = "Handle404";
                    decriptor = base.SelectController(request);
                }
                return decriptor;
            }
        }

    Next, it is also required to pass the request to the above Handle404 method if no matching action method is found in the selected controller, for the reasons discussed above. This situation can also be easily handled through a custom IHttpActionSelector. Here is the source of the custom IHttpActionSelector:

        public class HttpNotFoundAwareControllerActionSelector : ApiControllerActionSelector
        {
            public HttpNotFoundAwareControllerActionSelector()
            {
            }

            public override HttpActionDescriptor SelectAction(HttpControllerContext controllerContext)
            {
                HttpActionDescriptor decriptor = null;
                try
                {
                    decriptor = base.SelectAction(controllerContext);
                }
                catch (HttpResponseException ex)
                {
                    var code = ex.Response.StatusCode;
                    if (code != HttpStatusCode.NotFound && code != HttpStatusCode.MethodNotAllowed)
                        throw;
                    var routeData = controllerContext.RouteData;
                    routeData.Values["action"] = "Handle404";
                    IHttpController httpController = new ErrorController();
                    controllerContext.Controller = httpController;
                    controllerContext.ControllerDescriptor = new HttpControllerDescriptor(controllerContext.Configuration, "Error", httpController.GetType());
                    decriptor = base.SelectAction(controllerContext);
                }
                return decriptor;
            }
        }

    Finally, we need to register the custom IHttpControllerSelector and IHttpActionSelector. Open the global.asax.cs file and add these lines:

        configuration.Services.Replace(typeof(IHttpControllerSelector),
            new HttpNotFoundAwareDefaultHttpControllerSelector(configuration));
        configuration.Services.Replace(typeof(IHttpActionSelector),
            new HttpNotFoundAwareControllerActionSelector());

    Summary:

    In addition to building an application for HTTP services, it is also important to send meaningful, centralized information in the response when something goes wrong, for example an 'HTTP 404 Not Found' error. In this article, I showed you how to handle the 'HTTP 404 Not Found' error in a centralized location. Hopefully you will enjoy this article too.
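    Pulling the article's pieces together, here is a minimal sketch of all the wiring in one place (the WebApiConfig.Register shape and the using directives are my assumptions; the catch-all route and the two Services.Replace calls are from the article):

        using System.Web.Http;
        using System.Web.Http.Controllers;
        using System.Web.Http.Dispatcher;

        public static class WebApiConfig
        {
            public static void Register(HttpConfiguration configuration)
            {
                // Catch-all route from the article; keep it last.
                configuration.Routes.MapHttpRoute(
                    name: "Error404",
                    routeTemplate: "{*url}",
                    defaults: new { controller = "Error", action = "Handle404" });

                // Custom selectors from the article.
                configuration.Services.Replace(typeof(IHttpControllerSelector),
                    new HttpNotFoundAwareDefaultHttpControllerSelector(configuration));
                configuration.Services.Replace(typeof(IHttpActionSelector),
                    new HttpNotFoundAwareControllerActionSelector());
            }
        }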

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts:

      • Part 1 – Table per Hierarchy (TPH)
      • Part 2 – Table per Type (TPT)

    In today's blog post I am going to discuss Table per Concrete Type (TPC), which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines for choosing an inheritance strategy, based mainly on what we've learned in this series.

    TPC and Entity Framework in the Past

    Table per Concrete Type is perhaps the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far, and I've seen in some resources that it was even discouraged. The reason for that is simply that the Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means that if you are following EF's Database-First or Model-First approaches, then configuring TPC requires manually writing XML in the EDMX file, which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with the fluent API, just like the other strategies, and you don't need to avoid TPC due to the lack of designer support as you would probably do in the other EF approaches.

    Table per Concrete Type (TPC)

    In Table per Concrete Type (aka Table per Concrete Class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure. As you can see, the SQL schema is not aware of the inheritance; effectively, we've mapped two unrelated tables to a more expressive class structure. If the base class were concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns.

    TPC Implementation in Code First

    Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on the EntityMappingConfiguration class called MapInheritedProperties that does exactly this for us.
    Here is the complete object model as well as the fluent API to create a TPC mapping:

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                modelBuilder.Entity<BankAccount>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("BankAccounts");
                });
                modelBuilder.Entity<CreditCard>().Map(m =>
                {
                    m.MapInheritedProperties();
                    m.ToTable("CreditCards");
                });
            }
        }

    The Importance of the EntityMappingConfiguration Class

    As a side note, it is worth mentioning that the EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is a snapshot of this class:

        namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping
        {
            public class EntityMappingConfiguration<TEntityType> where TEntityType : class
            {
                public ValueConditionConfiguration Requires(string discriminator);
                public void ToTable(string tableName);
                public void MapInheritedProperties();
            }
        }

    As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT mapping, and now we are using its MapInheritedProperties along with the ToTable method to create our TPC mapping.

    TPC Configuration is Not Done Yet!

    We are not quite done with our TPC configuration, and there is more to this story, even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of the BankAccount and CreditCard types and tries to add them to the database:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount();
            CreditCard creditCard = new CreditCard() { CardType = 1 };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }

    Running this code throws an InvalidOperationException with this message:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.

    The reason we got this exception is that DbContext.SaveChanges() internally invokes the SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method, in turn, by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges merely iterates over all entries in the ObjectStateManager and invokes AcceptChanges on each of them.
    Since the entities are in the Added state, AcceptChanges replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database, and that's where the problem occurs: both entities have been assigned the same primary key value by the database (i.e. BillingDetailId = 1 for both), and the ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value, hence it throws. If you take a closer look at the TPC SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both the BankAccounts and CreditCards tables has been marked as identity.

    How to Solve the Identity Problem in TPC

    As you saw, using SQL Server's int identity columns doesn't work very well together with TPC, since there will be duplicate entity keys when inserting into subclass tables that all have the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server's int identity should be used. Some other RDBMSes have mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds, will solve the problem, yet another solution is to completely switch off identity on the primary key property. As a result, we need to take responsibility for providing unique keys when inserting records into the database. We will go with this solution since it works regardless of which database engine is used.

    Switching Off Identity in Code First

    We can switch off identity simply by placing the DatabaseGenerated attribute on the primary key property and passing DatabaseGenerationOption.None to its constructor. DatabaseGenerated is a new data annotation which has been added to the System.ComponentModel.DataAnnotations namespace in CTP5:

        public abstract class BillingDetail
        {
            [DatabaseGenerated(DatabaseGenerationOption.None)]
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

    As always, we can achieve the same result by using the fluent API, if you prefer that:

        modelBuilder.Entity<BillingDetail>()
                    .Property(p => p.BillingDetailId)
                    .HasDatabaseGenerationOption(DatabaseGenerationOption.None);

    Working With The Object Model

    Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects:

        using (var context = new InheritanceMappingContext())
        {
            BankAccount bankAccount = new BankAccount()
            {
                BillingDetailId = 1
            };
            CreditCard creditCard = new CreditCard()
            {
                BillingDetailId = 2,
                CardType = 1
            };

            context.BillingDetails.Add(bankAccount);
            context.BillingDetails.Add(creditCard);
            context.SaveChanges();
        }
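    As an aside, the GUID alternative mentioned above could look roughly like this (a sketch under the assumption that you are free to change the key type; this is not code from the post):

        public abstract class BillingDetail
        {
            // Client-generated GUIDs cannot collide across the BankAccounts
            // and CreditCards tables, so no identity column is needed.
            public Guid BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        // When creating entities, supply the key yourself:
        // var bankAccount = new BankAccount { BillingDetailId = Guid.NewGuid() };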
    Polymorphic Associations with TPC are Problematic

    The main problem with this approach is that it doesn't support polymorphic associations very well. After all, in the database, associations are represented as foreign key relationships, and in TPC the subclasses are all mapped to different tables, so a polymorphic association to their base class (the abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the domain model we introduced here, where User has a polymorphic association with BillingDetail. This would be problematic in our TPC schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column which would have to refer to both concrete subclass tables. This isn't possible with regular foreign key constraints.

    Schema Evolution with TPC is Complex

    A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses.

    Generated SQL

    Let's examine the SQL output for polymorphic queries in a TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that are executed in the database:

        var query = from b in context.BillingDetails
                    select b;

    Just like the SQL query generated by the TPT mapping, the CASE statements that you see at the beginning of the query are merely there to ensure that columns that are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type).

    TPC's SQL Queries are Union Based

    As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result (look at the lines highlighted in yellow). EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries being combined project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will perform really well, since here we can let the database optimizer find the best execution plan to combine rows from several tables. There are also no joins involved, so it has better performance than the SQL queries generated by TPT, where a join is required between the base and subclass tables.

    Choosing Strategy Guidelines

    Before we get into this discussion, I want to emphasize that there is no single "best strategy fits all scenarios" option. As you saw, each of the approaches has its own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario:

      • If you don't require polymorphic associations or queries, lean toward TPC; in other words, if you never or rarely query for BillingDetails and you have no class that has an association to the BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn't usually required, and when modification of the base class in the future is unlikely.

      • If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH.
        Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won't create problems in the long run.

      • If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC.

    By default, choose TPH only for simple problems. For more complex cases (or when you're overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn't mean you can ignore persistence concerns when designing your classes.

    Summary

    In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, which is inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you a better insight into the mapping of inheritance hierarchies, as well as into choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting!

    Read the article

  • Acer aspire 5735z wireless not working after upgrade to 11.10

    - by Jon
    I can't get my wifi card to work at all after upgrading to 11.10 Oneiric, and I'm not sure where to start to fix this. I've tried using the Additional Drivers tool, but it shows that no additional drivers are needed. Before my upgrade I had a driver working for the Rt2860 chipset. Any help on this would be much appreciated... thanks, Jon

        jon@ubuntu:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:1d:72:ec:76:d5
                  inet addr:192.168.1.134  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21d:72ff:feec:76d5/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:7846 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:7213 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:8046624 (8.0 MB)  TX bytes:1329442 (1.3 MB)
                  Interrupt:16

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:91 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:91 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:34497 (34.4 KB)  TX bytes:34497 (34.4 KB)

    I've included my dmesg output below.

    [ 0.428818] NET: Registered protocol family 2 [ 0.429003] IP route cache hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.430562] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.436614] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.437409] TCP: Hash tables configured (established 524288 bind 65536) [ 0.437412] TCP reno registered [ 0.437431] UDP hash table entries: 2048 (order: 4, 65536 bytes) [ 0.437482] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes) [ 0.437678] NET: Registered protocol family 1 [ 0.437705] pci 0000:00:02.0: Boot video device [ 0.437892] PCI: CLS 64 bytes, default 64 [ 0.437916] Simple Boot Flag at 0x57 set to 0x1 [ 0.438294] audit: initializing netlink socket (disabled) [ 0.438309] type=2000 audit(1319243447.432:1): initialized [ 0.440763] Freeing initrd memory: 13416k freed [ 0.468362] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.488192] VFS: Disk quotas dquot_6.5.2 [ 0.488254] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.488888] fuse init (API version 7.16) [ 0.488985] msgmni has been set to 5890 [ 0.489381] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.489413] io scheduler noop registered [ 0.489415] io scheduler deadline registered [ 0.489460] io scheduler cfq registered (default) [ 0.489583] pcieport 0000:00:1c.0: setting latency timer to 64 [ 0.489633] pcieport 0000:00:1c.0: irq 40 for MSI/MSI-X [ 0.489699] pcieport 0000:00:1c.1: setting latency timer to 64 [ 0.489741] pcieport 0000:00:1c.1: irq 41 for MSI/MSI-X [ 0.489800] pcieport 0000:00:1c.2: setting latency timer to 64 [ 0.489841] pcieport 0000:00:1c.2: irq 42 for MSI/MSI-X [ 0.489904] pcieport 0000:00:1c.3: setting latency timer to 64 [ 0.489944] pcieport 0000:00:1c.3: irq 43 for MSI/MSI-X [ 0.490006] pcieport 0000:00:1c.4: setting latency timer to 64 [ 0.490047] pcieport 0000:00:1c.4: irq 44 for MSI/MSI-X [ 0.490126] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.490149] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.490196] intel_idle: MWAIT substates: 0x1110 [ 0.490198] intel_idle: does not run on family 6 model 15 [ 0.491240] ACPI: Deprecated procfs I/F for AC is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 0.493473] ACPI: AC Adapter [ADP1] (on-line) [ 0.493590] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0
[ 0.496771] ACPI: Lid Switch [LID0] [ 0.496818] input: Sleep Button as /devices/LNXSYSTM:00/device:00/PNP0C0E:00/input/input1 [ 0.496823] ACPI: Sleep Button [SLPB] [ 0.496865] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 [ 0.496869] ACPI: Power Button [PWRF] [ 0.496900] ACPI: acpi_idle registered with cpuidle [ 0.498719] Monitor-Mwait will be used to enter C-1 state [ 0.498753] Monitor-Mwait will be used to enter C-2 state [ 0.498761] Marking TSC unstable due to TSC halts in idle [ 0.517627] thermal LNXTHERM:00: registered as thermal_zone0 [ 0.517630] ACPI: Thermal Zone [TZS0] (67 C) [ 0.524796] thermal LNXTHERM:01: registered as thermal_zone1 [ 0.524799] ACPI: Thermal Zone [TZS1] (67 C) [ 0.524823] ACPI: Deprecated procfs I/F for battery is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 0.524852] ERST: Table is not found! [ 0.524948] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 0.680991] ACPI: Battery Slot [BAT0] (battery present) [ 0.688567] Linux agpgart interface v0.103 [ 0.688672] agpgart-intel 0000:00:00.0: Intel GM45 Chipset [ 0.688865] agpgart-intel 0000:00:00.0: detected gtt size: 2097152K total, 262144K mappable [ 0.689786] agpgart-intel 0000:00:00.0: detected 65536K stolen memory [ 0.689912] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000 [ 0.691006] brd: module loaded [ 0.691510] loop: module loaded [ 0.691967] Fixed MDIO Bus: probed [ 0.691990] PPP generic driver version 2.4.2 [ 0.692065] tun: Universal TUN/TAP device driver, 1.6 [ 0.692067] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 0.692146] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.692181] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 20 (level, low) -> IRQ 20 [ 0.692206] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [ 0.692210] ehci_hcd 0000:00:1a.7: EHCI Host Controller [ 0.692255] ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 0.692289] ehci_hcd 0000:00:1a.7: debug port 1 [ 0.696181] ehci_hcd 0000:00:1a.7: cache line size of 64 is not supported [ 0.696202] ehci_hcd 0000:00:1a.7: irq 20, io mem 0xf8904800 [ 0.712014] ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 0.712131] hub 1-0:1.0: USB hub found [ 0.712136] hub 1-0:1.0: 6 ports detected [ 0.712230] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.712243] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 0.712247] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 0.712287] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 0.712315] ehci_hcd 0000:00:1d.7: debug port 1 [ 0.716201] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 0.716216] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xf8904c00 [ 0.732014] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 0.732130] hub 2-0:1.0: USB hub found [ 0.732135] hub 2-0:1.0: 6 ports detected [ 0.732209] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.732223] uhci_hcd: USB Universal Host Controller Interface driver [ 0.732254] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.732262] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 0.732265] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 0.732298] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 0.732325] uhci_hcd 0000:00:1a.0: irq 20, io base 0x00001820 [ 0.732441] hub 3-0:1.0: USB hub found [ 0.732445] hub 3-0:1.0: 2 ports detected [ 0.732508] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 20 (level, low) -> IRQ 20 [ 0.732514] uhci_hcd 0000:00:1a.1: 
setting latency timer to 64 [ 0.732518] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 0.732553] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 0.732577] uhci_hcd 0000:00:1a.1: irq 20, io base 0x00001840 [ 0.732696] hub 4-0:1.0: USB hub found [ 0.732700] hub 4-0:1.0: 2 ports detected [ 0.732762] uhci_hcd 0000:00:1a.2: PCI INT C -> GSI 20 (level, low) -> IRQ 20 [ 0.732768] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 0.732772] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 0.732805] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 0.732829] uhci_hcd 0000:00:1a.2: irq 20, io base 0x00001860 [ 0.732942] hub 5-0:1.0: USB hub found [ 0.732946] hub 5-0:1.0: 2 ports detected [ 0.733007] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.733014] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 0.733017] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 0.733057] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 0.733082] uhci_hcd 0000:00:1d.0: irq 23, io base 0x00001880 [ 0.733202] hub 6-0:1.0: USB hub found [ 0.733206] hub 6-0:1.0: 2 ports detected [ 0.733265] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.733273] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 0.733276] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 0.733313] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 0.733351] uhci_hcd 0000:00:1d.1: irq 17, io base 0x000018a0 [ 0.733466] hub 7-0:1.0: USB hub found [ 0.733470] hub 7-0:1.0: 2 ports detected [ 0.733532] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.733539] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 0.733542] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 0.733578] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 0.733610] uhci_hcd 0000:00:1d.2: irq 18, io base 0x000018c0 [ 0.733730] hub 8-0:1.0: USB hub found [ 0.733736] hub 8-0:1.0: 2 ports detected [ 0.733843] i8042: PNP: PS/2 Controller [PNP0303:KBD0,PNP0f13:PS2M] at 0x60,0x64 irq 1,12 [ 0.751594] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.751605] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.751732] mousedev: PS/2 mouse device common for all mice [ 0.752670] rtc_cmos 00:08: RTC can wake from S4 [ 0.752770] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 0.752796] rtc0: alarms up to one month, y3k, 242 bytes nvram, hpet irqs [ 0.752907] device-mapper: uevent: version 1.0.3 [ 0.752976] device-mapper: ioctl: 4.20.0-ioctl (2011-02-02) initialised: [email protected] [ 0.753028] cpuidle: using governor ladder [ 0.753093] cpuidle: using governor menu [ 0.753096] EFI Variables Facility v0.08 2004-May-17 [ 0.753361] TCP cubic registered [ 0.753482] NET: Registered protocol family 10 [ 0.753966] NET: Registered protocol family 17 [ 0.753992] Registering the dns_resolver key type [ 0.754113] PM: Hibernation image not present or could not be loaded. [ 0.754131] registered taskstats version 1 [ 0.771553] Magic number: 15:152:507 [ 0.771667] rtc_cmos 00:08: setting system clock to 2011-10-22 00:30:48 UTC (1319243448) [ 0.772238] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.772240] EDD information not available. 
[ 0.774165] Freeing unused kernel memory: 984k freed [ 0.774504] Write protecting the kernel read-only data: 10240k [ 0.774755] Freeing unused kernel memory: 20k freed [ 0.775093] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input3 [ 0.779727] Freeing unused kernel memory: 1400k freed [ 0.801946] udevd[84]: starting version 173 [ 0.880950] sky2: driver version 1.28 [ 0.881046] sky2 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.881096] sky2 0000:02:00.0: setting latency timer to 64 [ 0.881197] sky2 0000:02:00.0: Yukon-2 Extreme chip revision 2 [ 0.881871] sky2 0000:02:00.0: irq 45 for MSI/MSI-X [ 0.896273] sky2 0000:02:00.0: eth0: addr 00:1d:72:ec:76:d5 [ 0.910630] ahci 0000:00:1f.2: version 3.0 [ 0.910647] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.910710] ahci 0000:00:1f.2: irq 46 for MSI/MSI-X [ 0.910775] ahci: SSS flag set, parallel bus scan disabled [ 0.910812] ahci 0000:00:1f.2: AHCI 0001.0200 32 slots 4 ports 3 Gbps 0x33 impl SATA mode [ 0.910816] ahci 0000:00:1f.2: flags: 64bit ncq sntf stag pm led clo pio slum part ccc ems sxs [ 0.910821] ahci 0000:00:1f.2: setting latency timer to 64 [ 0.941773] scsi0 : ahci [ 0.941954] scsi1 : ahci [ 0.942038] scsi2 : ahci [ 0.942118] scsi3 : ahci [ 0.942196] scsi4 : ahci [ 0.942268] scsi5 : ahci [ 0.942332] ata1: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904100 irq 46 [ 0.942336] ata2: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904180 irq 46 [ 0.942339] ata3: DUMMY [ 0.942340] ata4: DUMMY [ 0.942344] ata5: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904300 irq 46 [ 0.942347] ata6: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904380 irq 46 [ 1.028061] usb 1-5: new high speed USB device number 2 using ehci_hcd [ 1.181775] usbcore: registered new interface driver uas [ 1.260062] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.261126] ata1.00: ATA-8: Hitachi HTS543225L9A300, FBEOC40C, max UDMA/133 [ 1.261129] ata1.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA [ 1.262360] ata1.00: configured for UDMA/133 [ 1.262518] scsi 0:0:0:0: Direct-Access ATA Hitachi HTS54322 FBEO PQ: 0 ANSI: 5 [ 1.262716] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.262762] sd 0:0:0:0: [sda] 488397168 512-byte logical blocks: (250 GB/232 GiB) [ 1.262824] sd 0:0:0:0: [sda] Write Protect is off [ 1.262827] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.262851] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.287277] sda: sda1 sda2 sda3 [ 1.287693] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.580059] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 1.581188] ata2.00: ATAPI: HL-DT-STDVDRAM GT10N, 1.00, max UDMA/100 [ 1.582663] ata2.00: configured for UDMA/100 [ 1.584162] scsi 1:0:0:0: CD-ROM HL-DT-ST DVDRAM GT10N 1.00 PQ: 0 ANSI: 5 [ 1.585821] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray [ 1.585824] cdrom: Uniform CD-ROM driver Revision: 3.20 [ 1.585953] sr 1:0:0:0: Attached scsi CD-ROM sr0 [ 1.586038] sr 1:0:0:0: Attached scsi generic sg1 type 5 [ 1.632061] usb 6-1: new low speed USB device number 2 using uhci_hcd [ 1.908056] ata5: SATA link down (SStatus 0 SControl 300) [ 2.228065] ata6: SATA link down (SStatus 0 SControl 300) [ 2.228955] Initializing USB Mass Storage driver... [ 2.229052] usbcore: registered new interface driver usb-storage [ 2.229054] USB Mass Storage support registered. 
[ 2.235827] scsi6 : usb-storage 1-5:1.0 [ 2.235987] usbcore: registered new interface driver ums-realtek [ 2.244451] input: B16_b_02 USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1:1.0/input/input4 [ 2.244598] generic-usb 0003:046D:C025.0001: input,hidraw0: USB HID v1.10 Mouse [B16_b_02 USB-PS/2 Optical Mouse] on usb-0000:00:1d.0-1/input0 [ 2.244620] usbcore: registered new interface driver usbhid [ 2.244622] usbhid: USB HID core driver [ 3.091083] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null) [ 3.238275] scsi 6:0:0:0: Direct-Access Generic- Multi-Card 1.00 PQ: 0 ANSI: 0 CCS [ 3.348261] sd 6:0:0:0: Attached scsi generic sg2 type 0 [ 3.351897] sd 6:0:0:0: [sdb] Attached SCSI removable disk [ 47.138012] udevd[334]: starting version 173 [ 47.177678] lp: driver loaded but no devices found [ 47.197084] wmi: Mapper loaded [ 47.197526] acer_wmi: Acer Laptop ACPI-WMI Extras [ 47.210227] acer_wmi: Brightness must be controlled by generic video driver [ 47.566578] Disabling lock debugging due to kernel taint [ 47.584050] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 47.620666] type=1400 audit(1319239895.347:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=624 comm="apparmor_parser" [ 47.620934] type=1400 audit(1319239895.347:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=624 comm="apparmor_parser" [ 47.621108] type=1400 audit(1319239895.347:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=624 comm="apparmor_parser" [ 47.633056] [drm] Initialized drm 1.1.0 20060810 [ 47.722594] i915 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 47.722602] i915 0000:00:02.0: setting latency timer to 64 [ 47.807152] ndiswrapper (check_nt_hdr:141): kernel is 64-bit, but Windows driver is not 64-bit;bad magic: 010B [ 47.807159] ndiswrapper (load_sys_files:206): couldn't prepare driver 'rt2860' [ 47.807930] ndiswrapper (load_wrap_driver:108): couldn't load driver rt2860; check system log for messages from 'loadndisdriver' [ 47.856250] usbcore: registered new interface driver ndiswrapper [ 47.861772] i915 0000:00:02.0: irq 47 for MSI/MSI-X [ 47.861781] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 47.861783] [drm] Driver supports precise vblank timestamp query. [ 47.861842] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem [ 47.980620] fixme: max PWM is zero. [ 48.286153] fbcon: inteldrmfb (fb0) is primary device [ 48.287033] Console: switching to colour frame buffer device 170x48 [ 48.287062] fb0: inteldrmfb frame buffer device [ 48.287064] drm: registered panic notifier [ 48.333883] acpi device:02: registered as cooling_device2 [ 48.334053] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 48.334128] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 48.334203] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 48.334644] HDA Intel 0000:00:1b.0: power state changed by ACPI to D0 [ 48.334652] HDA Intel 0000:00:1b.0: power state changed by ACPI to D0 [ 48.334673] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [ 48.334737] HDA Intel 0000:00:1b.0: irq 48 for MSI/MSI-X [ 48.334772] HDA Intel 0000:00:1b.0: setting latency timer to 64 [ 48.356107] Adding 261116k swap on /host/ubuntu/disks/swap.disk. 
Priority:-1 extents:1 across:261116k [ 48.380946] hda_codec: ALC268: BIOS auto-probing. [ 48.390242] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6 [ 48.390365] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 48.490870] EXT4-fs (loop0): re-mounted. Opts: errors=remount-ro,user_xattr [ 48.917990] ppdev: user-space parallel port driver [ 48.950729] type=1400 audit(1319239896.675:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=941 comm="apparmor_parser" [ 48.951114] type=1400 audit(1319239896.675:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=941 comm="apparmor_parser" [ 48.977706] Synaptics Touchpad, model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa44000/0xa0000 [ 49.048871] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input8 [ 49.078713] sky2 0000:02:00.0: eth0: enabling interface [ 49.079462] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 50.762266] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx [ 50.762702] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 54.751478] type=1400 audit(1319239902.475:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm-guest-session-wrapper" pid=1039 comm="apparmor_parser" [ 54.755907] type=1400 audit(1319239902.479:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1040 comm="apparmor_parser" [ 54.756237] type=1400 audit(1319239902.483:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1040 comm="apparmor_parser" [ 54.756417] type=1400 audit(1319239902.483:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1040 comm="apparmor_parser" [ 54.764825] type=1400 audit(1319239902.491:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=1041 comm="apparmor_parser" [ 54.768365] type=1400 audit(1319239902.495:12): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince-previewer" pid=1041 comm="apparmor_parser" [ 54.770601] type=1400 audit(1319239902.495:13): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince-thumbnailer" pid=1041 comm="apparmor_parser" [ 54.770729] type=1400 audit(1319239902.495:14): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=1038 comm="apparmor_parser" [ 54.775181] type=1400 audit(1319239902.499:15): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1043 comm="apparmor_parser" [ 54.775533] type=1400 audit(1319239902.499:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1043 comm="apparmor_parser" [ 54.936691] init: failsafe main process (891) killed by TERM signal [ 54.944583] init: apport pre-start process (1096) terminated with status 1 [ 55.000373] init: apport post-stop process (1160) terminated with status 1 [ 55.005291] init: gdm main process (1159) killed by TERM signal [ 59.782579] EXT4-fs (loop0): re-mounted. 
Opts: errors=remount-ro,user_xattr,commit=0 [ 60.992021] eth0: no IPv6 routers present [ 61.936072] device eth0 entered promiscuous mode [ 62.053949] Bluetooth: Core ver 2.16 [ 62.054005] NET: Registered protocol family 31 [ 62.054007] Bluetooth: HCI device and connection manager initialized [ 62.054010] Bluetooth: HCI socket layer initialized [ 62.054012] Bluetooth: L2CAP socket layer initialized [ 62.054993] Bluetooth: SCO socket layer initialized [ 62.058750] Bluetooth: RFCOMM TTY layer initialized [ 62.058758] Bluetooth: RFCOMM socket layer initialized [ 62.058760] Bluetooth: RFCOMM ver 1.11 [ 62.059428] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 62.059432] Bluetooth: BNEP filters: protocol multicast [ 62.460389] init: plymouth-stop pre-start process (1662) terminated with status 1
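    One detail stands out in this log: ndiswrapper rejects the rt2860 Windows driver because a 32-bit .sys cannot be wrapped on a 64-bit kernel. A sketch of a way forward, assuming a kernel that ships the native Ralink staging module (the module name varies by kernel version, and the driver path below is a placeholder):

        # prefer the in-kernel driver if it exists
        modinfo rt2860sta && sudo modprobe rt2860sta
        # otherwise replace the 32-bit Windows driver with a 64-bit build
        sudo ndiswrapper -r rt2860
        sudo ndiswrapper -i /path/to/64bit/rt2860.inf
        sudo modprobe ndiswrapper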

    Read the article

  • "PGError: no connection to the server" after idle

    - by user573484
    After my application sits idle overnight, when I try to access it in the morning I get a 500 Internal server error and the logs indicate "PGError: no connection to the server". After this first request if I refresh the page again everything is fine. I'm running Ubuntu 10.04 with apache 2, Passenger 3.0.2, Rails 2.3.8, and Postgres 8.4 on a remote server. Any ideas how to fix this? Here is the log: Processing ApplicationController#index (for 192.168.1.33 at 2011-01-06 17:28:14) [GET] Parameters: {"action"=>"index", "controller"=>"da"} ActiveRecord::StatementInvalid (PGError: no connection to the server : SELECT * FROM "users" WHERE ("users"."id" = 1) LIMIT 1): app/controllers/application_controller.rb:47:in `current_user' app/controllers/application_controller.rb:51:in `set_current_user' app/controllers/application_controller.rb:123:in `render_optional_error_file' passenger (3.0.2) lib/phusion_passenger/rack/request_handler.rb:96:in `process_request' passenger (3.0.2) lib/phusion_passenger/abstract_request_handler.rb:513:in `accept_and_process_next_request' passenger (3.0.2) lib/phusion_passenger/abstract_request_handler.rb:274:in `main_loop' passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:321:in `start_request_handler' passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `send' passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `handle_spawn_application' passenger (3.0.2) lib/phusion_passenger/utils.rb:479:in `safe_fork' passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:270:in `handle_spawn_application' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `__send__' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:180:in `start' passenger (3.0.2) lib/phusion_passenger/classic_rails/application_spawner.rb:149:in `start' passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:219:in `spawn_rails_application' passenger (3.0.2) lib/phusion_passenger/abstract_server_collection.rb:132:in `lookup_or_add' passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:214:in `spawn_rails_application' passenger (3.0.2) lib/phusion_passenger/abstract_server_collection.rb:82:in `synchronize' passenger (3.0.2) lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize' passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:213:in `spawn_rails_application' passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:132:in `spawn_application' passenger (3.0.2) lib/phusion_passenger/spawn_manager.rb:275:in `handle_spawn_application' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `__send__' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop' passenger (3.0.2) lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously' passenger (3.0.2) helper-scripts/passenger-spawn-server:99 /!\ FAILSAFE /!\ Thu Jan 06 17:28:14 -0700 2011 Status: 500 Internal Server Error PGError: no connection to the server : SELECT * FROM "users" WHERE ("users"."id" = 1) LIMIT 1 /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/postgresql_adapter.rb:520:in `execute' 
/var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1002:in `select_raw' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/postgresql_adapter.rb:989:in `select' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:664:in `find_by_sql' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1578:in `find_every' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1535:in `find_initial' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:616:in `find' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1910:in `find_by_id' /home/user/application/releases/20110106230903/app/controllers/application_controller.rb:47:in `current_user' /home/user/application/releases/20110106230903/app/controllers/application_controller.rb:51:in `set_current_user' /home/user/application/releases/20110106230903/app/controllers/application_controller.rb:123:in `render_optional_error_file' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:97:in `rescue_action_in_public' /home/user/application/releases/20110106230903/vendor/plugins/exception_notification/lib/exception_notification/notifiable.rb:48:in `rescue_action_in_public' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:154:in `rescue_action_without_handler' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:74:in `rescue_action' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/base.rb:532:in `send' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/base.rb:532:in `process_without_filters' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/filters.rb:606:in `process' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/rescue.rb:65:in `call_with_exception' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:90:in `dispatch' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:121:in `_call' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:130 /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:29:in `call' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:29:in `call' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:9:in `cache' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/query_cache.rb:28:in `call' /var/lib/gems/1.8/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/string_coercion.rb:25:in `call' /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/head.rb:9:in `call' /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' 
/var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/params_parser.rb:15:in `call' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/session/cookie_store.rb:99:in `call' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/failsafe.rb:26:in `call' /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/lock.rb:11:in `call' /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/lock.rb:11:in `synchronize' /var/lib/gems/1.8/gems/rack-1.1.0/lib/rack/lock.rb:11:in `call' /var/lib/gems/1.8/gems/actionpack-2.3.8/lib/action_controller/dispatcher.rb:106:in `call' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/rack/request_handler.rb:96:in `process_request' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_request_handler.rb:513:in `accept_and_process_next_request' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_request_handler.rb:274:in `main_loop' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:321:in `start_request_handler' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `send' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:275:in `handle_spawn_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/utils.rb:479:in `safe_fork' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:270:in `handle_spawn_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `__send__' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:180:in `start' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/classic_rails/application_spawner.rb:149:in `start' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:219:in `spawn_rails_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb:132:in `lookup_or_add' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:214:in `spawn_rails_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb:82:in `synchronize' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server_collection.rb:79:in `synchronize' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:213:in `spawn_rails_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:132:in `spawn_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/spawn_manager.rb:275:in `handle_spawn_application' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `__send__' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:357:in `server_main_loop' /var/lib/gems/1.8/gems/passenger-3.0.2/lib/phusion_passenger/abstract_server.rb:206:in `start_synchronously' /var/lib/gems/1.8/gems/passenger-3.0.2/helper-scripts/passenger-spawn-server:99
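    Not a definitive fix, but the symptom (one failed request after a long idle period, fine on refresh) matches a stale pooled connection, and the mitigation most often suggested under Passenger is to re-establish the ActiveRecord connection whenever a worker process is forked. A minimal sketch, in a hypothetical config/initializers/passenger.rb:

        # Passenger's documented smart-spawning hook: reconnect each forked
        # worker instead of letting it inherit the parent's (possibly dead)
        # PostgreSQL socket
        if defined?(PhusionPassenger)
          PhusionPassenger.on_event(:starting_worker_process) do |forked|
            ActiveRecord::Base.establish_connection if forked
          end
        end

    If the drop really is an idle timeout between the app host and the remote Postgres, TCP keepalives on the database side (tcp_keepalives_idle in postgresql.conf) are another lever worth trying.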

    Read the article

  • 500 internal server error on a certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder that is symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder. My ers_portal_nginx.conf: server { listen 80; server_name www.mydomain.com; location / { try_files $uri @app; } location @app { include uwsgi_params; uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock; } } My ers_portal_uwsgi.ini: [uwsgi] #user info uid = metheuser gid = ers_group #application's base folder base = /home/metheuser/webapps/ers_portal #python module to import app = run_web module = %(app) home = %(base)/ers_portal_venv pythonpath = %(base) #socket file's location socket = /home/metheuser/webapps/ers_portal/%n.sock #permissions for the socket file chmod-socket = 666 #uwsgi variable only, does not relate to your flask application callable = app #location of log files logto = /home/metheuser/webapps/ers_portal/logs/%n.log Relevant parts of my views.py data_modification_time = None data = None def reload_data(): global data_modification_time, data, sites, column_names filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename mtime = os.stat(filename).st_mtime if data_modification_time != mtime: data_modification_time = mtime with open(filename) as f: data = pickle.load(f) return data @a bunch of authentication stuff... @app.route('/') @app.route('/index') def index(): return render_template("index.html", title = 'Main',) @app.route('/login', methods = ['GET', 'POST']) def login(): login stuff... @app.route('/my_table') @login_required def my_table(): print 'trying to access data table...' data = reload_data() return render_template("my_table.html", title = "Rundata Viewer", sts = sites, cn = column_names, data = data) # dictionary of data I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows: Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1) IOError: write error [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0) When it's working, the print statement in the views "my_table" route prints into the log file. But not once it stops working. Any ideas?
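    Not a known fix, but a debugging sketch: extending the ini so uWSGI recycles and reports misbehaving workers often turns an opaque 500 into a readable traceback. The option names below are standard uWSGI; the values are guesses to tune:

        # kill and respawn any worker stuck for more than 60s, and say why
        harakiri = 60
        harakiri-verbose = true
        # recycle each worker after 1000 requests to contain slow leaks
        max-requests = 1000
        # make sure 5xx responses show up in the log even if logging is trimmed
        log-5xx = true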

    Read the article

  • Installing VLC on CentOS 6.2

    - by suraj
    I'm using CentOS 6.2, and I tried to install VLC Player using yum, but it shows "No package vlc available". I tried below command: [root@localhost ~]# yum install vlc Loaded plugins: fastestmirror, refresh-packagekit Loading mirror speeds from cached hostfile * base: ftp.iitm.ac.in * extras: ftp.iitm.ac.in * updates: ftp.iitm.ac.in base | 3.7 kB 00:00 extras | 3.5 kB 00:00 updates | 3.5 kB 00:00 updates/primary_db | 3.4 MB 01:17 virtualbox | 951 B 00:00 Setting up Install Process No package vlc available. Error: Nothing to do Is there any rpm package available?
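    For context: VLC is not in the stock CentOS 6 repositories (base/extras/updates), so yum genuinely has nothing to install; a third-party repository that ships it has to be added first. A sketch with a placeholder repo definition: the baseurl must point at a repo that actually carries vlc for EL6 (RepoForge and linuxtech were common choices at the time):

        # write a minimal repo definition (the baseurl is a placeholder!)
        printf '%s\n' '[vlc-thirdparty]' \
            'name=Third-party packages shipping VLC for EL6' \
            'baseurl=http://example.com/el6/$basearch/' \
            'enabled=1' 'gpgcheck=0' > /etc/yum.repos.d/vlc-thirdparty.repo
        yum install vlc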

    Read the article

  • 50um vs. 62.5um fiber compatibility

    - by murisonc
    I've heard that there are compatibility problems when using 50um fiber with some fiber converters. After some research I'm thinking this is a legacy issue with slower devices (100BASE-FX) that used LEDs. I was told that the fiber converters are made for a certain fiber core size and won't work with 50um fiber. Am I right in thinking this is just outdated corporate knowledge when using 1000BASE-SX converters (which should be using lasers instead of LEDs)?

    Read the article

  • Detecting upload success/failure in a scripted command-line SFTP session?

    - by Will Martin
    I am writing a BASH shell script to upload all the files in a directory to a remote server and then delete them. It'll run every few hours via a CRON job. My complete script is below. The basic problem is that the part that's supposed to figure out whether the file uploaded successfully or not doesn't work. The SFTP command's exit status is always "0" regardless of whether the upload actually succeeded or not. How can I figure out whether a file uploaded correctly or not so that I can know whether to delete it or let it be? #!/bin/bash # First, save the folder path containing the files. FILES=/home/bob/theses/* # Initialize a blank variable to hold messages. MESSAGES="" ERRORS="" # These are for notifications of file totals. COUNT=0 ERRORCOUNT=0 # Loop through the files. for f in $FILES do # Get the base filename BASE=`basename $f` # Build the SFTP command. Note space in folder name. CMD='cd "Destination Folder"\n' CMD="${CMD}put ${f}\nquit\n" # Execute it. echo -e $CMD | sftp -oIdentityFile /home/bob/.ssh/id_rsa [email protected] # On success, make a note, then delete the local copy of the file. if [ $? == "0" ]; then MESSAGES="${MESSAGES}\tNew file: ${BASE}\n" (( COUNT=$COUNT+1 )) # Next line commented out for ease of testing #rm $f fi # On failure, add an error message. if [ $? != "0" ]; then ERRORS="${ERRORS}\tFailed to upload file ${BASE}\n" (( ERRORCOUNT=$ERRORCOUNT+1 )) fi done SUBJECT="New Theses" BODY="There were ${COUNT} files and ${ERRORCOUNT} errors in the latest batch.\n\n" if [ "$MESSAGES" != "" ]; then BODY="${BODY}New files:\n\n${MESSAGES}\n\n" fi if [ "$ERRORS" != "" ]; then BODY="${BODY}Problem files:\n\n${ERRORS}" fi # Send a notification. echo -e $BODY | mail -s $SUBJECT [email protected] Due to some operational considerations that make my head hurt, I cannot use SCP. The remote server is using WinSSHD on windows, and does not have EXEC privileges, so any SCP commands fail with the message "Exec request failed on channel 0". The uploading therefore has to be done via the interactive SFTP command.
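    The root cause here is likely twofold: commands piped into an interactive sftp session do not propagate command failures to sftp's exit status, and the second test reads $? after the first if statement has already overwritten it. A sketch of the loop body using OpenSSH's batch mode, which does abort with a non-zero exit status when a put fails:

        # -b - reads the batch from stdin; in batch mode a failed put aborts sftp
        printf 'cd "Destination Folder"\nput %s\nquit\n' "$f" | \
            sftp -b - -oIdentityFile /home/bob/.ssh/id_rsa [email protected]
        rc=$?  # capture immediately, before anything else clobbers $?
        if [ "$rc" -eq 0 ]; then
            MESSAGES="${MESSAGES}\tNew file: ${BASE}\n"
            (( COUNT=COUNT+1 ))
            # rm "$f"
        else
            ERRORS="${ERRORS}\tFailed to upload file ${BASE}\n"
            (( ERRORCOUNT=ERRORCOUNT+1 ))
        fi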

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never Why is it that I can connect to the same server using one version of JRE while I cannot with another ?
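    One suspect worth naming, given that only the JRE version changed: the very restrictive TLSCipherSuite in slapd.conf. Debian's slapd is built against GnuTLS, which aborts the handshake when the client hello offers nothing matching the configured priority string, and JSSE's default hello and cipher behaviour shifted between 1.6.0_14 and 1.6.0_17. A hedged experiment is to relax the priority string, retest, and only then tighten it again:

        # in slapd.conf: NORMAL is GnuTLS's stock priority string
        TLSCipherSuite NORMAL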

    Read the article

  • Squeeze and no sound

    - by user114
    Hello, I installed Debian Squeeze on my PC but I don't have any sound. My sound device is an AC'97, and I installed these packages on my system: alsa-base, alsa-oss, alsa-tools, alsa-utils, alsamixergui, gstreamer0.10-alsa, libao-common, libao4, libasound2, linux-sound-base, oss-compat. When I try to run alsamixer I get the message "cannot open mixer: No such file or directory". Does anyone know why I get this message, and why I can't hear any sound on my PC?
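    "cannot open mixer: No such file or directory" usually means no ALSA driver has claimed the card, not that a package is missing. A few checks worth running, assuming an Intel AC'97 chipset (the module name differs for other vendors):

        # is any ALSA driver loaded, and does a card exist?
        lsmod | grep '^snd'
        cat /proc/asound/cards
        # try loading the Intel AC'97 driver explicitly
        modprobe snd-intel8x0
        # then unmute and raise the Master/PCM channels
        alsamixer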

    Read the article

  • How to fix yum install not valid release error on CentOS 5

    - by Tomaszs
    When I try to yum install anything I get: -bash-3.2# yum install strace Loading "fastestmirror" plugin Determining fastest mirrors * dag: apt.sw.be * lxlabsupdate: download.lxlabs.com * rpmforge: fr2.rpmfind.net * lxlabslxupdate: download.lxlabs.com YumRepo Warning: not using ftp, http[s], or file for repos, skipping - 5.2 is not a valid release or hasnt been released yet removing mirrorlist with no valid mirrors: //var/cache/yum/base/mirrorlist.txt Error: Cannot find a valid baseurl for repo: base How do I fix it?
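    That message usually means the mirrorlist no longer knows the release recorded on the system: 5.2 is end-of-life and its trees were moved off the live mirrors. A sketch of the usual workaround (the vault path is assumed; verify it exists for your architecture):

        # stop asking the live mirrorlist for a retired release
        sed -i 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/CentOS-Base.repo
        # then add under [base] (and the other sections):
        #   baseurl=http://vault.centos.org/5.2/os/$basearch/
        # the cleaner long-term fix is updating to a supported point release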

    Read the article

  • How does Azureus get my firewall to open a port (Debian Linux)?

    - by Norman Ramsey
    I downloaded Azureus (a BitTorrent client) for Debian Linux, and I noticed that Azureus got my firewall (a Verizon wireless base station) to set up TCP and UDP port forwarding for it, without my having to do anything. My base station is password protected, and I'm alarmed at the idea that any random application can open ports without my knowing about it. Can somebody explain to me what is going on, and how it is possible that Azureus can create this port-forwarding rule without any authentication?
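    Almost certainly this is UPnP (or NAT-PMP): most consumer routers accept port-mapping requests from any device on the LAN with no authentication at all, and Azureus requests mappings by default. You can inspect and remove such mappings with miniupnpc, assuming the base station exposes UPnP IGD (the port number below is an example):

        upnpc -l             # list current UPnP port mappings
        upnpc -d 6881 TCP    # delete one mapping by port and protocol
        # the durable fix is disabling UPnP on the router itself, or turning
        # off the UPnP plugin in Azureus' options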

    Read the article

  • Package problems after upgrade

    - by Dan
    I installed an upgrade on Ubuntu which seemed to fail near the end. Now I'm being told that an error has occurred and to please run apt-get to see what's wrong. After some further tries with that I eventually gave up. It seems there's a leftover LaTeX package(?) somewhere and I can't seem to get rid of or fix it. Here's an example: blank@ubuntu:~$ sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: tex-common : Breaks: texlive-common (< 2010) but 2009-15 is installed texlive-base : Depends: texlive-binaries (>= 2012.20120516) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-doc-base : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-extra-utils : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed texlive-font-utils : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-generic-recommended : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-base : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-base-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-extra : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-extra-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-recommended : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-recommended-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-luatex : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pictures : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pictures-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pstricks : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pstricks-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed zlib1g : Breaks: texlive-binaries (< 2009-12) but 2009-11ubuntu2 is installed zlib1g:i386 : Breaks: texlive-binaries (< 2009-12) but 2009-11ubuntu2 is installed E: Unmet dependencies. Try using -f. I've seen similar errors on the site here, but not close enough that I could get it fixed. Any help would be great, thanks!
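    A sequence that often clears this state, offered as a sketch rather than a guaranteed fix: let apt finish the half-configured packages, then purge the stale 2009-era texlive core that blocks the 2012 versions (package names taken from the error output above):

        sudo apt-get -f install
        # if the conflict persists, purge the old core and reinstall
        sudo apt-get remove --purge texlive-common texlive-binaries
        sudo apt-get autoremove
        sudo apt-get install texlive-latex-base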

    Read the article

  • Why can't I find mod_dav_svn 1.6 in rpmforge?

    - by Vincenzo
    This is what I'm doing: # yum --enablerepo=rpmforge list mod_dav_svn Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * addons: mirrors.adams.net * base: mirror.sanctuaryhost.com * extras: mirror.sanctuaryhost.com * rpmforge: fr2.rpmfind.net * updates: mirror.steadfast.net Available Packages mod_dav_svn.x86_64 1.4.2-4.el5_3.1 base Why 1.4.2? Where is 1.6+?
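    The 1.4.2 build listed comes from the CentOS base repo (note the "base" column in the output), not from RepoForge: RepoForge kept newer Subversion builds out of its main channel so they would not silently replace base packages. A check worth trying, assuming the rpmforge-extras channel is configured on the machine:

        yum --enablerepo=rpmforge-extras --showduplicates list subversion mod_dav_svn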

    Read the article

  • Installing ArchLinux into Ubuntu 12.04 root

    - by Johnny
    Is it possible to install two Linux distros into one root, so they share the same UUID and GUID, configs and packages, plus the same user /home folder? For example: I have Ubuntu and Windows 7 already dual-booting on my laptop. Could I install Arch's base, base-devel and kernel so they won't conflict with Ubuntu in the same root folder? P.S. I don't feel like repartitioning my drive again, because there's a very complicated hierarchy which occupies the entire disk. =)

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >