Search Results

Search found 3086 results on 124 pages for 'dependency'.


  • Why should I prefer OSGi Services over exported packages?

    - by Jens
    Hi, I am trying to get my head around OSGi Services. The main question I keep asking myself is: what's the benefit of using services instead of working with bundles and their exported packages?

    As far as I know, the concept of late binding has something to do with it. Bundle dependencies are wired together at bundle start, so they are pretty fixed, I guess. But with services it seems to be almost the same: a bundle starts and registers services or binds to services. Of course services can come and go whenever they want, and you have to keep track of these changes. But the core idea doesn't seem that different to me.

    Another aspect is that services seem more flexible: there could be many implementations for one specific interface. On the other hand, there can be a lot of different implementations for a specific exported package too.

    In another text I read that the disadvantage of using exported packages is that they make the application more fragile than services. The author wrote that if you remove one bundle from the dependency graph, other dependencies would no longer be met, possibly causing a domino effect on the whole graph. But couldn't the same happen if a service went offline? To me it looks like service dependencies are no better than bundle dependencies.

    So far I could not find a blog post, book or presentation that clearly describes why services are better than just exposing functionality by exporting and importing packages. To sum my questions up: what are the key benefits of using OSGi Services that make them superior to exporting and importing packages?
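
    For contrast, here is a minimal sketch of the service side of the story, using only the core OSGi API and a hypothetical GreetingService interface (not from the question). The point it illustrates: the consumer asks the service registry for an implementation of an interface at runtime, so it never wires itself to the concrete bundle that happens to provide it.

      import org.osgi.framework.BundleActivator;
      import org.osgi.framework.BundleContext;
      import org.osgi.framework.ServiceReference;

      // Hypothetical service interface; in a real system this would live in a shared, exported API package.
      interface GreetingService {
          String greet(String name);
      }

      public class Activator implements BundleActivator {
          public void start(BundleContext context) {
              // Publish an implementation under the interface name. Consumers never
              // see the implementing class or the bundle that registered it.
              context.registerService(GreetingService.class.getName(),
                      new GreetingService() {
                          public String greet(String name) { return "Hello, " + name; }
                      }, null);

              // Consume whatever implementation happens to be registered right now.
              ServiceReference ref = context.getServiceReference(GreetingService.class.getName());
              if (ref != null) { // a service may legitimately be absent
                  GreetingService svc = (GreetingService) context.getService(ref);
                  System.out.println(svc.greet("OSGi"));
                  context.ungetService(ref);
              }
          }

          public void stop(BundleContext context) {
              // Services registered by this bundle are unregistered automatically.
          }
      }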

    Read the article

  • OcaIDE doesn't see JoCaml tools

    - by Surikator
    I'm having a problem while using OcaIDE in ocamlbuild mode, trying to compile my own JoCaml sources. According to the JoCaml manual (bottom of page), to use ocamlbuild with JoCaml I just need to add the -use-jocaml argument to ocamlbuild. Indeed, if I go to the root of my project and write

      ocamlbuild -use-jocaml foo.native

    it generates my executable just fine. However, in OcaIDE I get

      /bin/sh: jocamldep: command not found

    In OcaIDE, the -use-jocaml flag is passed in the "Other Flags" box (in Project Properties). And that certainly is working, as the complaint is precisely that it doesn't find the JoCaml tools. The puzzling thing is that JoCaml is installed and can be accessed from any random terminal window. For example, running jocamldep -modules foo.ml > foo.ml.depends on my project does generate the desired dependency file.

    So it would seem I have to configure OcaIDE and tell it where the JoCaml executables are, or something similar. This can be done for OCaml, for example, but there is no place to do it for JoCaml. And it's really strange that, if jocamldep/jocamlc/etc. are all accessible from anywhere, OcaIDE wouldn't be able to pick them up. Any ideas?

    (I am aware I can write an ocamlbuild plugin and pass the flag in a "myocamlbuild.ml" file. I'll probably do that at a later stage, after I get familiar with ocamlbuild plugins. But here the question is about OcaIDE. EDIT: Actually, ocamlbuild plugins don't seem to be a solution: although there is a -use-jocaml option in ocamlbuild to enforce JoCaml use (and it works fine), the plugin system doesn't support it, i.e. jocaml is not in the list of options.)

    Read the article

  • Use `require()` with `node --eval`

    - by rentzsch
    When utilizing node.js's newish support for --eval, I get an error (ReferenceError: require is not defined) when I attempt to use require(). Here's an example of the failure:

      $ node --eval 'require("http");'
      undefined:1
      ^
      ReferenceError: require is not defined
          at eval at <anonymous> (node.js:762:36)
          at eval (native)
          at node.js:762:36
      $

    Here's a working example of using require() typed into the REPL:

      $ node
      > require("http");
      { STATUS_CODES:
        { '100': 'Continue'
        , '101': 'Switching Protocols'
        , '102': 'Processing'
        , '200': 'OK'
        , '201': 'Created'
        , '202': 'Accepted'
        , '203': 'Non-Authoritative Information'
        , '204': 'No Content'
        , '205': 'Reset Content'
        , '206': 'Partial Content'
        , '207': 'Multi-Status'
        , '300': 'Multiple Choices'
        , '301': 'Moved Permanently'
        , '302': 'Moved Temporarily'
        , '303': 'See Other'
        , '304': 'Not Modified'
        , '305': 'Use Proxy'
        , '307': 'Temporary Redirect'
        , '400': 'Bad Request'
        , '401': 'Unauthorized'
        , '402': 'Payment Required'
        , '403': 'Forbidden'
        , '404': 'Not Found'
        , '405': 'Method Not Allowed'
        , '406': 'Not Acceptable'
        , '407': 'Proxy Authentication Required'
        , '408': 'Request Time-out'
        , '409': 'Conflict'
        , '410': 'Gone'
        , '411': 'Length Required'
        , '412': 'Precondition Failed'
        , '413': 'Request Entity Too Large'
        , '414': 'Request-URI Too Large'
        , '415': 'Unsupported Media Type'
        , '416': 'Requested Range Not Satisfiable'
        , '417': 'Expectation Failed'
        , '418': 'I\'m a teapot'
        , '422': 'Unprocessable Entity'
        , '423': 'Locked'
        , '424': 'Failed Dependency'
        , '425': 'Unordered Collection'
        , '426': 'Upgrade Required'
        , '500': 'Internal Server Error'
        , '501': 'Not Implemented'
        , '502': 'Bad Gateway'
        , '503': 'Service Unavailable'
        , '504': 'Gateway Time-out'
        , '505': 'HTTP Version not supported'
        , '506': 'Variant Also Negotiates'
        , '507': 'Insufficient Storage'
        , '509': 'Bandwidth Limit Exceeded'
        , '510': 'Not Extended'
        }
      , IncomingMessage: { [Function: IncomingMessage] super_: [Function: EventEmitter] }
      , OutgoingMessage: { [Function: OutgoingMessage] super_: [Function: EventEmitter] }
      , ServerResponse: { [Function: ServerResponse] super_: [Circular] }
      , ClientRequest: { [Function: ClientRequest] super_: [Circular] }
      , Server: { [Function: Server] super_: { [Function: Server] super_: [Function: EventEmitter] } }
      , createServer: [Function]
      , Client: { [Function: Client] super_: { [Function: Stream] super_: [Function: EventEmitter] } }
      , createClient: [Function]
      , cat: [Function]
      }
      >

    Is there a way to use require() with node's --eval? I'm on node 0.2.6 on Mac OS X 10.6.5.

    Read the article

  • Why is a function's length information from another shared lib stored in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. By "change" I mean absolute binary identity (the checksum over the file is compared; a different checksum means a different version, according to the policy). (I should mention that the whole project is always built at once, whether or not any code has changed in a given library.)

    Usually this can be achieved by hiding private parts of the included header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager) changed.

    TableManager.h:

      class TableManager
      {
      public:
          TableManager();
          ~TableManager();
      private:
          int* myPtr;
      };

    TableManager.cpp:

      TableManager::~TableManager()
      {
          doSomeCleanup();
          delete myPtr; // this delete has been added
      }

    By inspecting libB.so with readelf --all libB.so and looking at the .dynsym section, it turned out that the lengths of all functions, even the dynamically used ones from other libraries, are stored in libB! It looks like this (the length is the 668 in the 3rd column):

      527: 00000000   668 FUNC    GLOBAL DEFAULT  UND _ZN12TableManagerD1Ev

    So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking libB.so (some kind of "stripping")? We would really like to reduce this degree of dependency...

    Read the article

  • How to send java.util.logging to log4j?

    - by matt b
    I have an existing application which does all of its logging against log4j. We use a number of other libraries that either also use log4j, or log against Commons Logging, which ends up using log4j under the covers in our environment. One of our dependencies even logs against slf4j, which also works fine since it eventually delegates to log4j as well.

    Now, I'd like to add ehcache to this application for some caching needs. Previous versions of ehcache used commons-logging, which would have worked perfectly in this scenario, but as of version 1.6-beta1 they have removed the dependency on commons-logging and replaced it with java.util.logging instead.

    Not really being familiar with the built-in JDK logging available with java.util.logging, is there an easy way to have any log messages sent to JUL logged against log4j, so I can use my existing configuration and setup for any logging coming from ehcache?

    Looking at the javadocs for JUL, it looks like I could set a system property to change which LogManager implementation is used, and perhaps use that to wrap log4j Loggers in the JUL Logger class. Is this the correct approach?

    Kind of ironic that a library's use of built-in JDK logging would cause such a headache when (most of) the rest of the world is using 3rd party libraries instead.
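
    For reference, one common shape for such a bridge, sketched here rather than taken from any particular library, is a custom java.util.logging.Handler installed on the JUL root logger that forwards each LogRecord to log4j (SLF4J's jul-to-slf4j module does essentially this for slf4j). The level mapping below is deliberately simplified:

      import java.util.logging.Handler;
      import java.util.logging.Level;
      import java.util.logging.LogRecord;
      import org.apache.log4j.Logger;

      // A bare-bones JUL-to-log4j bridge; a sketch, not production code.
      public class JulToLog4jHandler extends Handler {
          @Override
          public void publish(LogRecord record) {
              // Route to a log4j logger with the same name as the JUL logger.
              Logger log = Logger.getLogger(record.getLoggerName());
              int julLevel = record.getLevel().intValue();
              if (julLevel >= Level.SEVERE.intValue()) {
                  log.error(record.getMessage(), record.getThrown());
              } else if (julLevel >= Level.WARNING.intValue()) {
                  log.warn(record.getMessage(), record.getThrown());
              } else if (julLevel >= Level.INFO.intValue()) {
                  log.info(record.getMessage(), record.getThrown());
              } else {
                  log.debug(record.getMessage(), record.getThrown());
              }
          }

          @Override public void flush() { }
          @Override public void close() { }

          // Installation, e.g. at application startup: replace the root logger's
          // handlers so everything ehcache sends to JUL lands in log4j.
          public static void install() {
              java.util.logging.Logger root =
                  java.util.logging.LogManager.getLogManager().getLogger("");
              for (Handler h : root.getHandlers()) {
                  root.removeHandler(h);
              }
              root.addHandler(new JulToLog4jHandler());
          }
      }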

    Read the article

  • Satisfying indirect references at runtime

    - by automatic
    I'm using C# and VS2010. I have a dll that I reference in my project (as a dll reference, not a project reference). That dll (a.dll) references another dll that my project doesn't directly use; let's call it b.dll. None of these are in the GAC. My project compiles fine, but when I run it I get an exception that b.dll can't be found: it's not being copied to the bin directory when my project is compiled. What is the best way to get b.dll into the bin directory so that it can be found at run time? I've thought of four options:

      1. Use a post-compile step to copy b.dll to the bin directory
      2. Add b.dll to my project (as a file) and specify "copy to output directory if newer" (see the sketch below)
      3. Add b.dll as a dll reference to my project
      4. Use ILMerge to combine b.dll with a.dll

    I don't like 3 at all because it makes b.dll visible to my project; the other two seem like hacks. Am I missing other solutions? Which is the "right" way? Would a dependency injection framework be able to resolve and load b.dll?
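
    For what it's worth, option 2 can be wired up directly in the .csproj rather than through the IDE. A hedged sketch, where the libs\ folder is a hypothetical location for the checked-in binary:

      <!-- Ship b.dll to bin\ alongside the build output. -->
      <ItemGroup>
        <None Include="libs\b.dll">
          <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
        </None>
      </ItemGroup>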

    Read the article

  • Running a Java program linking to a third-party library (java -jar) issue (multiple methods tried)

    - by bamachrn
    This issue is about running a Java program (jar) that depends on a third-party jar library, even after setting the classpath and trying many other methods found in articles on the Internet. I want to use a third-party Pack1.jar (it is not a part of the JVM) as a dependency of my programme. I do not know where the Pack1.jar file will be on the deployment machine, and I want the deployer to specify the path to the third-party libraries. I have tried the following alternatives in vain:

      1. Setting java.class.path programmatically, assuming the deployer supplies the classpath as the first argument while running the program:

           String class_path = args[0];
           System.setProperty("java.class.path", class_path);

      2. Setting the CLASSPATH environment variable to locate the third-party directory.

      3. While running, using the classpath option (see the note after this list):

           java -classpath /path/to/Pack1.jar -jar Pack2.jar

         I think this does not work because the documentation says the classpath is ignored when a program is run with "java -jar".

      4. Setting java.ext.dirs programmatically.

      5. Setting java.library.path programmatically.

    I do not want to specify the Class-Path in the manifest because that takes only relative paths, and I do not know where the third-party library will be kept on the deployment machine. But I am unable to get the jar running. How can I fix this problem? Any help appreciated.
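
    The behavior behind alternative 3 is indeed documented: with "java -jar", the class path comes exclusively from the jar's manifest, and both -classpath and CLASSPATH are ignored. One hedged way around it is to drop -jar and name the entry point explicitly, letting the deployer supply the location (com.example.Main stands in for the program's actual main class):

      # Unix path separator shown; on Windows use ';' instead of ':'
      java -cp /path/to/Pack1.jar:Pack2.jar com.example.Main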

    Read the article

  • Does RazorEngine require MVC3 to be installed?

    - by tkha007
    I am working on a web project that uses MVC2. I decided to try out RazorEngine to do some e-mail templating. This appeared to work fine when I was prototyping using an MVC2 project, so I assumed that RazorEngine would work fine for my e-mail templating solution. What I had forgotten at the time was that I actually had MVC3 installed on my local development machine. After deploying the project on a pre-test server, I get the following error in the logs when the application attempts to do anything with RazorEngine:

      Could not load file or assembly 'System.Web.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
      File name: 'System.Web.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
         at RazorEngine.Compilation.DefaultCompilerServiceFactory.CreateCompilerService(Language language)
         at RazorEngine.Templating.TemplateService.CreateTemplateType(String razorTemplate, Type modelType)
         at RazorEngine.Templating.TemplateService.CreateTemplate[T](String razorTemplate, T model)
         at RazorEngine.Templating.TemplateService.Parse[T](String razorTemplate, T model)
         at RazorEngine.Razor.Parse[T](String razorTemplate, T model)
         at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2)
         at Persistence.Utility.RazorEngineHelper.Parse(String templateName, Object model) in

    The fact that it can't find 'System.Web.Razor' means that this DLL does not exist on the deployed server. The only difference I can think of between the deployment server and my local dev machine is that the deployment server does not have MVC3 installed, but I may be mistaken, because the deployment server is not something I normally control and as such I don't have a lot of information about it. It is meant to host this particular application, so there have been previous deployments of this application to this server. This is, however, the first time I'm making a deployment with RazorEngine as a dependency.

    Read the article

  • Should I re-use UI elements across view controllers?

    - by Endemic
    In the iPhone app I'm currently working on, I'd like two navigation controllers (I'll call them A and B) to have toolbars that are identical in appearance and function. The toolbar in question will look like this:

      [(button) (flexible-space) (label)]

    For posterity's sake, the label is actually a UIBarButtonItem with a custom view. My design requires that A always appear directly before B on the navigation stack, so B will never be loaded without A having been loaded. Given this layout, I started wondering, "Is it worth it to re-use A's toolbar items in B's toolbar?" As I see it, my options are:

      1. Don't worry about re-use; create the toolbar items twice
      2. Create the toolbar items in A and pass them to B in a custom initializer
      3. Use some more obscure method that I haven't thought of to hold the toolbar constant when pushing a view controller

    As far as I can see, option 1 may violate DRY, but guarantees that there won't be any confusion on the off chance that (for example) the button may be required to perform two different (no matter how similar) functions for either view controller in future versions of the app. Were that to happen, options 2 or 3 would require the target-action of the button to change when B is loaded and unloaded. Even if the button were never required to perform different functions, I'm not sure what its proper target would be under option 2.

    All in all, it's not a huge problem, even if I have to go with option 1. I'm probably overthinking this anyway, trying to apply the dependency injection pattern where it's not appropriate. I just want to know the best practice should this situation arise in a more extreme form, like if a long chain of view controllers needs to use identical (in appearance and function) UI elements.

    Read the article

  • In Maven 2, is it possible to specify a mirror for everything, but allow for failover to direct repos?

    - by Justin Searls
    I understand that part of the appeal of setting up a Maven mirror, such as the following:

      <mirror>
        <id>nexus</id>
        <name>Maven Repository</name>
        <mirrorOf>*</mirrorOf>
        <url>http://server:8081/nexus/content/groups/public</url>
      </mirror>

    ...is that the documentation states, "You can force Maven to use a single repository by having it mirror all repository requests." However, is this also an indication that with a * mirror set up, each workstation *must* be forced to go through the mirror?

    I ask because I would like each workstation to fail over and connect directly to whatever public repositories it knows about in the event that Nexus can't resolve a dependency or plugin. (In a perfect world, each developer has the access necessary to add additional proxy repositories as needed. However, sometimes that access isn't available; sometimes the Nexus server goes down; sometimes it suffers a Java heap error.)

    Is this "mirror, but go ahead and connect directly to public repos" failover configuration possible in Maven 2? Will it be in Maven 3?
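
    As a partial measure, <mirrorOf> does accept patterns more selective than "*", so chosen repositories can stay reachable directly. This is a hedged sketch of the syntax (the excluded repository id is hypothetical), not a true outage failover, which the mirror mechanism by itself does not promise:

      <!-- settings.xml: mirror everything except repositories to reach directly -->
      <mirror>
        <id>nexus</id>
        <mirrorOf>*,!my-public-repo</mirrorOf>
        <url>http://server:8081/nexus/content/groups/public</url>
      </mirror>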

    Read the article

  • Using Moq to Validate Separate Invocations with Distinct Arguments

    - by Thermite
    I'm trying to validate the values of arguments passed to subsequent mocked method invocations (of the same method), but cannot figure out a valid approach. A generic example follows:

      public class Foo
      {
          [Dependency]
          public Bar SomeBar { get; set; }

          public void SomeMethod()
          {
              this.SomeBar.SomeOtherMethod("baz");
              this.SomeBar.SomeOtherMethod("bag");
          }
      }

      public class Bar
      {
          public void SomeOtherMethod(string input) { }
      }

      public class MoqTest
      {
          [TestMethod]
          public void RunTest()
          {
              Mock<Bar> mock = new Mock<Bar>();
              Foo f = new Foo();
              mock.Setup(m => m.SomeOtherMethod(It.Is<string>("baz")));
              mock.Setup(m => m.SomeOtherMethod(It.Is<string>("bag"))); // this of course overrides the first call
              f.SomeMethod();
              mock.VerifyAll();
          }
      }

    Using a Function in the Setup might be an option, but then it seems I'd be reduced to some sort of global variable to know which argument/iteration I'm verifying. Maybe I'm overlooking the obvious within the Moq framework?
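
    One way to express this without competing Setup calls, assuming SomeOtherMethod is made virtual so Moq can intercept it, is to skip the setups entirely and use Moq's Verify after the fact; each invocation is then checked independently and nothing overrides anything. A sketch:

      [TestMethod]
      public void RunTest()
      {
          var mock = new Mock<Bar>();
          var f = new Foo { SomeBar = mock.Object };

          f.SomeMethod();

          // Each expected argument is verified separately, after the code ran.
          mock.Verify(m => m.SomeOtherMethod("baz"), Times.Once());
          mock.Verify(m => m.SomeOtherMethod("bag"), Times.Once());
      }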

    Read the article

  • Pure Server-Side Filtering with RadGridView and WCF RIA Services

    Those of you who are familiar with WCF RIA Services know that the DomainDataSource control provides a FilterDescriptors collection that enables you to filter data returned by the query on the server. We have been using this DomainDataSource feature in our "RIA Services with DomainDataSource" online example for almost a year now. In the example, we are listening for RadGridView's Filtering event in order to intercept any filtering that is performed on the client and translate it to something that the DomainDataSource will understand, in this case a System.Windows.Data.FilterDescriptor being added or removed from its FilterDescriptors collection. Think of RadGridView.FilterDescriptors as client-side filtering and of DomainDataSource.FilterDescriptors as server-side filtering. We no longer need the client-side one.

    With the introduction of the Custom Filtering Controls feature, many new possibilities have opened up. With these custom controls we no longer need to do any filtering on the client. I have prepared a very small project that demonstrates how to filter solely on the server by using a custom filtering control.

    As I have already mentioned, filtering on the server is done through the FilterDescriptors collection of the DomainDataSource control. This collection holds instances of type System.Windows.Data.FilterDescriptor. The FilterDescriptor has three important properties:

      PropertyPath: Specifies the name of the property that we want to filter on (the left operand).
      Operator: Specifies the type of comparison to use when filtering. An instance of the FilterOperator enumeration.
      Value: The value to compare with (the right operand). An instance of the Parameter class.

    By adding filters, you can specify that only entities which meet the condition in the filter are loaded from the domain context. In case you are not familiar with these concepts, you might find Brad Abrams' blog interesting.

    Now, our requirements are to create some kind of UI that will manipulate the DomainDataSource.FilterDescriptors collection. When it comes to collections, my first choice of course would be RadGridView. If you are not familiar with the Custom Filtering Controls concept, I would strongly recommend getting acquainted with my step-by-step tutorial "Custom Filtering with RadGridView for Silverlight" and checking the online example out.

    I have created a simple custom filtering control that contains a RadGridView and several buttons. This control is aware of the DomainDataSource instance, since it is operating on its FilterDescriptors collection. In fact, the RadGridView that is inside it is bound to this collection. In order to display filters that are relevant for the current column only, I have applied a filter to the grid. This filter is a Telerik.Windows.Data.FilterDescriptor and is used to filter the little grid inside the custom control. It should not be confused with the DomainDataSource.FilterDescriptors collection that RadGridView is actually bound to. These are the RIA filters.

    Additionally, I have added several other features. For example, if you have specified a DataFormatString on your original column, the Value column inside the custom control will pick it up and format the filter values accordingly. Also, I have transferred the data type of the column that you are filtering to the Value column of the custom control. This will help the little RadGridView determine what kind of editor to show when editing begins, for example a date picker for DateTime columns.
    Finally, I have added four buttons: two of them can be used to add or remove filters, and the other two communicate the changes you have made to the server. Here is the full source code of the DomainDataSourceFilteringControl.

    The XAML:

      <UserControl x:Class="PureServerSideFiltering.DomainDataSourceFilteringControl"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:telerikGrid="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.GridView"
          xmlns:telerik="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
          Width="300">
          <Border x:Name="LayoutRoot" BorderThickness="1" BorderBrush="#FF8A929E" Padding="5" Background="#FFDFE2E5">
              <Grid>
                  <Grid.RowDefinitions>
                      <RowDefinition Height="Auto"/>
                      <RowDefinition Height="150"/>
                      <RowDefinition Height="Auto"/>
                  </Grid.RowDefinitions>
                  <StackPanel Grid.Row="0" Margin="2" Orientation="Horizontal" HorizontalAlignment="Center">
                      <telerik:RadButton Name="addFilterButton" Click="OnAddFilterButtonClick" Content="Add Filter" Margin="2" Width="96"/>
                      <telerik:RadButton Name="removeFilterButton" Click="OnRemoveFilterButtonClick" Content="Remove Filter" Margin="2" Width="96"/>
                  </StackPanel>
                  <telerikGrid:RadGridView Name="filtersGrid" Grid.Row="1" Margin="2"
                                           ItemsSource="{Binding FilterDescriptors}"
                                           AddingNewDataItem="OnFilterGridAddingNewDataItem"
                                           ColumnWidth="*" ShowGroupPanel="False" AutoGenerateColumns="False"
                                           CanUserResizeColumns="False" CanUserReorderColumns="False"
                                           CanUserFreezeColumns="False" RowIndicatorVisibility="Collapsed"
                                           IsFilteringAllowed="False" CanUserSortColumns="False">
                      <telerikGrid:RadGridView.Columns>
                          <telerikGrid:GridViewComboBoxColumn DataMemberBinding="{Binding Operator}" UniqueName="Operator"/>
                          <telerikGrid:GridViewDataColumn Header="Value" DataMemberBinding="{Binding Value.Value}" UniqueName="Value"/>
                      </telerikGrid:RadGridView.Columns>
                  </telerikGrid:RadGridView>
                  <StackPanel Grid.Row="2" Margin="2" Orientation="Horizontal" HorizontalAlignment="Center">
                      <telerik:RadButton Name="filterButton" Click="OnApplyFiltersButtonClick" Content="Apply Filters" Margin="2" Width="96"/>
                      <telerik:RadButton Name="clearButton" Click="OnClearFiltersButtonClick" Content="Clear Filters" Margin="2" Width="96"/>
                  </StackPanel>
              </Grid>
          </Border>
      </UserControl>

    And the code-behind:

      using System;
      using System.Linq;
      using System.Windows;
      using System.Windows.Controls;
      using System.Windows.Data;
      using Telerik.Windows.Controls;
      using Telerik.Windows.Controls.GridView;
      using Telerik.Windows.Data;

      namespace PureServerSideFiltering
      {
          /// <summary>
          /// A custom filtering control capable of filtering purely server-side.
          /// </summary>
          public partial class DomainDataSourceFilteringControl : UserControl, IFilteringControl
          {
              // The main player here.
              DomainDataSource domainDataSource;

              // The name of the property that this column displays.
              private string dataMemberName;

              // The type of the property that this column displays.
              private Type dataMemberType;

              /// <summary>
              /// Identifies the <see cref="IsActive"/> dependency property.
              /// The state of the filtering funnel (i.e. full or empty) is bound to this property.
              /// </summary>
              public static readonly DependencyProperty IsActiveProperty =
                  DependencyProperty.Register(
                      "IsActive",
                      typeof(bool),
                      typeof(DomainDataSourceFilteringControl),
                      new PropertyMetadata(false));

              /// <summary>
              /// Gets or sets a value indicating whether the filtering is active.
              /// Set this to true if you want to light up the filtering funnel.
              /// </summary>
              public bool IsActive
              {
                  get { return (bool)GetValue(IsActiveProperty); }
                  set { SetValue(IsActiveProperty, value); }
              }

              /// <summary>
              /// Gets or sets the domain data source.
              /// We need this in order to work on its FilterDescriptors collection.
              /// </summary>
              public DomainDataSource DomainDataSource
              {
                  get { return this.domainDataSource; }
                  set { this.domainDataSource = value; }
              }

              public System.Windows.Data.FilterDescriptorCollection FilterDescriptors
              {
                  get { return this.DomainDataSource.FilterDescriptors; }
              }

              public DomainDataSourceFilteringControl()
              {
                  InitializeComponent();
              }

              public void Prepare(GridViewBoundColumnBase column)
              {
                  this.LayoutRoot.DataContext = this;

                  if (this.DomainDataSource == null)
                  {
                      // Sorry, but we need a DomainDataSource. Can't do anything without it.
                      return;
                  }

                  // This is the name of the property that this column displays.
                  this.dataMemberName = column.GetDataMemberName();

                  // This is the type of the property that this column displays. We need it
                  // in order to see which FilterOperators to feed to the combo-box column.
                  this.dataMemberType = column.DataType;

                  // Use our Type extension method to see which operators are applicable for
                  // this data type. You can go to the extension method body and see what it does.
                  ((GridViewComboBoxColumn)this.filtersGrid.Columns["Operator"]).ItemsSource
                      = this.dataMemberType.ApplicableFilterOperators();

                  // Tell the Value column its data type. This way RadGridView will pick the
                  // best editor according to the data type. For example, if the data type of
                  // the value is DateTime, you will be editing it with a DatePicker. Nice!
                  ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataType = this.dataMemberType;

                  // Yet another nice feature: transfer the original DataFormatString (if any) to
                  // the Value column. If you have specified a DataFormatString for the original
                  // column, you will see all filter values formatted accordingly.
                  ((GridViewDataColumn)this.filtersGrid.Columns["Value"]).DataFormatString = column.DataFormatString;

                  // This is important. Since our little filtersGrid is bound to the entire
                  // this.domainDataSource.FilterDescriptors collection, we need to set a Telerik
                  // filter on the grid so that it displays only the FilterDescriptors relevant
                  // to this column.
                  Telerik.Windows.Data.FilterDescriptor columnFilter = new Telerik.Windows.Data.FilterDescriptor(
                      "PropertyPath",
                      Telerik.Windows.Data.FilterOperator.IsEqualTo,
                      this.dataMemberName);
                  this.filtersGrid.FilterDescriptors.Add(columnFilter);

                  // Listen for this in order to activate and de-activate the UI funnel.
                  this.filtersGrid.Items.CollectionChanged += this.OnFilterGridItemsCollectionChanged;
              }

              /// <summary>
              /// Since the DomainDataSource is a little bit picky about adding uninitialized
              /// FilterDescriptors to its collection, we prepare each new instance with some
              /// default values; the user can change them later.
              /// </summary>
              void OnFilterGridAddingNewDataItem(object sender, GridViewAddingNewEventArgs e)
              {
                  // Initialize the new instance with some values and let the user go on from here.
                  System.Windows.Data.FilterDescriptor newFilter = new System.Windows.Data.FilterDescriptor();

                  // This is a must. It should know what member it is filtering on.
                  newFilter.PropertyPath = this.dataMemberName;

                  // Initialize it with one of the allowed operators.
                  // See the TypeExtensions.ApplicableFilterOperators method for more info.
                  newFilter.Operator = this.dataMemberType.ApplicableFilterOperators().First();

                  if (this.dataMemberType == typeof(DateTime))
                  {
                      newFilter.Value.Value = DateTime.Now;
                  }
                  else if (this.dataMemberType == typeof(string))
                  {
                      newFilter.Value.Value = "<enter text>";
                  }
                  else if (this.dataMemberType.IsValueType)
                  {
                      // We need something non-null for all value types.
                      newFilter.Value.Value = Activator.CreateInstance(this.dataMemberType);
                  }

                  // Let the user edit the new filter any way he/she likes.
                  e.NewObject = newFilter;
              }

              void OnFilterGridItemsCollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
              {
                  // We are active only if we have any filters defined; in that case the filtering funnel lights up.
                  this.IsActive = this.filtersGrid.Items.Count > 0;
              }

              private void OnApplyFiltersButtonClick(object sender, RoutedEventArgs e)
              {
                  if (this.DomainDataSource.IsLoadingData)
                  {
                      return;
                  }

                  // Comment this if you want the popup to stay open after the button is clicked.
                  this.ClosePopup();

                  // Since this.domainDataSource.AutoLoad is false, this takes into account all
                  // filtering changes the user has made since the last Load() and pulls the new
                  // data to the client.
                  this.DomainDataSource.Load();
              }

              private void OnClearFiltersButtonClick(object sender, RoutedEventArgs e)
              {
                  if (this.DomainDataSource.IsLoadingData)
                  {
                      return;
                  }

                  // Remove ONLY those filters from the DomainDataSource that this control
                  // is responsible for.
                  this.DomainDataSource.FilterDescriptors
                      .Where(fd => fd.PropertyPath == this.dataMemberName) // Only "our" filters.
                      .ToList()
                      .ForEach(fd => this.DomainDataSource.FilterDescriptors.Remove(fd)); // Bye-bye!

                  // Comment this if you want the popup to stay open after the button is clicked.
                  this.ClosePopup();

                  // After the housekeeping, get the new data to the client.
                  this.DomainDataSource.Load();
              }

              private void OnAddFilterButtonClick(object sender, RoutedEventArgs e)
              {
                  if (this.DomainDataSource.IsLoadingData)
                  {
                      return;
                  }

                  // Let the user enter his or her requirements for a new filter.
                  this.filtersGrid.BeginInsert();
                  this.filtersGrid.UpdateLayout();
              }

              private void OnRemoveFilterButtonClick(object sender, RoutedEventArgs e)
              {
                  if (this.DomainDataSource.IsLoadingData)
                  {
                      return;
                  }

                  // Find the currently selected filter and destroy it.
                  System.Windows.Data.FilterDescriptor filterToRemove =
                      this.filtersGrid.SelectedItem as System.Windows.Data.FilterDescriptor;
                  if (filterToRemove != null
                      && this.DomainDataSource.FilterDescriptors.Contains(filterToRemove))
                  {
                      this.DomainDataSource.FilterDescriptors.Remove(filterToRemove);
                  }
              }

              private void ClosePopup()
              {
                  System.Windows.Controls.Primitives.Popup popup = this.ParentOfType<System.Windows.Controls.Primitives.Popup>();
                  if (popup != null)
                  {
                      popup.IsOpen = false;
                  }
              }
          }
      }

    Finally, we need to tell RadGridView's columns to use this custom control instead of the default one. Here is how to do it:

      using System;
      using System.Linq;
      using System.Windows.Controls;
      using Telerik.Windows.Controls;
      using Telerik.Windows.Controls.GridView;

      namespace PureServerSideFiltering
      {
          public partial class MainPage : UserControl
          {
              public MainPage()
              {
                  InitializeComponent();
                  this.grid.AutoGeneratingColumn += this.OnGridAutoGeneratingColumn;

                  // Uncomment this if you want the DomainDataSource to start pre-filtered.
                  // You will notice how our custom filtering controls correctly read this
                  // information, populate their UI with the respective filters and light up
                  // the funnel to indicate that filtering is active. Go ahead and try it.
                  this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("Title", System.Windows.Data.FilterOperator.Contains, "Assistant"));
                  this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsGreaterThan, new DateTime(1998, 12, 31)));
                  this.employeesDataSource.FilterDescriptors.Add(new System.Windows.Data.FilterDescriptor("HireDate", System.Windows.Data.FilterOperator.IsLessThanOrEqualTo, new DateTime(1999, 12, 31)));

                  this.employeesDataSource.Load();
              }

              /// <summary>
              /// First of all, we need to replace the default filtering control of each
              /// column with our custom DomainDataSourceFilteringControl.
              /// </summary>
              private void OnGridAutoGeneratingColumn(object sender, GridViewAutoGeneratingColumnEventArgs e)
              {
                  GridViewBoundColumnBase dataColumn = e.Column as GridViewBoundColumnBase;
                  if (dataColumn != null)
                  {
                      // We do not like ugly dates.
                      if (dataColumn.DataType == typeof(DateTime))
                      {
                          // Notice how this format will later be transferred to the Value column
                          // of the grid inside the DomainDataSourceFilteringControl.
                          dataColumn.DataFormatString = "{0:d}"; // Short date pattern.
                      }

                      // Replace the default filtering control with ours.
                      dataColumn.FilteringControl = new DomainDataSourceFilteringControl()
                      {
                          // Let the control know about the DDS; after all, it will work directly on it.
                          DomainDataSource = this.employeesDataSource
                      };

                      // Finally, light up the filtering funnel through the IsActive dependency
                      // property in case there are filters on the DDS that match our column member.
                      string dataMemberName = dataColumn.GetDataMemberName();
                      dataColumn.FilteringControl.IsActive =
                          this.employeesDataSource.FilterDescriptors
                              .Where(fd => fd.PropertyPath == dataMemberName)
                              .Count() > 0;
                  }
              }
          }
      }

    The best part is that we are not only writing filters for the DomainDataSource; we can read and load them as well. If the DomainDataSource has some pre-existing filters (like the ones I create in the code above), our control will read them and populate its UI accordingly. Even the filtering funnel will light up! Remember, the funnel is controlled by the IsActive property of our control.

    While this is just a basic implementation, the source code is absolutely yours and you can take it from here and extend it to match your specific business requirements. Below the main grid there is another debug grid; with its help you can monitor what filter descriptors are added to and removed from the domain data source.

    Download Source Code. (You will have to have the AdventureWorks sample database installed on the default SQLExpress instance in order to run it.) Enjoy!

    Read the article

  • Adding a CLI for PHP5 on live server

    - by Josua Pedersen
    I want to add command-line support for PHP5 on my server. When I run

      aptitude install php5-cli

    I get a message saying that my PHP modules/packages have unmet dependencies. Here is a list of packages that suffer from these "unmet dependencies" and need an upgrade:

      php5-gd
      php5-curl
      php5-mysql
      php5-cgi

    They all depend on php5-common. Can I upgrade the packages just like aptitude suggests without causing any disruption to the live site?

    Output from aptitude:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Reading extended state information
      Initialising package states... Done
      The following packages are BROKEN:
        libapache2-mod-php5 php5-cgi php5-curl php5-gd php5-mysql
      The following NEW packages will be installed:
        php5-cli
      The following packages will be upgraded:
        php5-common
      1 packages upgraded, 1 newly installed, 0 to remove and 123 not upgraded.
      Need to get 3,511kB of archives. After unpacking 7,803kB will be used.
      The following packages have unmet dependencies:
        php5-gd: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
        php5-curl: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
        php5-mysql: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
        php5-cgi: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
        libapache2-mod-php5: Depends: php5-common (= 5.3.3-1ubuntu12~lucid) but 5.3.5-1ubuntu7.2ppa1~lucid is to be installed.
      The following actions will resolve these dependencies:
        Upgrade the following packages:
          libapache2-mod-php5 [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-cgi [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-curl [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-gd [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
          php5-mysql [5.3.3-1ubuntu12~lucid (now) -> 5.3.5-1ubuntu7.2ppa1~lucid (lucid)]
      Score is 340
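
    Since aptitude's proposed resolution is simply a coordinated upgrade of php5-common and everything built against it, one hedged approach is to make that explicit and upgrade the whole stack in a single transaction; on a live box, a maintenance window and a config backup are still prudent:

      sudo aptitude install php5-cli php5-common php5-gd php5-curl php5-mysql php5-cgi libapache2-mod-php5
      sudo /etc/init.d/apache2 restart   # mod_php picks up the new version only on restart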

    Read the article

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocomplete works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours, I found that /etc/bash_completion did not exist on the broken server. Copying the one from a working server, it now works, but still not properly: sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_completion.d/ on the working server shows about 50 files; the broken one has only two or three. I don't think I can just copy those files over, though, as they might provide completions for things that are not even installed.

    TL;DR: autocomplete broken, how can I fix it? It seems like it's disabled somewhere; why is this?

    EDIT: OK, it was never installed...

      $ sudo apt-get install bash-completion
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following NEW packages will be installed
        bash-completion
      0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
      Need to get 140kB of archives.
      After this operation, 1,061kB of additional disk space will be used.
      Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
      Fetched 140kB in 0s (174kB/s)
      Selecting previously deselected package bash-completion.
      (Reading database ... 23808 files and directories currently installed.)
      Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
      Processing triggers for man-db ...
      Setting up bash-completion (1:1.2-2ubuntu1.1) ...

    It's now kind of working, but still wonky: apt-get ins<tab> gives "sudo apt-get insserv" as the option. Also, apt-get install php5<tab> gives "apt-get install php5/", not the php5-* options.
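
    For the remaining wonkiness, the usual first check is that the completion rules are actually sourced in the shell. This is roughly the stanza Ubuntu ships in /etc/bash.bashrc (often commented out there), quoted as a hedged reference for ~/.bashrc:

      # Load programmable completion (and everything in /etc/bash_completion.d/)
      if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
          . /etc/bash_completion
      fi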

    Read the article

  • What is a good method to solve cabal install problems?

    - by sp3ctum
    I've used the cabal package manager for Haskell programs to install libraries and new projects that I've cloned from some repositories. More often than not, I keep running into problems. Most projects make installing them seem super easy, but in my case that's not always true: sometimes they are very hard to get running. Some are so hard, in fact, that I've lost interest in the project solely because of not being able to install it. So instead of complaining, I'd like to ask what I should do to improve this situation.

    I'd like to use my most recent problem as an example. I'm interested in trying out the Gitit project, a promising-looking personal wiki that runs on various version control systems. So here's what I've done:

    1. Clone from Github.
    2. Run cabal install in the project directory, like I'm told on the project install page:

      mika@eka:~/git/gitit$ ls
      BLUETRIP-LICENSE CHANGES HCAR-gitit.tex LICENSE Network README.markdown RELANN-0.6.1 Setup.lhs TANGOICONS YUI-LICENSE data expireGititCache.hs gitit.cabal gitit.hs plugins
      mika@eka:~/git/gitit$ cabal install
      Resolving dependencies...
      cabal: cannot configure happstack-server-7.0.7. It requires base64-bytestring ==1.0.*
      For the dependency on base64-bytestring ==1.0.* there are these packages:
      base64-bytestring-1.0.0.0. However none of them are available.
      base64-bytestring-1.0.0.0 was excluded because gitit-0.10 requires base64-bytestring ==0.1.*
      mika@eka:~/git/gitit$

    So now I'm thinking: well, I'll install happstack-server on its own, maybe that will work:

      mika@eka:~/git/gitit$ cabal install happstack-server
      Resolving dependencies...
      Warning: happstack-server.cabal: Ignoring unknown section type: test-suite
      Configuring happstack-server-7.0.7...
      cabal: At least the following dependencies are missing:
      blaze-html ==0.5.*, hslogger >=1.0.2, monad-control ==0.3.*, network >=2.2.3, sendfile >=0.7.1 && <0.8, system-filepath >=0.3.1, text >=0.10 && <0.12, threads >=0.5, transformers-base ==0.4.*
      cabal: Error: some packages failed to install:
      happstack-server-7.0.7 failed during the configure step. The exception was:
      ExitFailure 1

    So it looks like there are some dependencies missing. But isn't installing these dependencies the whole point of using cabal in the first place? What should I do? File bug reports (to which project?), install the dependencies manually, or something else? Bonus points for explaining what causes these kinds of problems.
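
    When version constraints collide like this, a common first move, hedged since it may not untangle gitit 0.10 specifically, is to refresh the package index and ask cabal for its install plan without building anything:

      cabal update                    # refresh the Hackage index
      cabal install --dry-run gitit   # print the solver's plan (or its conflict) without compiling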

    Read the article

  • RHEL server Yum dependencies not working

    - by Tony
    I have a redhat server that isn't resolving dependencies correctly. I want to install httpd via yum ("yum install httpd") and it installs correctly, but when I go to start httpd I get the following error:

      /sbin/service httpd restart
      Stopping httpd: [FAILED]
      Starting httpd: /usr/sbin/httpd: error while loading shared libraries: libaprutil-1.so.0: cannot open shared object file: No such file or directory [FAILED]

    It is missing the dependency on the apr-util package. Weirdly, the i386 package is installed and not the x86_64 package. Can anyone shed any light on why the dependencies might not be resolved correctly?

      ldd /usr/sbin/httpd
          libm.so.6 => /lib64/libm.so.6 (0x00002b02370db000)
          libpcre.so.0 => /lib64/libpcre.so.0 (0x00002b023735e000)
          libselinux.so.1 => /lib64/libselinux.so.1 (0x00002b023757a000)
          libaprutil-1.so.0 => not found
          libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00002b0237793000)
          libldap-2.3.so.0 => /usr/lib64/libldap-2.3.so.0 (0x00002b02379cb000)
          liblber-2.3.so.0 => /usr/lib64/liblber-2.3.so.0 (0x00002b0237c06000)
          libdb-4.3.so => /lib64/libdb-4.3.so (0x00002b0237e14000)
          libexpat.so.0 => /lib64/libexpat.so.0 (0x00002b0238109000)
          libapr-1.so.0 => not found
          libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b023832c000)
          libdl.so.2 => /lib64/libdl.so.2 (0x00002b0238547000)
          libc.so.6 => /lib64/libc.so.6 (0x00002b023874c000)
          libsepol.so.1 => /lib64/libsepol.so.1 (0x00002b0238aa3000)
          /lib64/ld-linux-x86-64.so.2 (0x00002b0236ebe000)
          libresolv.so.2 => /lib64/libresolv.so.2 (0x00002b0238ce9000)
          libsasl2.so.2 => /usr/lib64/libsasl2.so.2 (0x00002b0238eff000)
          libssl.so.6 => /lib64/libssl.so.6 (0x00002b0239118000)
          libcrypto.so.6 => /lib64/libcrypto.so.6 (0x00002b0239364000)
          libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00002b02396b6000)
          libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00002b02398e4000)
          libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00002b0239b79000)
          libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00002b0239d7c000)
          libz.so.1 => /usr/lib64/libz.so.1 (0x00002b0239fa1000)
          libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00002b023a1b5000)
          libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00002b023a3be000)

    Yet this is the i386 package:

      apr-util-1.2.7-11.el5.i386 : Apache Portable Runtime Utility library
      Repo : installed
      Matched from:
      Filename : /usr/lib/libaprutil-1.so.0
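
    Given that the x86_64 httpd ended up paired with an i386 apr-util, one hedged fix is to request the matching architecture explicitly; yum accepts an arch suffix on package names:

      yum install apr-util.x86_64
      ldd /usr/sbin/httpd | grep apr   # libaprutil-1.so.0 and libapr-1.so.0 should now resolve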

    Read the article

  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get Rmagick installed is:

      apt-get install libmagick9-dev

    When I try that I get the following errors:

      Reading package lists... Done
      Building dependency tree... Done
      Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
      Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
        libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed
                        Depends: libbz2-dev but it is not going to be installed
                        Depends: libtiff4-dev but it is not going to be installed
                        Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed
                        Depends: libz-dev
                        Depends: libpng12-dev but it is not going to be installed
                        Depends: libfreetype6-dev but it is not going to be installed
                        Depends: libexif-dev but it is not going to be installed
                        Depends: libdjvulibre-dev but it is not going to be installed
                        Depends: librsvg2-dev but it is not going to be installed
                        Depends: libgraphviz-dev but it is not going to be installed
      E: Broken packages

    I don't know what to do to fix this.

    EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem:

      gem1.8 install fleximage

    and I got the following error message:

      Building native extensions. This could take a while...
      ERROR: Error installing fleximage:
      ERROR: Failed to build gem native extension.
      /usr/bin/ruby1.8 extconf.rb
      checking for Ruby version >= 1.8.5... yes
      checking for cc... yes
      checking for Magick-config... no
      Can't install RMagick 2.12.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.
      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/bin/ruby1.8
      Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection.
      Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
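
    To find out which of the "not going to be installed" packages is the real blocker, a hedged approach is to ask apt about them one at a time; the error printed for, say, libjpeg62-dev usually names the version conflict at the root of it:

      sudo apt-get update
      sudo apt-get install libjpeg62-dev        # repeat for libtiff4-dev etc. until the root conflict surfaces
      apt-cache policy libjpeg62 libjpeg62-dev  # compare installed vs. candidate versions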

    Read the article

  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server using svn://localhost/, Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this?

    I am trying to set up a Subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup.

    I created a repository using svnadmin create; it resides at C:\svnGrove.

    C:\svnGrove\conf\svnserve.conf (# comments omitted):

      [general]
      anon-access=read
      auth-access=write
      password-db=passwd
      #authz-db=authz
      realm=svnGrove

    C:\svnGrove\conf\passwd:

      [users]
      myname=mypass

    My Subversion Server service is pointed to:

      C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove

    It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results.

    The below is provided by the 'about' option in TortoiseSVN:

      TortoiseSVN 1.6.10, Build 19898 - 32 Bit, 2010/07/16 15:46:08
      Subversion 1.6.12, apr 1.3.8 apr-utils 1.3.9 neon 0.29.3 OpenSSL 0.9.8o 01 Jun 2010 zlib 1.2.3

    The following is from svn --version on the command line (not sure why it says CollabNet; CollabNet was the previous SVN binary that I had set up, and the uninstaller failed to remove everything gracefully):

      svn, version 1.6.12 (SlikSvn/1.6.12) WIN32 compiled Jun 22 2010, 20:45:29
      Copyright (C) 2000-2009 CollabNet. Subversion is open source software, see http://subversion.tigris.org/
      This product includes software developed by CollabNet (http://www.Collab.Net/).
      The following repository access (RA) modules are available:
      * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
        - handles 'http' scheme
        - handles 'https' scheme
      * ra_svn : Module for accessing a repository using the svn network protocol.
        - with Cyrus SASL authentication
        - handles 'svn' scheme
      * ra_local : Module for accessing a repository on local disk.
        - handles 'file' scheme
      * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
        - handles 'http' scheme
        - handles 'https' scheme

    I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue.

    Edit: The old version of svnserve was still set up as a service after the uninstall, pointed to this path:

      C:\Program Files\Subversion\svn-win32-1.4.6\bin

    I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.
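
    A quick hedged check for this class of error is to confirm that anything at all is listening on svn's default port 3690; "actively refused" is exactly what you see when nothing is bound there:

      rem Run the server in the foreground with the same repository root...
      svnserve -d -r C:\svnGrove
      rem ...then, from a second console, verify the listener:
      netstat -an | findstr 3690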

    Read the article

  • What part of SMF is likely broken by a hard power down?

    - by David Mackintosh
    At one of my customer sites, the local guy shut down their local Solaris 10 x86 server, pulled the power inputs, moved it, and now it won't start properly. It boots and then presents a prompt which lets you log in. This appears to be the single-user milestone (or equivalent). Digging into it, I think that SMF isn't permitting the system to go multi-user.

    SMF was generating a ton of errors on autofs; after some fooling with it I got it to generate errors on inetd and nfs/client instead. This all tells me that the problem is in some SMF state file or database that needs to be fixed/deleted/recreated or something, but I don't know what the actual issue is. By "generate errors", I mean that every second I get a message on the console saying "Method or service exit timed out. Killing contract <#." This makes interacting with the computer difficult.

    Running svcs -xv shows the service as "enabled", in state "disabled", reason "Start method is running". Fooling with svcadm on the service does nothing, except confirm that the service is not in a Maintenance state. Logs in /lib/svc/log/$SERVICE just tell you that this loop has been happening once per second. Logs in /etc/svc/volatile/$SERVICE confirm that at boot the service is attempted to start and immediately stopped, with no further entries. Note that system-log isn't starting because system-log depends on autofs, so I have no syslog or dmesg.

    Googling all these terms ends up telling me how to debug/fix either autofs or nfs/client or inetd or rpc/gss (which was the dependency that SMF was using as an excuse to prevent nfs/client from "starting"; it was claiming that rpc/gss was "undefined", which is incorrect, since this all used to work. I re-enabled it with inetadm, but inetd still won't start properly). But I think that the problem is SMF in general, not the individual services. Doing a restore_repository to the "manifest_import" state does nothing to improve, or even detectibly change, the situation. I didn't use a boot backup because the last boot(s) were not useful.

    I have told the customer that since the valuable data directories are on a separate file system (which fsck's as clean, so it is intact) we could just re-install Solaris 10 on the / partition. But that seems like an awfully Windows-like solution to inflict on this problem. So: any ideas what piece is broken and how I might fix it?
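
    Before reinstalling, it may be worth letting SMF narrate its own dependency graph; these are standard Solaris 10 commands, though whether they pinpoint this particular corruption is uncertain:

      svcs -xv                                        # every service blocking the boot, with reasons
      svcs -l svc:/system/filesystem/autofs:default   # one service's state plus its dependencies
      svcs -d milestone/multi-user:default            # what the multi-user milestone is waiting on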

    Read the article

  • Error: Multilib version problems found

    - by Kovács Ákos
    What causes this error?

        Error: Multilib version problems found. This often means that the root
        cause is something else and multilib version checking is just pointing
        out that there is a problem. Eg.:

        1. You have an upgrade for glibc which is missing some dependency that
           another package requires. Yum is trying to solve this by installing
           an older version of glibc of the different architecture. If you
           exclude the bad architecture yum will tell you what the root cause
           is (which package requires what). You can try redoing the upgrade
           with --exclude glibc.otherarch ... this should give you an error
           message showing the root cause of the problem.

        2. You have multiple architectures of glibc installed, but yum can
           only see an upgrade for one of those arcitectures. If you don't
           want/need both architectures anymore then you can remove the one
           with the missing update and everything will work.

        3. You have duplicate versions of glibc installed already. You can
           use "yum check" to get yum show these errors.

        ...you can also use --setopt=protected_multilib=false to remove this
        checking, however this is almost never the correct thing to do as
        something else is very likely to go wrong (often causing much more
        problems).

        Protected multilib versions: glibc-2.12-1.107.el6_4.5.i686 != glibc-2.12-1.107.el6_4.2.x86_64
        Error: Protected multilib versions: db4-4.7.25-18.el6_4.i686 != db4-4.7.25-17.el6.x86_64
        You could try using --skip-broken to work around the problem
        ** Found 10 pre-existing rpmdb problem(s), 'yum check' output follows:
        32:bind-libs-9.8.2-0.17.rc1.el6_4.6.x86_64 is a duplicate with 32:bind-libs-9.8.2-0.17.rc1.el6_4.5.x86_64
        chkconfig-1.3.49.3-2.el6_4.1.x86_64 is a duplicate with chkconfig-1.3.49.3-2.el6.x86_64
        db4-4.7.25-18.el6_4.x86_64 is a duplicate with db4-4.7.25-17.el6.x86_64
        12:dhcp-common-4.1.1-34.P1.el6_4.1.x86_64 is a duplicate with 12:dhcp-common-4.1.1-34.P1.el6.centos.x86_64
        glibc-2.12-1.107.el6_4.5.x86_64 is a duplicate with glibc-2.12-1.107.el6_4.2.x86_64
        glibc-common-2.12-1.107.el6.x86_64 has missing requires of glibc = ('0', '2.12', '1.107.el6')
        glibc-common-2.12-1.107.el6_4.2.x86_64 is a duplicate with glibc-common-2.12-1.107.el6.x86_64
        glibc-common-2.12-1.107.el6_4.5.x86_64 is a duplicate with glibc-common-2.12-1.107.el6_4.2.x86_64
        krb5-libs-1.10.3-10.el6_4.6.x86_64 is a duplicate with krb5-libs-1.10.3-10.el6_4.4.x86_64
        tzdata-2013g-1.el6.noarch is a duplicate with tzdata-2013c-2.el6.noarch
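
    The duplicates listed under "yum check" are the usual signature of an interrupted update transaction. A common way out of this state is to clear the duplicates and retry (a sketch assuming yum-utils is available, which provides package-cleanup):

        # Show the rpmdb inconsistencies yum is complaining about
        yum check

        # Remove the older half of each duplicate pair
        yum install yum-utils
        package-cleanup --cleandupes

        # Retry, excluding the architecture yum flagged, to surface the root cause
        yum update --exclude=glibc.i686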

    Read the article

  • Package problems after upgrade

    - by Dan
    I installed a new upgrade on Ubuntu which seemed to fail near the end. Now I'm being told that an error has occurred and to please run apt-get to see what's wrong. After some further tries with that I eventually gave up. It seems there's a leftover LaTeX package(?) somewhere and I can't seem to get rid of or fix it. Here's an example:

        blank@ubuntu:~$ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
         tex-common : Breaks: texlive-common (< 2010) but 2009-15 is installed
         texlive-base : Depends: texlive-binaries (>= 2012.20120516) but 2009-11ubuntu2 is installed
                        Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-doc-base : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-extra-utils : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
                               Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
         texlive-font-utils : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                              Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-generic-recommended : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-latex-base : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                              Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-latex-base-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-latex-extra : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                               Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-latex-extra-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-latex-recommended : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                                     Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-latex-recommended-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-luatex : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                          Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-pictures : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                            Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-pictures-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-pstricks : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed
                            Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         texlive-pstricks-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed
         zlib1g : Breaks: texlive-binaries (< 2009-12) but 2009-11ubuntu2 is installed
         zlib1g:i386 : Breaks: texlive-binaries (< 2009-12) but 2009-11ubuntu2 is installed
        E: Unmet dependencies. Try using -f.

    I've seen similar errors on the site here, but not close enough that I could get it fixed. Any help would be great. Thanks
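
    The pattern in the output (everything wants TeX Live 2012, but the 2009 versions of texlive-common and texlive-binaries are still installed) suggests a half-finished upgrade. A hedged repair sequence, using only the package names from the error output:

        # Let dpkg finish any half-configured packages, then let apt repair dependencies
        sudo dpkg --configure -a
        sudo apt-get -f install

        # If the old TeX Live 2009 packages still block the upgrade,
        # remove them and reinstall the current versions
        sudo apt-get purge texlive-common texlive-binaries
        sudo apt-get install texlive-latex-base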

    Read the article

  • 404 when doing safe-upgrade in lucid 64 box?

    - by Millisami
    Why do I see 404 errors when doing sudo aptitude safe-upgrade on my Lucid 64-bit box?

        deploy@li167-251:~$ sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded:
          apache2 apache2-mpm-prefork apache2-threaded-dev apache2-utils apache2.2-bin
          apache2.2-common apt apt-utils base-files binutils bzip2 dpkg dpkg-dev gzip
          ifupdown krb5-multidev language-pack-en language-pack-en-base
          language-selector-common libatk1.0-0 libatk1.0-dev libavahi-client3
          libavahi-common-data libavahi-common3 libbz2-1.0 libc-bin libc-dev-bin libc6
          libc6-dev libc6-i686 libcups2 libfreetype6 libfreetype6-dev libglib2.0-0
          libglib2.0-dev libgssapi-krb5-2 libgssrpc4 libgtk2.0-0 libgtk2.0-common
          libgtk2.0-dev libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4
          libkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev
          libmysqlclient-dev libmysqlclient16 libnotify-dev libnotify1 libpam-modules
          libpam-runtime libpam0g libparted0debian1 libpng12-0 libpng12-dev libpq-dev
          libpq5 libssl-dev libssl0.9.8 libtiff4 libudev0 libusb-0.1-4 linux-libc-dev
          mountall mysql-client mysql-client-5.1 mysql-client-core-5.1 mysql-common
          mysql-server mysql-server-5.1 mysql-server-core-5.1 openssh-client
          openssh-server openssl parted python-apt sudo tzdata udev upstart ureadahead
          wget xulrunner-1.9.2 xulrunner-1.9.2-dev
        The following packages are RECOMMENDED but will NOT be installed:
          colibri debhelper fakeroot hicolor-icon-theme libatk1.0-data libglib2.0-data
          libgtk2.0-bin libhtml-template-perl manpages-dev notification-daemon
          notify-osd ssl-cert xauth xfce4-notifyd
        88 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 85.8MB of archives. After unpacking 1712kB will be used.
        Do you want to continue? [Y/n/?] y
        Writing extended state information... Done
        Get:1 http://security.ubuntu.com/ubuntu/ lucid-updates/main libpam-modules 1.1.1-2ubuntu5 [358kB]
        Get:2 http://security.ubuntu.com/ubuntu/ lucid-updates/main base-files 5.0.0ubuntu20.10.04.2 [70.2kB]
        Get:3 http://security.ubuntu.com/ubuntu/ lucid-updates/main gzip 1.3.12-9ubuntu1.1 [102kB]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc-bin 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6-i686 2.11.1-0ubuntu7.2
        .........
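
    A 404 at this stage usually means the local package index is stale: the mirror has since replaced those exact .deb versions, so the URLs the cached lists point at no longer exist. The usual fix is simply to refresh the lists and retry:

        # Refresh the package index so URLs match what the mirror currently serves
        sudo aptitude update

        # Then retry the upgrade
        sudo aptitude safe-upgrade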

    Read the article

  • Partition/install issues

    - by jalal ahmad
    I am new to Ubuntu and tried to install 10.1 as a dual-boot option from a USB. At first, in the partition dialogue of the installation process, I encountered an error saying it cannot find the root directory. I searched the Ubuntu forums and followed the steps from one of the posts:

    1. Make sure that the partition you wish to install Linux, Ubuntu or Backtrack on uses the ext4, ext3 or ext2 file system, and not FAT32 or NTFS. Then mount / on it.
    2. During the installation process, press "change" on the partition you wish to use.
    3. Make sure "do not use this partition" is not chosen in the scroll list; scroll to ext4, ext3 or ext2.
    4. In the "mount" field, write /.
    5. Click OK. A message will appear saying something like "swap area was not defined, do you wish to continue or choose a swap area?". Click "ok" and continue, or click "go back", choose another partition, click "change", choose "swap" in the file system scroll list, and click "ok" and next.

    All good, but when I rebooted I could not find Windows Vista as a dual-boot option. Also, I could not see wireless networks, and while trying to find out what went wrong, the soft switch somehow turned off; since I cannot boot into Windows, I have no idea what to do. Searching the internet again, I found a post which said the dual-boot problem can be overcome by installing gparted, but when I tried, I got the message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Couldn't find package gparted

    I thought I would copy my stuff from the hard disk and try to install Windows, but I found that I now have partitions different from what I had before installing Ubuntu: filesystem partition 1 (119 GB, ext4), swap partition 5 (1.1 GB, swap), and extended partition 2 (1.1 GB). And I cannot mount the 119 GB partition where all my personal videos and photos are, if they are still there. Now I cannot even boot into Windows.

    Need help on what to do. The best-case scenario would be to copy my stuff before I mess up the system further, then get a dual-boot system. Failing that, how do I install Vista again? I have the Windows CD. Cheers guys, and thanks in advance.
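
    A cautious first step, from a live USB session, is to mount the large ext4 partition read-only and copy the data off before touching anything else (a sketch; the device name /dev/sda1 is a guess, so check the fdisk output first):

        # Identify the partitions (the 119 GB ext4 one is the data candidate)
        sudo fdisk -l

        # Mount it read-only and check the files are intact
        sudo mkdir -p /mnt/data
        sudo mount -o ro /dev/sda1 /mnt/data
        ls /mnt/data

        # "Couldn't find package gparted" usually just means the package lists
        # were never downloaded in this session; refresh them first
        sudo apt-get update
        sudo apt-get install gparted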

    Read the article

  • Network Services disabled (not starting) on Windows XP

    - by Rickesh John
    I am currently running Windows XP Service Pack 3 on my system. But today, when I failed to connect to the internet via a LAN cable, I realized that almost all of the vital network services had stopped functioning. Any attempt to start them through services.msc gives me the following message:

        Could not start the DNS Client Service on Local Computer
        Error 1068: The dependency service group failed to start

    All my software and services related to networking have stopped functioning; for example, Windows Firewall is turned off permanently, as are my Avast Anti-Virus Real Time Shields and Web Shield services. When I insert the LAN wire into my laptop, it registers itself, but this is what I get when I do a ping:

        C:\>ping localhost
        Unable to contact IP driver, error code 2

    Moreover, with ipconfig I get this:

        Windows IP Configuration
        An internal error occurred: The request is not supported.
        Please contact Microsoft Product Support Services for further help.
        Additional Information: Unable to query host name

    On some further poking around, I saw that none of the "NETWORK SERVICE" processes in Task Manager, except svchost.exe, were running. Also, when I first opened Task Manager, I saw some 20 processes running with the username column empty for most of them. With some searching on Google, I found out that these services are important:

    - DHCP Client
    - DNS Client
    - Net Logon
    - Network Connections
    - Network Location Awareness
    - TCP/IP NetBIOS Helper

    None of them, except Network Connections, are working; they do not start. The Event Viewer on my system shows a bunch of 7000 and 7001 event errors. I have tried reinstalling the network driver, booting in safe mode with networking, and trying to enable the services mentioned above. I had disabled System Restore some time back, so I have no restore points for my system. I tried a lot of things from Google searches but none of them worked. Also, with such a long list of issues, I am a little confused as to what I should search for on the internet. :(

    One more thing I would like to mention: the previous morning, my anti-virus Avast detected a rootkit buried deep in my system folders. It was removed, but maybe this problem was caused by the rootkit. I did run a boot-time scan but no viruses were found. Please please please advise. Is formatting and re-installation of Windows my only option?
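
    Given that a rootkit was just removed, one classic repair (a sketch, not a guaranteed fix) is to reset the Winsock catalog and TCP/IP stack, both of which rootkits frequently corrupt, and then recheck the failing dependency chain:

        REM Reset the Winsock catalog (rootkits often leave broken LSP entries here)
        netsh winsock reset

        REM Reset the TCP/IP stack; the log file name is arbitrary
        netsh int ip reset resetlog.txt

        REM After a reboot, check what the DNS Client service depends on
        sc qc dnscache

    If sc qc shows a dependency that no longer exists on the system, that missing piece is the direct cause of Error 1068.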

    Read the article

  • Load balancers, multiple data centers and url based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3.

    Within data center dc1:

    - There are two web servers, say WS1 and WS2. These two web servers do not currently share anything, and there is no foreseen need for more web servers within each dc.
    - dc1 also has a local load balancer which has been set up with session stickiness. So if a user, say u1, lands on dc1 and the load balancer decides to route his first request to WS1, then from there on all of u1's requests will get routed to WS1.
    - The local load balancer and web servers are invisible to the user. The local load balancer listens for traffic on a virtual IP which is assigned to the virtual cluster of web servers WS1 and WS2. The virtual IP is the IP to which the host name resolves in DNS.
    - There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2.

    Given the above, when dc2 is onboarded I want to route the traffic between dc1 and dc2 based on the client. The options that I have found so far are:

    1. Have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route them.
    2. Assign www.example.com and www1.example.com to the first dc, i.e. dc1, and assign www2.example.com to dc2. All requests will first get routed to dc1, where WS1 and WS2 will redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2.

    I need help with the following:

    - If I set up a global load balancer between dc1 and dc2, do I have any alternative solutions? That is, can a global load balancer route the traffic based on the URL (see the sketch below)?
    - Are there drawbacks to the subdomain-based solution compared to the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and the user will see that he is getting redirected to a different URL.
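
    On the first question: most layer-7 (HTTP-aware) load balancers can route on the URL path. As an illustration only (HAProxy syntax; the backend names and addresses are made up, and whether a given global load balancer product supports this is an assumption to verify):

        frontend www
            bind *:80
            acl is_client1 path_beg /client1
            acl is_client2 path_beg /client2
            use_backend dc1_servers if is_client1
            use_backend dc2_servers if is_client2
            default_backend dc1_servers

        backend dc1_servers
            server ws1 10.0.1.10:80 check
            server ws2 10.0.1.11:80 check

        backend dc2_servers
            server ws3 10.0.2.10:80 check

    Note that a purely DNS-based global load balancer cannot see the path at all, since DNS resolution happens before the HTTP request is made; path-based routing requires a proxy that actually terminates the HTTP connection.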

    Read the article

< Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >