Search Results

Search found 8850 results on 354 pages for 'libreoffice base'.


  • Entity Framework version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading, use projections (e.g. from c in context.Contacts select c, c.Addresses) or use the Include query builder method (Include(“Addresses”)). If there is multi-level hierarchical data, then to eager load all the relationships use Include query builder methods like customers.Include("Order.OrderDetail") to include the Order and OrderDetail collections, or customers.Include("Order.OrderDetail.Location") to include the Order, OrderDetail and Location collections with a single Include statement.
    ===========================================================================
    If the query uses joins, the Include() query builder method will be ignored; use nested queries instead. If the query does projections, the Include() query builder method will also be ignored. Use Address.ContactReference.Load() or Contact.Addresses.Load() if you need to deferred-load a specific entity; this will result in extra round trips to the database. ObjectQuery<> cannot return anonymous types; it will return an ObjectQuery<DbDataRecord>. Only the Include method can be added to LINQ query methods, while any LINQ query method can be added to query builder methods. If you need to append a query builder method (other than Include) after a LINQ method, cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the query builder method to it.
    ===========================================================================
    Query builder methods are Select, Where, Include: methods which take Entity SQL as parameters, e.g. "it.StartDate, it.EndDate". When query builder methods do a projection they return ObjectQuery<DbDataRecord>, so to iterate over this collection use contact.Item[“Name”].ToString(). When LINQ to Entities methods do a projection, they return a collection of anonymous types; the collection is therefore strongly typed and supports IntelliSense. The EF object context can track changes only on entities, not on anonymous types. If you use a DefiningQuery for an EntitySet then the EntitySet becomes read-only, since a DefiningQuery is the same as a view (which is treated as read-only by default). However, if you want to use this EntitySet for inserts/updates/deletes then you need to map stored procs (as created in the DB) to the insert/update/delete functions of the entity in the designer. You can use either the Execute method or the ToList() method to bind data to data sources/binding sources. If you use the Execute method, remember that you can traverse the ObjectResult<> collection (returned by Execute) only once. In WPF, use ObservableCollection to bind to data sources, for keeping track of changes and letting EF send updates to the DB automatically. Use extension methods to add logic to entities, e.g. create extension methods for the EntityObject class; or create a method in the ObjectContext partial class and pass the entity as a parameter, then call this method as desired from within each entity.
    ===========================================================================
    DefiningQueries and stored procedures: for custom entities, one can use a DefiningQuery or stored procedures; the custom entity collection will then be populated using the DefiningQuery (of the EntitySet) or the sproc. If you use a sproc to populate the EntityCollection then the query execution is immediate, the execution happens on the server side, and any filters will be applied in the client app.
    If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the EntitySet) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to an existing entity, then we first create the entity/EntitySet in the CSDL using the designer, then create a dummy entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy entity, use TSQL that does not return any results but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the entity created in the SSDL uses the SQL data types and not .NET data types. If you are unable to open the EDMX file in the designer, note the errors; they will give precise information on what is wrong. The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false"> <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities</CommandText></Function>, then map this Function to an existing entity. This is a quick way to get a custom entity which is a regular entity with renamed columns or additional (computed) columns. The disadvantage of using this is that it will return all the rows from the defining query, and any filter (if defined) will be applied only on the client side (after getting all the rows from the DB). If you use a DefiningQuery instead, it can be used as a composable query. The fourth option (for mapping READ stored proc results to a non-existent entity) is to create a view in the database which returns all the fields that the sproc also returns, then update the model so that it contains this view as an entity, and map the read sproc to this view entity. The other option would be to simply create the view and remove the sproc altogether.
    ===========================================================================
    To execute a sproc that does not return an entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return entities from code; the only way is to use SSDL function calls via EntityCommand. This changes with Entity Framework version 4, where you can return scalar types, complex types, entities or a non-query.
    ===========================================================================
    When a UDF is created as a Function in the SSDL, we need to set the Name and IsComposable properties for the Function element. IsComposable is always false for sprocs; for UDFs set this to true. You cannot call a UDF "Function" from within code, since you cannot import a UDF Function into the CSDL model (with version 1 of EF); only stored procedures can be imported and then mapped to an entity.
    ===========================================================================
    Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (insert, update and delete). Because Payment has an association to Reservation, we need to pass both the paymentId and the reservationId to the delete sproc, even though just the paymentId is the PK on the Payment table.
    ===========================================================================
    When mapping insert, update and delete procs to an entity, ensure that either all three or none are mapped.
    Further, if you have a base class and a derived class in the CSDL, then you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation, that base and derived entity methods must all be mapped, does not apply when you are mapping read stored procedures.
    ===========================================================================
    You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL entity directly in the designer during Function Import.
    ===========================================================================
    You can do entity splitting such that one entity maps to multiple tables in the DB. For example, the Customer entity currently derives from the Contact entity; in addition it also references the ContactPersonalInfo entity. One can copy all properties from the ContactPersonalInfo entity into the Customer entity and then delete the ContactPersonalInfo entity; finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table, ContactPersonalInfo, to the table mapping). This is called entity splitting. Now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables, even though you have a single entity called Customer in the CSDL.
    ===========================================================================
    There is table-per-type inheritance, where one EDM entity can derive from another EDM entity and absorb the inherited entity's properties. For example, in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact entity and absorbs all the properties of the Contact entity. Thus when you create a Customer entity in code and then call context.SaveChanges, the object context will first create the TSQL to insert into the Contact table, followed by TSQL to insert into the Customer table.
    ===========================================================================
    Then there is table-per-hierarchy inheritance, where different types are created based on a condition (similar to applying a condition to filter an entity to contain filtered records); the difference being that the filter condition populates a new entity type (derived from the base entity). In the BreakAway sample the example is the Lodging entity, which is an abstract entity, and the Resort and NonResort entities, which derive from the Lodging entity; records are filtered based on the value of the Resort boolean field.
    ===========================================================================
    Then there is table-per-concrete-type inheritance, where we create a concrete entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another entity for the OldReservation table, even though both tables contain the same number of fields.
    The OldReservation entity can then inherit from the Reservation entity, and the OldReservation entity can be configured to remove all scalar properties (since it inherits the properties from Reservation and filters based on the ReservationDate field).
    ===========================================================================
    Complex types (complex properties): entities in EF can also contain complex properties (in addition to scalar properties), and these complex properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use complex properties during databinding, they need scalar properties. So if an entity contains complex properties and you need to bind those to list server controls, use projections to return scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the object context and hence changes to the projected collections bound to controls cannot be persisted). ObjectDataSource and EntityDataSource do account for complex properties; one can bind entities with complex properties to data source controls and they will be tracked for changes, with no additional plumbing needed to persist changes to these collections bound to controls. So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a data source for entities that contain complex properties, so that changes made to the data source through the GridView can be persisted to the DB (enabling the controls for updates); if you cannot use the EntityDataSource you need to flatten the ComplexType properties using projections. With EF version 4, complex types are supported by the designer and you can add/remove/compose complex types directly using the designer.
    ===========================================================================
    Conditional mapping is like table-per-hierarchy inheritance, where entities inherit from a base class and conditions are then used to populate the EntitySet (called conditional mapping). Conditional mapping has limitations, since you can only use =, IS NULL and IS NOT NULL conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally, then use a QueryView (or possibly a DefiningQuery) to create a read-only entity. QueryViews are read-only by default; the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext, however the ObjectContext cannot create insert/update/delete TSQL statements for these entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the designer.
    ===========================================================================
    Difference between QueryView and DefiningQuery: a QueryView is defined in the mapping (MSL) file/section of the EDM XML, whereas a DefiningQuery is defined in the store schema (SSDL). A QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. A DefiningQuery is written using the database's language, i.e. TSQL or PSQL, so you have more control.
    ===========================================================================
    Performance: lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default.
    If you need to turn it off, use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method (short sketches of some of these APIs follow below).
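    As a quick illustration of the eager-loading and query-builder rules above, here is a minimal sketch; the BreakAwayEntities context and the Contact/Address property names are assumptions, not the post's actual sample code.

        using System;
        using System.Data.Objects;
        using System.Linq;

        class EagerLoadingSketch
        {
            static void Main()
            {
                using (var context = new BreakAwayEntities()) // hypothetical generated ObjectContext
                {
                    // Eager load the Addresses collection with the Include query builder method.
                    ObjectQuery<Contact> contacts = context.Contacts.Include("Addresses");

                    // LINQ methods can be appended after a query builder method...
                    IQueryable<Contact> filtered = contacts.Where(c => c.LastName.StartsWith("A"));

                    // ...but to append a query builder method (other than Include) after a LINQ
                    // method, cast the IQueryable<T> back to ObjectQuery<T> first.
                    var ordered = ((ObjectQuery<Contact>)filtered).OrderBy("it.LastName");

                    foreach (Contact contact in ordered)
                        Console.WriteLine("{0}: {1} addresses", contact.LastName, contact.Addresses.Count);
                }
            }
        }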
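    The inheritance and complex-type notes above translate into querying code roughly like the following sketch; the Lodging/Resort and Contact names come from the post, but the exact properties (including the HomeAddress complex property) are assumptions.

        using System;
        using System.Linq;

        class InheritanceAndComplexTypeSketch
        {
            static void Main()
            {
                using (var context = new BreakAwayEntities()) // hypothetical generated ObjectContext
                {
                    // Table-per-hierarchy: Resort and NonResort share the Lodgings set,
                    // discriminated by the Resort column; OfType filters to one derived type.
                    var resorts = context.Lodgings.OfType<Resort>().ToList();
                    Console.WriteLine("{0} resorts", resorts.Count);

                    // Complex properties: list server controls need scalars, so flatten the
                    // complex property with a projection. The projected results are anonymous
                    // types and are not change-tracked by the ObjectContext.
                    var flattened = (from c in context.Contacts
                                     select new
                                     {
                                         c.ContactID,
                                         Name = c.FirstName + " " + c.LastName,
                                         City = c.HomeAddress.City // assumed complex property
                                     }).ToList();

                    Console.WriteLine("{0} contacts projected", flattened.Count);
                }
            }
        }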
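    And a small sketch of the last two tips; the ContextOptions line is the EF version 4 behaviour described above, and the context and Customer shape are again placeholders.

        using System;
        using System.Data.Objects;
        using System.Linq;

        static class CustomerQueries
        {
            // Compiled once and reused on every call; the LINQ expression is translated
            // to a store query only the first time it executes.
            static readonly Func<BreakAwayEntities, string, IQueryable<Customer>> ByLastName =
                CompiledQuery.Compile<BreakAwayEntities, string, IQueryable<Customer>>(
                    (ctx, lastName) => ctx.Customers.Where(c => c.LastName == lastName));

            static void Run()
            {
                using (var context = new BreakAwayEntities())
                {
                    // EF4 only: lazy loading is on by default and can be switched off.
                    context.ContextOptions.LazyLoadingEnabled = false;

                    foreach (Customer customer in ByLastName(context, "Smith"))
                        Console.WriteLine(customer.FirstName);
                }
            }
        }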

    Read the article

  • Friday Fun: Omega Crisis

    - by Mysticgeek
    Friday is here once again and it’s time to play a fun flash game on company time. Today we take a look at the space shooter Omega Crisis. At the start of the game you’re given the game’s basic story (defending the space outpost) and instructions on how to play. Controls are easy: just target the enemy and use the left mouse button to fire. After each level you’re shown the results and how many Tech Points you’ve earned. The more Tech Points you earn, the better your chance of upgrading your weapons and base defense before the next level. You can also go into Manage Mode by hitting the Space bar, and select gunners and other types of weapons to help defend the outpost. Choose your mission from the timeline after successfully completing a mission. You can also use A, W, D, S to move around the map and see exactly where the enemy ships are coming from. This makes it easier to destroy them before they get too close to your base. This game is a lot of fun and is similar to various “Desktop Defense” type games. If you’re looking for a fun way to waste the afternoon, and not look at TPS reports, Omega Crisis can get you through until the whistle blows. Play Omega Crisis

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches! 1. Querying When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!     public static class ServiceExtensions     {         /// <summary>         /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>         /// and calling the base DomainService.Initialize(DomainServiceContext) method.         /// </summary>         /// <typeparam name="TService">The type of the service.</typeparam>         /// <param name="service">The service.</param>         /// <returns></returns>         public static TService Initialize<TService>(this TService service)             where TService : DomainService         {             var context = CreateDomainServiceContext();             service.Initialize(context);             return service;         }           private static DomainServiceContext CreateDomainServiceContext()         {             var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));             return new DomainServiceContext(provider, DomainOperationType.Query);         }           private static HttpContext GetHttpContext()         {             var context = HttpContext.Current;   #if DEBUG             // create a mock HttpContext to use during unit testing...             
if ( context == null )             {                 var writer = new StringWriter();                 var request = new SimpleWorkerRequest("/", "/",                     String.Empty, String.Empty, writer);                   context = new HttpContext(request)                 {                     User = new GenericPrincipal(new GenericIdentity("debug"), null)                 };             } #endif               return context;         }     }   With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():     public ActionResult Index()     {         var service = new NorthwindService().Initialize();         var customers = service.GetCustomers();           return View(customers);     } 2. Insert / Update / Delete Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with, was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.     [EnableClientAccess()]     public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>     {         public IQueryable<Customer> GetCustomers()         {             return this.DataContext.Customers;         }           public void InsertCustomer(Customer customer)         {             this.DataContext.Customers.InsertOnSubmit(customer);         }           public void UpdateCustomer(Customer currentCustomer)         {             this.DataContext.Customers.TryAttach(currentCustomer,                 this.ChangeSet.GetOriginal(currentCustomer));         }           public void DeleteCustomer(Customer customer)         {             this.DataContext.Customers.TryAttach(customer);             this.DataContext.Customers.DeleteOnSubmit(customer);         }     } Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that will check to see if the supplied entity is already attached before attempting to attach it, which would cause an error! 3. LinqToSqlRepository<T> Below is the code for the LinqToSqlRepository class.  The comments are pretty self explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensures that they will be ignored by the code generator and not available in the Silverlight client application.     /// <summary>     /// Provides generic repository methods on top of the standard     /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.     /// </summary>     /// <typeparam name="TContext">The type of the context.</typeparam>     public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>         where TContext : System.Data.Linq.DataContext, new()     {         /// <summary>         /// Retrieves an instance of an entity using it's unique identifier.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="keyValues">The key values.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class         {             var table = this.DataContext.GetTable<TEntity>();             var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));               var keys = mapping.RowType.IdentityMembers                 .Select((m, i) => m.Name + " = @" + i)                 .ToArray();               return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();         }           /// <summary>         /// Creates a new query that can be executed to retrieve a collection         /// of entities from the <see cref="DataContext"/>.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <returns></returns>         [IgnoreOperation]         public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class         {             return this.DataContext.GetTable<TEntity>();         }           /// <summary>         /// Inserts the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.InsertOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Insert);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity) where TEntity : class         {             return this.Update(entity, null);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <param name="original">The original.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity, TEntity original)             where TEntity : class         {             if ( original == null )             {                 original = GetOriginal(entity);             }               var table = this.DataContext.GetTable<TEntity>();             table.TryAttach(entity, original);               return this.Submit(entity, original, DomainOperation.Update);         }           /// <summary>         /// Deletes the specified entity.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.TryAttach(entity);             //table.DeleteOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Delete);         }           protected virtual bool Submit(Object entity, Object original, DomainOperation operation)         {             var entry = new ChangeSetEntry(0, entity, original, operation);             var changes = new ChangeSet(new ChangeSetEntry[] { entry });             return base.Submit(changes);         }           private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class         {             var context = CreateDataContext();             var table = context.GetTable<TEntity>();             return table.FirstOrDefault(e => e == entity);         }     } 4. Conclusion So there you have it, a fully functional Repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a Controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
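    As a usage sketch only: wiring the generic Update method into an MVC 2 controller action, reusing the post's own Initialize() extension; the direct model binding of Customer is an assumption.

        using System.Web.Mvc;

        public class CustomersController : Controller
        {
            [HttpPost]
            public ActionResult Edit(Customer model)
            {
                // Initialize() builds the DomainServiceContext the hosted RIA pipeline
                // would normally supply; Update() then drives the ChangeSet/Submit path.
                var service = new NorthwindService().Initialize();

                if (service.Update(model))
                    return RedirectToAction("Index");

                return View(model);
            }
        }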

    Read the article

  • Python — Time complexity of built-in functions versus manually-built functions in finite fields

    - by stackuser
    Generally, I'm wondering about the advantages versus disadvantages of using the built-in arithmetic functions versus rolling your own in Python. Specifically, I'm taking in GF(2) finite field polynomials in string format, converting to base 2 values, performing arithmetic, then outputting back into polynomials as string format. So a small example of this is in multiplication. Rolling my own:

        def multiply(a, b):
            bitsa = reversed("{0:b}".format(a))
            g = [(b << i) * int(bit) for i, bit in enumerate(bitsa)]
            return reduce(lambda x, y: x + y, g)

    Versus the built-in:

        def multiply(a, b):
            # a, b are GF(2) polynomials in binary form
            ....
            return a * b  # returns product of 2 polynomials in GF(2)

    Currently, operations like multiplicative inverse (with for example 20 bit exponents) take a long time to run in my program, as it's using all of Python's built-in mathematical operations like // floor division and % modulus, etc., as opposed to making my own division, remainder, etc. I'm wondering how much of a gain in efficiency and performance I can get by building these manually (as shown above). I realize the gains are dependent on how well the manual versions are built; that's not the question. I'd like to find out 'basically' how much advantage there is over the built-ins. So for instance, if multiplication (as in the example above) is well-suited for base 10 (decimal) arithmetic but has to jump through more hoops to change bases to binary and then even more hoops in operating (so it's lower efficiency), that's what I'm wondering. Like, I'm wondering if it's possible to bring the time down significantly by building them myself in ways that maybe some professionals here have already come across.

    Read the article

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, like Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured. My first thought on how to distribute the software is to create six Apt repositories/channels (not sure which would be better at this point) that will be accessed through SSH using keys, so if a customer doesn't renew their subscription we can disable their account:
    Beta - Used internally on test data to check the package for major defects.
    Internal - Used internally on live data to check the package for defects (dogfooding stage).
    External 1 - Deployed to 1% of our user base (randomly selected) to check for defects.
    External 9 - Deployed to 9% of our user base (randomly selected) to check for defects.
    External 90 - Deployed to the remaining 90% of users.
    Hosted - Deployed to the hosted environment.
    It will take a sign-off at each stage to move into the next repository in case problems are reported. My questions to the community are: Has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by Joel
    I cant install or uppdate anything on my system 12.04 I get the error... installArchives() failed: dpkg: error processing libqt4-xmlpatterns (--configure): libqt4-xmlpatterns:amd64 4:4.8.1-0ubuntu4.2 cannot be configured because libqt4-xmlpatterns:i386 is in a different version (4:4.8.1-0ubuntu4.3) dpkg: error processing libqt4-xmlpatterns:i386 (--configure): libqt4-xmlpatterns:i386 4:4.8.1-0ubuntu4.3 cannot be configured because libqt4-xmlpatterns:amd64 is in a different version (4:4.8.1-0ubuntu4.2) dpkg: dependency problems prevent configuration of libqt4-declarative:i386: libqt4-declarative:i386 depends on libqt4-xmlpatterns (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-xmlpatterns:i386 is not configured yet. dpkg: error processing libqt4-declarative:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-declarative: libqt4-declarative depends on libqt4-xmlpatterns (= 4:4.8.1-0ubuntu4.3); however: Version of libqt4-xmlpatterns on system is 4:4.8.1-0ubuntu4.2. dpkg: error processing libqt4-declarative (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqtgui4:i386: libqtgui4:i386 depends on libqt4-declarative (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-declarative:i386 is not configured yet. dpkg: error processing libqtgui4:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqtgui4: No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already libqtgui4 depends on libqt4-declarative (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-declarative is not configured yet. dpkg: error processing libqtgui4 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-designer: libqt4-designer depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-designer (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-designer:i386: libqt4-designer:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-designer:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-opengl: libqt4-opengl depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-opengl (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-opengl:i386: libqt4-opengl:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-opengl:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-qt3support: libqt4-qt3support depends on libqt4-designer (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-designer is not configured yet. 
libqt4-qt3support depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-qt3support (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-qt3support:i386: libqt4-qt3support:i386 depends on libqt4-designer (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-designer:i386 is not configured yet. libqt4-qt3support:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-qt3support:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-scripttools:i386: libqt4-scripttools:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-scripttools:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-svg: libqt4-svg depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-svg (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-svg:i386: libqt4-svg:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-svg:i386 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: libqt4-xmlpatterns libqt4-xmlpatterns:i386 libqt4-declarative:i386 libqt4-declarative libqtgui4:i386 libqtgui4 libqt4-designer libqt4-designer:i386 libqt4-opengl libqt4-opengl:i386 libqt4-qt3support libqt4-qt3support:i386 libqt4-scripttools:i386 libqt4-svg libqt4-svg:i386 Error in function: dpkg: dependency problems prevent configuration of libqt4-declarative: libqt4-declarative depends on libqt4-xmlpatterns (= 4:4.8.1-0ubuntu4.3); however: Version of libqt4-xmlpatterns on system is 4:4.8.1-0ubuntu4.2. dpkg: error processing libqt4-declarative (--configure): dependency problems - leaving unconfigured dpkg: error processing libqt4-xmlpatterns (--configure): libqt4-xmlpatterns:amd64 4:4.8.1-0ubuntu4.2 cannot be configured because libqt4-xmlpatterns:i386 is in a different version (4:4.8.1-0ubuntu4.3) dpkg: error processing libqt4-xmlpatterns:i386 (--configure): libqt4-xmlpatterns:i386 4:4.8.1-0ubuntu4.3 cannot be configured because libqt4-xmlpatterns:amd64 is in a different version (4:4.8.1-0ubuntu4.2) dpkg: dependency problems prevent configuration of libqtgui4: libqtgui4 depends on libqt4-declarative (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-declarative is not configured yet. dpkg: error processing libqtgui4 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-declarative:i386: libqt4-declarative:i386 depends on libqt4-xmlpatterns (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-xmlpatterns:i386 is not configured yet. dpkg: error processing libqt4-declarative:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-svg: libqt4-svg depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. 
dpkg: error processing libqt4-svg (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-opengl: libqt4-opengl depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-opengl (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-designer: libqt4-designer depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-designer (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-qt3support: libqt4-qt3support depends on libqt4-designer (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-designer is not configured yet. libqt4-qt3support depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4 is not configured yet. dpkg: error processing libqt4-qt3support (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqtgui4:i386: libqtgui4:i386 depends on libqt4-declarative (= 4:4.8.1-0ubuntu4.3); however: Package libqt4-declarative:i386 is not configured yet. dpkg: error processing libqtgui4:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-svg:i386: libqt4-svg:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-svg:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-opengl:i386: libqt4-opengl:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.3); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-opengl:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-designer:i386: joel@Joel-PC:~$ sudo apt-get install -f [sudo] password for joel: Läser paketlistor... Färdig Bygger beroendeträd Läser tillståndsinformation... Färdig Korrigerar beroenden.... Färdig Följande paket har installerats automatiskt och är inte längre nödvändiga: kde-l10n-sv language-pack-kde-sv-base language-pack-kde-zh-hans-base calligra-l10n-engb calligra-l10n-sv calligra-l10n-zhcn language-pack-kde-en kde-l10n-engb language-pack-kde-sv language-pack-zh-hans-base kde-l10n-zhcn language-pack-zh-hans language-pack-kde-zh-hans language-pack-kde-en-base Använd "apt-get autoremove" för att ta bort dem. Följande ytterligare paket kommer att installeras: libqt4-xmlpatterns Följande paket kommer att uppgraderas: libqt4-xmlpatterns 1 att uppgradera, 0 att nyinstallera, 0 att ta bort och 22 att inte uppgradera. 15 är inte helt installerade eller borttagna. Behöver hämta 0 B/1 033 kB arkiv. Efter denna åtgärd kommer ytterligare 0 B utrymme användas på disken. Vill du fortsätta [J/n]? 
J dpkg: fel vid hantering av libqt4-xmlpatterns (--configure): libqt4-xmlpatterns:amd64 4:4.8.1-0ubuntu4.2 cannot be configured because libqt4-xmlpatterns:i386 is in a different version (4:4.8.1-0ubuntu4.3) dpkg: fel vid hantering av libqt4-xmlpatterns:i386 (--configure): libqt4-xmlpatterns:i386 4:4.8.1-0ubuntu4.3 cannot be configured because libqt4-xmlpatterns:amd64 is in a different version (4:4.8.1-0ubuntu4.2) dpkg: beroendeproblem förhindrar konfigurering av libqt4-declarative:i386: libqt4-declarative:i386 är beroende av libqt4-xmlpatterns (= 4:4.8.1-0ubuntu4.3), men: Paketet libqt4-xmlpatterns:i386 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-declarative:i386 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-declarative: libqt4-declarative är beroende av libqt4-xmlpatterns (= 4:4.8.1-0ubuntu4.3), men: Versionen av libqt4-xmlpatterns på systemet är 4:4.8.1-0ubuntu4.2. dpkg: fel vid hantering av libqt4-declarative (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqtgui4:i386: libqtgui4:i386 är beroende av libqt4-declarative (= 4:4.8.1-0ubuntu4.3), men: Paketet libqt4-declarative:i386 har inte konfigurerats ännu. dpkg: fel vid hantering av libqtgui4:i386 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar koIngen apport-rapport skrevs därför att felmeddelandet indikerar att det är ett efterföljande fel från ett tidigare problem. Ingen apport-rapport skrevs därför att felmeddelandet indikerar att det är ett efterföljande fel från ett tidigare problem. Ingen apport-rapport skrevs därför att felmeddelandet indikerar att det är ett efterföljande fel från ett tidigare problem. Ingen apport-rapport skrevs därför att felmeddelandet indikerar att det är ett efterföljande fel från ett tidigare problem. Ingen apport-rapport skrevs därför att felmeddelandet indikerar att det är ett efterföljande fel från ett tidigare problem. Ingen apport-rapport skrevs därför att felmeddelandet indikerar att det är ett efterföljande fel från ett tidigare problem. nfigurering av libqtgui4: libqtgui4 är beroende av libqt4-declarative (= 4:4.8.1-0ubuntu4.3), men: Paketet libqt4-declarative har inte konfigurerats ännu. dpkg: fel vid hantering av libqtgui4 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-designer: libqt4-designer är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-designer (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-designer:i386: libqt4-designer:i386 är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4:i386 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-designer:i386 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-opengl: libqt4-opengl är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-opengl (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-opengl:i386: libqt4-opengl:i386 är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4:i386 har inte konfigurerats ännu. 
dpkg: fel vid hantering av libqt4-opengl:i386 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-qt3support: libqt4-qt3support är beroende av libqt4-designer (= 4:4.8.1-0ubuntu4.3), men: Paketet libqt4-designer har inte konfigurerats ännu. libqt4-qt3support är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-qt3support (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-qt3support:i386: libqt4-qt3support:i386 är beroende av libqt4-designer (= 4:4.8.1-0ubuntu4.3), men: Paketet libqt4-designer:i386 har inte konfigurerats ännu. libqt4-qt3support:i386 är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4:i386 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-qt3support:i386 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-scripttools:i386: libqt4-scripttools:i386 är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4:i386 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-scripttools:i386 (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-svg: libqt4-svg är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-svg (--configure): beroendeproblem - lämnar okonfigurerad dpkg: beroendeproblem förhindrar konfigurering av libqt4-svg:i386: libqt4-svg:i386 är beroende av libqtgui4 (= 4:4.8.1-0ubuntu4.3), men: Paketet libqtgui4:i386 har inte konfigurerats ännu. dpkg: fel vid hantering av libqt4-svg:i386 (--configure): beroendeproblem - lämnar okonfigurerad Fel uppstod vid hantering: libqt4-xmlpatterns libqt4-xmlpatterns:i386 libqt4-declarative:i386 libqt4-declarative libqtgui4:i386 libqtgui4 libqt4-designer libqt4-designer:i386 libqt4-opengl libqt4-opengl:i386 libqt4-qt3support libqt4-qt3support:i386 libqt4-scripttools:i386 libqt4-svg libqt4-svg:i386 E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Wire Framing WP7 Apps With Cacoo

    - by Tim Murphy
    While looking for a free alternative to SketchFlow I landed on the Cacoo web site.  Any developer who decides to use the free Visual Studio tools may find themselves doing the same search.  The base functionality of Cacoo is free, although there are certain features that have fees attached to them, such as extended stencils and templates. Cacoo doesn’t seem to have a template for WP7.  It does have templates for iOS and Android development, so I started with the Android template and started modifying it for WP7.  Funny thing is, since Android has the same hardware vendors as Windows Phone, the basic frame looks just right (I would swear I was looking at my Samsung Focus). Below is the start of a new mockup for the user group that I help run. I found that while Cacoo doesn’t have all the icons I need, I am able to insert them from the Windows Phone Toolkit folder.  If I put them off to the side, as you can see above, I can simply copy and paste them into the appropriate place as needed.  Beyond that I have customized the main frame so I have a base to work from.  In the future I intend to create this as a stencil, and if it looks good enough I would consider making it public. My use of this product is still in its early phase, but it seems like a great way to start.  Maybe if you use this to get going you can earn enough from your resulting apps to pay for something with more bells and whistles in the future. del.icio.us Tags: WP7,Windows Phone 7 development,design,Cacoo,wire frame

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe that DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." So the important thing here is "knowledge". Nobody ever said "every piece of code". I will try to give some examples of misusing the DRY principle.
    Code Repetitions by Coincidence
    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think about some lines of code where the developer thinks "I already wrote something similar". Then he takes the original code, puts it into a public method, even worse into a base class where none had been before, puts some weird arguments and some if or switch statements into it to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer can't even give a meaningful name, because its content isn't anything specific, it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. Fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say what'll happen, because you can't understand what the code is actually doing. So better don't touch it anymore. Maintainability just died. Of course this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing the single common place will have effects anywhere in the system, a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have had a system with a weak design. Now you get a system which just can't be maintained anymore. So what can you do against it? When making code reusable, always identify the generally reusable parts of it. Find the reason why the code is repeated, find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague; if you can't explain it or the explanation is too complicated, it's probably not worth reusing. If you identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific. If you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused.
    If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.
    Requirements Repetitions by Coincidence
    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change them on its own. (Assuming that this similarity is actually by coincidence and not by political membership; there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or, more likely, what happens if users of one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to detect requirements that are repeated only by coincidence, and then there is not much you can do against the repetition of the code. What you really should consider is making the coding of the rules as simple as possible. The independent knowledge "Tax Rules in Timbuktu", or wherever, should be as pure as possible, without much overhead and stuff that does not belong to it. That way you can write every independent requirement short and clean.
    DRYing try-catch and using Blocks
    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine complex exception handling, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it.

        try
        {
            // trivial code here
            using (Thingy thing = new Thingy())
            {
                // trivial, but always different line of code
            }
        }
        catch (FooException foo)
        {
            // trivial foo handling
        }
        catch (BarException bar)
        {
            // trivial bar handling
        }
        catch
        {
            // trivial common handling
        }
        finally
        {
            // trivial finally block
        }

    The key here is that every block is trivial, so there is nothing to just move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of Evil DRYing in this situation:
    - Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: the tight coupling of the formerly independent implementations is the strongest one, but the readability of the code and the use of a parameter to control the logic also suffer.
    - Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable, and the same maintainability problems apply as in any "Code Repetition by Coincidence" situation (sketched below, after the conclusion).
    - Enforce a base class on all the classes where this pattern appears and use the template method pattern. It's the same readability and maintainability problem as above, but additionally complex and tightly coupled because of the base class.
    I would call this "Inheritance by Coincidence", which will not lead to great software design. What can you do against it? Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If this were the case, it wouldn't be a problem at all; I assume it is not such a trivial case. Consider refactoring the error concept to make error handling easier. The last but not worst option is to keep the repetitions. Some patterns of code simply have to be kept consistent; there is nothing we can do against it, and there is no reason to make the code unreadable.
    Conclusion
    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily because of technical reasons; this requires quite a bit of flexibility and creativity to make the code simple and maintainable. It's not a problem of the principle, it is the problem of blindly applying a principle without understanding the problem it should solve. The result is mostly much worse than ignoring the principle.
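    For illustration only, here is a minimal sketch of the second "Evil DRYing" option above (wrapping the block in a method that takes a delegate), using the Thingy/FooException/BarException names from the article's own fragment; it compiles only if those types exist.

        using System;

        static class Guard
        {
            // The try/catch/using skeleton is centralized and the one varying line is
            // passed in as a delegate. The repetition is gone, but every call site is
            // now coupled to this method and the control flow is harder to read.
            public static void Run(Action<Thingy> body)
            {
                try
                {
                    using (Thingy thing = new Thingy())
                    {
                        body(thing);
                    }
                }
                catch (FooException) { /* trivial foo handling */ }
                catch (BarException) { /* trivial bar handling */ }
                catch { /* trivial common handling */ }
                finally { /* trivial finally block */ }
            }
        }

        // Call site: only the "always different line of code" remains, e.g.
        // Guard.Run(thing => thing.DoSomethingSpecific());  // hypothetical method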

    Read the article

  • Announcing the MOS WCI "Community"

    - by brian.harrison
    The WCI Technical Support team is pleased to announce the launch of the long-awaited WCI Support Community on My Oracle Support (MOS) "Community". Users can navigate to this "first stop" for WebCenter Interaction information by logging on and following this link: WCI Community (note that this requires a valid login credential for the My Oracle Support tool). In this community you'll find a product-related discussion forum moderated by Oracle WebCenter Interaction support engineers, recommended tips and tricks, links to knowledge base articles, and best practices for setting up and administering your environment. We hope you'll take a minute to have a look through the community. If you have a question about WebCenter Interaction, a comment or a suggestion regarding the content, please feel free to post it to the forum and someone will respond to your request. Think of the forum here as another way to communicate directly with the WCI Technical Support team for questions and answers on simple WCI support topics. The forum is moderated by WCI Technical Support engineers directly, and we hope it will help you avoid the need to log support incidents for less complex support-related questions. We encourage all of our customers, both internal and external, to participate in the forum discussions, sharing information, knowledge and best practices, in the effort to help us build a vital and vibrant "home base" for WCI users on the My Oracle Support tool. Thank you for visiting! The WebCenter Interaction Support Community Moderator Team

    Read the article

  • MySQL Connect Conference: My Experience

    - by Hema Sridharan
    It was a great experience to attend the MySQL Connect Conference for the first time ever. Personally I was very much enthralled to present "How to make MySQL Backups", besides attending different sessions to absorb more knowledge about the technical aspects of MySQL. One of the agenda items in my presentation was the functionality and features of MySQL Enterprise Backup. There were a total of 40 attendees in the session, who were very interested in the MySQL Enterprise Backup product and gave positive feedback as well as suggestions for areas of improvement in our product. Some of our features brought a lot of excitement and smiles amongst our customers, including:
    1. Performance improvements in MEB 3.8.0.
    2. The incremental base option from MEB 3.7.1, where there is no need to specify the directory name of the previous backup to fetch the lsn values; they can instead be fetched directly from the backup_history table using --incremental-base=history:last_backup.
    3. The only-innodb-with-frm option introduced in MEB 3.7 - a true online hot backup of InnoDB tables.
    I also attended a session on a similar topic, "MEB Best Practices", conducted by Sanjay Manwani, where he drilled down into all the features and best strategies of backup & restore. I also got an opportunity to attend other sessions, including:
    * Enabling the new generation of web and cloud services with MySQL 5.6 replication
    * Getting the most out of MySQL with MySQL Workbench
    * InnoDB compression for OLTP
    * Scaling for the Web and Cloud with MySQL replication
    Above all, I had some special moments at the conference, including meeting some of the executives and colleagues face to face for the first time. On the whole, the first MySQL Connect conference was a great success in terms of showcasing the features of our products, gathering direct feedback from customers and team building. We also had some applause-worthy moments when Tomas Ulin announced different releases including the MySQL 5.6 RC, the Connector/Python 1.0 and Connector/ODBC 5.2 releases, MySQL Cluster 7.3, additions to MySQL Enterprise Edition, etc.

    Read the article

  • Ruby on Rails - Belongs_to and Active Admin not creating foreign ID

    - by Milo
    I have the following setup:

        class Category < ActiveRecord::Base
          has_many :products
        end

        class Product < ActiveRecord::Base
          belongs_to :category
          has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "200x200>" }
          validates_attachment_content_type :photo, :content_type => /\Aimage\/.*\Z/
        end

        ActiveAdmin.register Product do
          permit_params :title, :price, :slideshow, :photo, :category

          form do |f|
            f.inputs "Product Details" do
              f.input :title
              f.input :category
              f.input :price
              f.input :slideshow
              f.input :photo, :required => false, :as => :file
            end
            f.actions
          end

          show do |ad|
            attributes_table do
              row :title
              row :category
              row :photo do
                image_tag(ad.photo.url(:medium))
              end
            end
          end

          index do
            column :id
            column :title
            column :category
            column :price do |product|
              number_to_currency product.price
            end
            actions
          end
        end

        class ProductController < ApplicationController
          def create
            @product = Product.create(params[:id])
          end
        end

    Every time I make a product item in ActiveAdmin the category comes up empty. It won't populate the column category_id for the product. It just leaves it empty. What am I doing wrong?

    Read the article

  • How to mark dependencies as solved?

    - by joo
    On my Ubuntu I needed to install a newer version of Erlang. Then I installed rabbitmq-server with

        dpkg --force-depends -i rabbitmq-server_2.1.1-1_all.deb

    and everything worked fine, till... Now I have the following problem when doing an apt-get install or upgrade:

        rabbitmq-server: Depends: erlang-base (>= 1:12.b.3) but it is not installable or
                                  erlang-base-hipe (>= 1:12.b.3) but it is not installable
                         Depends: erlang-ssl which is a virtual package. or
                                  erlang-nox (< 1:13.b-dfsg1-1) but it is not installable
                         Depends: erlang-os-mon which is a virtual package. or
                                  erlang-nox (< 1:13.b-dfsg1-1) but it is not installable
                         Depends: erlang-mnesia which is a virtual package. or
                                  erlang-nox (< 1:13.b-dfsg1-1) but it is not installable
                         Depends: erlang-inets which is a virtual package. or
                                  erlang-nox (< 1:13.b-dfsg1-1) but it is not installable
        Remove the following packages:
          rabbitmq-server
        Score is 121
        Accept this solution? [Y/n/q/?]

    What command tells apt to resolve dependencies without removing the package? Thanks a lot in advance...

    Read the article

  • Delete eth0 avahi from the ifconfig list

    - by sai
    Hello, this is the response I get from ifconfig. Now I have two eth0 entries being shown, and I need to delete the second one, which says eth0:avahi. I posted my ifconfig response on a site as I had problems using wired internet, and they suggested removing the eth0:avahi entry to get internet access. But I am a newbie to Linux networking and have no idea how to delete this. Response from ifconfig:

        eth0       Link encap:Ethernet  HWaddr 18:a9:05:22:cd:f9
                   UP BROADCAST MULTICAST  MTU:1500  Metric:1
                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:1000
                   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                   Interrupt:28 Base address:0x4000

        eth0:avahi Link encap:Ethernet  HWaddr 18:a9:05:22:cd:f9
                   inet addr:169.254.10.43  Bcast:169.254.255.255  Mask:255.255.0.0
                   UP BROADCAST MULTICAST  MTU:1500  Metric:1
                   Interrupt:28 Base address:0x4000

        lo         Link encap:Local Loopback
                   inet addr:127.0.0.1  Mask:255.0.0.0
                   inet6 addr: ::1/128 Scope:Host
                   UP LOOPBACK RUNNING  MTU:16436  Metric:1
                   RX packets:796 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:796 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:0
                   RX bytes:64016 (64.0 KB)  TX bytes:64016 (64.0 KB)

        wlan0      Link encap:Ethernet  HWaddr 00:26:82:3c:ac:27
                   inet6 addr: fe80::226:82ff:fe3c:ac27/64 Scope:Link
                   UP BROADCAST MULTICAST  MTU:1500  Metric:1
                   RX packets:52142 errors:0 dropped:0 overruns:0 frame:0
                   TX packets:30404 errors:0 dropped:0 overruns:0 carrier:0
                   collisions:0 txqueuelen:1000
                   RX bytes:60816983 (60.8 MB)  TX bytes:4160159 (4.1 MB)

    Read the article

  • How to turn on/off code modules?

    - by Safran Ali
    I am trying to run multiple sites using a single code base, and the code base consists of the following modules (i.e. classes):
    - User module
    - Q & A module
    - Faq module
    Each module follows the MVC pattern, i.e. it consists of:
    - Entity class
    - Helper class (i.e. static class)
    - View (i.e. pages and controls)
    Let's say I have two sites, site1.com and site2.com, and I am trying to achieve the following functionality: site1.com can have the User, Q & A and Faq modules up and running, while site2.com can have the User and Q & A modules live with the Faq module switched off - but it can be turned on if needed. So my query here is: what is the best way to achieve such functionality? Do I introduce a flag bit that I check on every page and control belonging to that module? It's more like a CMS where you can turn different features on and off. I am trying to get my head around it; please provide me with an example or point out if I am taking the wrong approach.
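    One way to read this requirement is as per-site feature toggles checked in a single central place rather than on every page and control. The sketch below is only an illustration: the names ModuleSettings and IsEnabled are invented for the example, and the flags could just as well live in a database or per-site config file.

        using System;
        using System.Collections.Generic;

        // Hypothetical per-site module registry; names are illustrative only.
        class ModuleSettings
        {
            private readonly Dictionary<string, bool> _flags;

            public ModuleSettings(Dictionary<string, bool> flags)
            {
                _flags = flags;
            }

            public bool IsEnabled(string module)
            {
                bool enabled;
                return _flags.TryGetValue(module, out enabled) && enabled;
            }
        }

        class Program
        {
            static void Main()
            {
                // site2.com: User and Q&A switched on, Faq off but switchable later.
                var site2 = new ModuleSettings(new Dictionary<string, bool>
                {
                    { "User", true },
                    { "QandA", true },
                    { "Faq", false }
                });

                // One central check (e.g. in a base page, route filter or menu
                // builder) replaces a flag test on every page and control.
                Console.WriteLine(site2.IsEnabled("Faq") ? "render Faq" : "Faq switched off");
            }
        }

    Pages and controls then ask the registry once, for example from a base page or navigation builder, instead of each carrying its own flag check.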

    Read the article

  • Character progression through leveling, skills or items?

    - by Anton
    I'm working on a design for an RPG game, and I'm having some doubts about the skill and level system. I'm going for a more casual, explorative gaming experience and so thought about lowering the game complexity by simplifying character progression. But I'm having trouble deciding between the following:
    1. Progression through leveling: no complex skill progression, leveling increases base stats.
    2. Progression through skills: no leveling or base stat changes, skills progress through usage.
    3. Progression through items: more focus on stat-changing items, items confer skills, no leveling.
    However, I'm uncertain what the effects on gameplay might be in the end. So, my question is this: What would be the effects of choosing one of the above alternatives over the others? (Particularly with regards to the style and feel of the gameplay.) My take on it is that the first sacrifices more frequent rewards and customization in favor of a simpler gameplay; the second sacrifices explicit customization and player control in favor of more frequent rewards and a somewhat simpler gameplay; while the third sacrifices inventory simplicity and a player metric in favor of player control, customization and progression simplicity. Addendum: I'm not really limiting myself to the above three, they are just the ones I liked most and am primarily interested in.
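    To make the first alternative concrete, here is a tiny sketch (the class name, stat names and numbers are all invented for illustration) in which leveling is the only progression mechanism and base stats are a pure function of level:

        using System;

        // Option 1: no skill trees or stat-granting items; base stats derive from level.
        class Character
        {
            public int Level { get; private set; }

            public Character() { Level = 1; }

            // Invented formulas purely for illustration.
            public int Strength { get { return 5 + 2 * Level; } }
            public int Vitality { get { return 10 + 3 * Level; } }

            public void LevelUp() { Level++; }
        }

        class Demo
        {
            static void Main()
            {
                var hero = new Character();
                hero.LevelUp();
                Console.WriteLine("Level {0}: STR {1}, VIT {2}", hero.Level, hero.Strength, hero.Vitality);
            }
        }

    The second and third alternatives would move the source of these numbers from Level to per-skill usage counters or to the currently equipped items, which is essentially the trade-off described above.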

    Read the article

  • Debugging/Logging Techniques for End Users

    - by James Burgess
    I searched a bit, but didn't find anything particularly pertinent to my problem - so please do excuse me if I missed something! A few months back I inherited the source to a fairly popular indie game project and have been working, along with another developer, on the code base. We recently made our first release since taking over the development, but we're a little stuck. A few users are experiencing slowdowns/lagging in the current version, as compared to the previous version, and we are not able to reproduce these issues in any of our various development environments (debug, release, different OSes, different machines, etc.). What I'd like to know is: how can we go about implementing some form of logging/debugging mechanism in the game that users can enable, and whose reports they can send to us for examination? We're not able to distribute debug binaries using the MSVS 2010 runtimes, due to the licensing - and wouldn't want to, for a variety of reasons. We'd really like to get to the bottom of this issue, even if just to find out it's nothing to do with our code base but everything to do with their system configuration. At the moment, we just have no leads - and the community isn't a very technically savvy one, so we're unable to rely on 'expert' bug reports or investigations. I've seen this kind of debug logging mechanism used in other applications and games for everything from logging simple errors to crash dumps. We're really at a loss at this stage as to how to address these issues, having been over every commit to the repository from the previous to the current version and not finding any real issues.
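    One approach worth sketching is an opt-in diagnostic log: off by default, enabled by the user via a switch, writing timestamped frame timings to a plain text file the user can attach to a bug report. The sketch below is purely illustrative - the project in the question is C++, but the idea is language-neutral and is shown in C# for brevity; every name here is invented.

        using System;
        using System.Diagnostics;
        using System.IO;

        // Nothing is written unless the player runs the game with "--log".
        class DiagnosticLog
        {
            private readonly StreamWriter _writer;

            public DiagnosticLog(string path)
            {
                _writer = new StreamWriter(path, true) { AutoFlush = true };
            }

            public void Sample(string label, long milliseconds)
            {
                _writer.WriteLine("{0:o}\t{1}\t{2} ms", DateTime.UtcNow, label, milliseconds);
            }
        }

        class Game
        {
            static void Main(string[] args)
            {
                DiagnosticLog log = Array.IndexOf(args, "--log") >= 0
                    ? new DiagnosticLog("frame-times.log")
                    : null;

                var frameTimer = Stopwatch.StartNew();
                // ... one iteration of the game loop would run here ...
                frameTimer.Stop();

                if (log != null)
                {
                    log.Sample("frame", frameTimer.ElapsedMilliseconds);
                }
            }
        }

    Because the user opts in explicitly, release builds stay clean and only the affected players pay the logging cost.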

    Read the article

  • RUEI 12.1.0.3.0 dependency requirement for php-soap-5.1.6

    - by sthieme
    Dear Readers, please be aware of the new php-soap-5.1.6 dependency in RUEI 12.1.0.3. For a swift upgrade to RUEI 12.1.0.3 you should be aware of this prerequisite, as it can be a time-eater to obtain individual rpm packages inside a datacenter for an old OS revision once you have started the upgrade process. You may use the following procedure to retrieve the required package via http://public-yum.oracle.com: customers will have to check /etc/issue and /etc/issue.net (or /etc/redhat-release for RHEL-based OS) for their current release in order to obtain the fitting package version. Customers of OEL can download the packages from our public-yum.oracle.com server: http://public-yum.oracle.com/repo/, e.g. http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm. Earlier releases (up to 5.5) are located under the EnterpriseLinux path instead of the OracleLinux path, e.g. http://public-yum.oracle.com/repo/EnterpriseLinux/EL5/5/base/x86_64/php-soap-5.1.6-27.el5.x86_64.rpm. Note: you will have to obtain the relevant RedHat rpm packages via the login-protected RHN URLs. Oracle can only provide support for Oracle Enterprise Linux, and RHEL packages are not publicly available via rpm-seek.com to my knowledge. Kind regards, Stefan

    Read the article

  • Getting Started with TypeScript – Classes, Static Types and Interfaces

    - by dwahlin
    I had the opportunity to speak on different JavaScript topics at DevConnections in Las Vegas this fall and heard a lot of interesting comments about JavaScript as I talked with people. The most frequent comment I heard from people was, “I guess it’s time to start learning JavaScript”. Yep – if you don’t already know JavaScript then it’s time to learn it. As HTML5 becomes more and more popular the amount of JavaScript code written will definitely increase. After all, many of the HTML5 features available in browsers have little to do with “tags” and more to do with JavaScript (web workers, web sockets, canvas, local storage, etc.). As the amount of JavaScript code being used in applications increases, it’s more important than ever to structure the code in a way that’s maintainable and easy to debug. While JavaScript patterns can certainly be used (check out my previous posts on the subject or my course on Pluralsight.com), several alternatives have come onto the scene such as CoffeeScript, Dart and TypeScript. In this post I’ll describe some of the features TypeScript offers and the benefits that they can potentially offer enterprise-scale JavaScript applications. It’s important to note that while TypeScript has several great features, it’s definitely not for everyone or every project especially given how new it is. The goal of this post isn’t to convince you to use TypeScript instead of standard JavaScript….I’m a big fan of JavaScript. Instead, I’ll present several TypeScript features and let you make the decision as to whether TypeScript is a good fit for your applications. TypeScript Overview Here’s the official definition of TypeScript from the http://typescriptlang.org site: “TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.” TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool or by using a variety of editors/tools. TypeScript offers several key features. First, it provides built-in type support meaning that you define variables and function parameters as being “string”, “number”, “bool”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting class and module definitions and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Visual Studio, Sublime Text, Emacs, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at http://www.typescriptlang.org/#Download. 
TypeScript can also be used with existing JavaScript frameworks such as Node.js, jQuery, and others and even catch type issues and provide enhanced code help. Special “declaration” files that have a d.ts extension are available for Node.js, jQuery, and other libraries out-of-the-box. Visit http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#samples%2fjquery%2fjquery.d.ts for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn’t passed to a parameter that expects a number. Although declaration files certainly aren’t required, TypeScript’s support for declaration files makes it easier to catch issues upfront while working with existing libraries such as jQuery. In the future I expect TypeScript declaration files will be released for different HTML5 APIs such as canvas, local storage, and others as well as some of the more popular JavaScript libraries and frameworks.

Getting Started with TypeScript

To get started learning TypeScript visit the TypeScript Playground available at http://www.typescriptlang.org. Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it’s compiled.

One of the first things that may stand out to you about the code shown in the playground is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container which helps tremendously with re-use and maintainability especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy especially if you come from an object-oriented programming background. An example of the Greeter class shown in the TypeScript Playground is shown next:

    class Greeter {
        greeting: string;
        constructor (message: string) {
            this.greeting = message;
        }
        greet() {
            return "Hello, " + this.greeting;
        }
    }

Looking through the code you’ll notice that static types can be defined on variables and parameters such as greeting: string, that constructors can be defined, and that functions can be defined such as greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront before even running the code. Many types are supported including primitive types like string, number, bool, undefined, and null as well as object literals and more complex types such as HTMLInputElement (for an <input> tag). Custom types can be defined as well. The JavaScript output by compiling the TypeScript Greeter class (using an editor like Visual Studio, Sublime Text, or the tsc.exe compiler) is shown next:

    var Greeter = (function () {
        function Greeter(message) {
            this.greeting = message;
        }
        Greeter.prototype.greet = function () {
            return "Hello, " + this.greeting;
        };
        return Greeter;
    })();

Notice that the code is using JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to take the variables and functions out of the global JavaScript scope. This is an important feature that helps avoid naming collisions between variables and functions.
In cases where you’d like to wrap a class in a naming container (similar to a namespace in C# or a package in Java) you can use TypeScript’s module keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.

    module AcmeCorp {
        export class Greeter {
            greeting: string;
            constructor (message: string) {
                this.greeting = message;
            }
            greet() {
                return "Hello, " + this.greeting;
            }
        }
    }

    var greeter = new AcmeCorp.Greeter("world");

In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript’s extends keyword. The following code shows an example of using inheritance to define two report objects:

    class Report {
        name: string;
        constructor (name: string) {
            this.name = name;
        }
        print() {
            alert("Report: " + this.name);
        }
    }

    class FinanceReport extends Report {
        constructor (name: string) {
            super(name);
        }
        print() {
            alert("Finance Report: " + this.name);
        }
        getLineItems() {
            alert("5 line items");
        }
    }

    var report = new FinanceReport("Month's Sales");
    report.print();
    report.getLineItems();

In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript’s extends keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class’s print() method and adds its own getLineItems() function. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the super() call.

TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. The following code shows an example of an interface named Thing (from the TypeScript samples) and a class named Plane that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.

    interface Thing {
        intersect: (ray: Ray) => Intersection;
        normal: (pos: Vector) => Vector;
        surface: Surface;
    }

    class Plane implements Thing {
        normal: (pos: Vector) => Vector;
        intersect: (ray: Ray) => Intersection;
        constructor (norm: Vector, offset: number, public surface: Surface) {
            this.normal = function (pos: Vector) {
                return norm;
            }
            this.intersect = function (ray: Ray): Intersection {
                var denom = Vector.dot(norm, ray.dir);
                if (denom > 0) {
                    return null;
                } else {
                    var dist = (Vector.dot(norm, ray.start) + offset) / (-denom);
                    return { thing: this, ray: ray, dist: dist };
                }
            }
        }
    }

At first glance it doesn’t appear that the surface member is implemented in Plane but it’s actually included automatically due to the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without having to explicitly write the code as with normal and intersect. TypeScript has additional language features but defining static types and creating classes, modules, and interfaces are some of the key features it offers. So is TypeScript right for you and your applications? That’s not a question that I or anyone else can answer for you. You’ll need to give it a spin to see what you think.
In future posts I’ll discuss additional details about TypeScript and how it can be used with enterprise-scale JavaScript applications. In the meantime, I’m in the process of working with John Papa on a new TypeScript course for Pluralsight that we hope to have out in December of 2012.

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I've realized that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (but often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I really realized that it's not possible to refactor that application's modules in a reasonable time (say, 24/26 hours, compared with the available time for the task, which is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... and I'm sure I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent const modifier usage. I would say that every source line was written quite randomly when needed, without keeping some design in mind (at least, an obvious one). (Am I in hell?) The problem arises because the application was developed by a colleague of mine. I felt very frustrated. So, I decided to expose the "situation" to my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation" .... Am I asking too much of my colleague in expecting readable code that is maintained and documented? Do I expect too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • Algorithmically generating neon layers on pixel grid

    - by user190929
    In an attempt at a screensaver I am making, I am a fan of neon-like graphics, which, of course, look great against a black background. As I understand it, neon, graphically speaking, is essentially a gradient of a color, brightest in the center, getting darker proceeding outward. A more accurate description is similar, but separates it into tubes and glow. The tubes are mostly white, while the glow is where most of the color is seen. Well... the tubes could also be a light variant of the color, you could say. The glow is darker. Anyhow, my question is: how could you generate such things given an initial pattern of pixels that would be the tubes? For example, let's say I want to make a neon 'H'. I can, via the libraries, obtain the rectangles of pixels which represent it, but I want to make it look neonized. How could I algorithmically achieve such an effect given a base tube shape and base color?

    EDIT: OK, I misstated that. Got a bit distracted. My purpose for this was similar to a neon effect, but not quite. Sorry about that. What I am looking for is something like this. Start with a pattern of pixels:

        [!][!][!][!][!][!][!][!]
        [!][!][O][!][!][!][!][!]
        [!][!][O][O][!][!][!][!]
        [!][!][!][!][O][!][!][!]
        [!][!][!][!][!][!][!][!]

    How do I find the E pixels?

        [!][E][E][E][!][!][!][!]
        [!][E][O][E][E][!][!][!]
        [!][E][O][O][E][E][!][!]
        [!][E][E][E][O][E][!][!]
        [!][!][!][E][E][E][!][!]

    Sorry if that looks bad.
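    What the two grids describe is one step of morphological dilation: every empty cell that touches an 'O' cell (including diagonals) becomes an 'E'. A minimal sketch of that step is below (the class and method names are invented for the example); repeating it with progressively darker shades would build up the outward glow falloff.

        using System;

        class NeonOutline
        {
            // Marks every '!' cell that has an 'O' neighbour (8-connected) as 'E'.
            static char[,] Outline(char[,] grid)
            {
                int rows = grid.GetLength(0), cols = grid.GetLength(1);
                var result = (char[,])grid.Clone();

                for (int r = 0; r < rows; r++)
                {
                    for (int c = 0; c < cols; c++)
                    {
                        if (grid[r, c] != '!') continue;

                        for (int dr = -1; dr <= 1 && result[r, c] == '!'; dr++)
                        {
                            for (int dc = -1; dc <= 1; dc++)
                            {
                                int nr = r + dr, nc = c + dc;
                                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr, nc] == 'O')
                                {
                                    result[r, c] = 'E';
                                    break;
                                }
                            }
                        }
                    }
                }
                return result;
            }

            static void Main()
            {
                char[,] grid =
                {
                    { '!', '!', '!', '!', '!', '!', '!', '!' },
                    { '!', '!', 'O', '!', '!', '!', '!', '!' },
                    { '!', '!', 'O', 'O', '!', '!', '!', '!' },
                    { '!', '!', '!', '!', 'O', '!', '!', '!' },
                    { '!', '!', '!', '!', '!', '!', '!', '!' }
                };

                char[,] outlined = Outline(grid);
                for (int r = 0; r < outlined.GetLength(0); r++)
                {
                    for (int c = 0; c < outlined.GetLength(1); c++) Console.Write("[" + outlined[r, c] + "]");
                    Console.WriteLine();
                }
            }
        }

    Run on the 8x5 example above, this prints the second grid from the question.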

    Read the article

  • How to build MVC Views that work with polymorphic domain model design?

    - by Johann de Swardt
    This is more of a "how would you do it" type of question. The application I'm working on is an ASP.NET MVC4 app using Razor syntax. I've got a nice domain model which has a few polymorphic classes - awesome to work with in the code - but I have a few questions regarding the MVC front-end. Views are easy to build for normal classes, but when it comes to the polymorphic ones I'm stuck on deciding how to implement them. One (ugly) option is to build a page which handles the base type (e.g. IContract) and has a bunch of if statements to check whether we passed in an IServiceContract or an ISupplyContract instance. Not pretty and very nasty to maintain. Another option is to build a view for each of these IContract child classes, breaking DRY principles completely. I don't like doing this for obvious reasons. Another option (also not great) is to split the view into chunks with partials and build partial views for each of the child types that are loaded into the main view for the base type, then deciding whether to show or hide each partial in a single if statement in the partial. Also messy. I've also been thinking about building a master page with sections for the fields that only occur in subclasses, and building views for each subclass referencing the master page. This looks like the least problematic solution: it will allow for fairly simple maintenance and it doesn't involve code duplication. What are your thoughts? Am I missing something obvious that will make our lives easier? Suggestions?
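    One variation on the partial-view idea that avoids the if chains is to resolve the view from the model's runtime type, so each IContract subclass gets its own small view while the shared fields stay in a layout or common partial. The sketch below is only illustrative: IContract, the subclasses and the in-memory store are stand-ins, and it assumes per-type view files named after the concrete classes (ServiceContract.cshtml, SupplyContract.cshtml) in an ASP.NET MVC project.

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Illustrative stand-ins; the real domain model and repository differ.
        public interface IContract { int Id { get; } }
        public class ServiceContract : IContract { public int Id { get; set; } }
        public class SupplyContract : IContract { public int Id { get; set; } }

        public class ContractsController : Controller
        {
            // Hypothetical in-memory store standing in for the real repository.
            private static readonly Dictionary<int, IContract> Store =
                new Dictionary<int, IContract>
                {
                    { 1, new ServiceContract { Id = 1 } },
                    { 2, new SupplyContract { Id = 2 } }
                };

            public ActionResult Details(int id)
            {
                IContract contract = Store[id];

                // One small view per concrete type (ServiceContract.cshtml,
                // SupplyContract.cshtml); the action never branches on the subtype.
                return View(contract.GetType().Name, contract);
            }
        }

    The duplication is then limited to the subtype-specific fields, since everything shared can sit in the layout or a common partial referenced by each per-type view.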

    Read the article

  • OO Design, how to model Tonal Harmony?

    - by David
    I have started to write a program in C++11 that would analyse chords, scales, and harmony. The biggest problem I am having in my design phase is that the note 'C' is a note, a type of chord (Cmaj, Cmin, C7, etc.), and a type of key (the key of C major, C minor). The same issue arises with intervals (minor 3rd, major 3rd). I am using a base class, Token, that is the base class for all 'symbols' in the program. So, for example:

        class Token {
        public:
            typedef shared_ptr<Token> pointer_type;
            Token() {}
            virtual ~Token() {}
        };

        class Command : public Token {
        public:
            Command() {}
            pointer_type execute();
        };

        class Note : public Token;
        class Triad : public Token;
        class MajorTriad : public Triad;   // CMajorTriad, etc.
        class Key : public Token;
        class MinorKey : public Key;       // Natural minor, harmonic minor, etc.
        class Scale : public Token;

    As you can see, creating all the derived classes (CMajorTriad, C, CMajorScale, CMajorKey, etc.) would quickly become ridiculously complex, including all the other notes as well as enharmonics. Multiple inheritance would not work, i.e.:

        class C : public Note, Triad, Key, Scale

    Class C cannot be all of these things at the same time. It is contextual, and polymorphism will not work here either (how would you determine which of the super methods to perform? calling every superclass constructor should not happen here). Are there any design ideas or suggestions that people have to offer? I have not been able to find anything on Google in regards to modelling tonal harmony from an OO perspective. There are just far too many relationships between all the concepts here.
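    One commonly suggested direction for this kind of model is composition over inheritance: 'C' stops being a class and becomes a pitch-class value that chords, keys and scales refer to. The sketch below only illustrates that idea (shown in C# for brevity, though it maps directly onto C++11 enums and classes); all names are invented for the example.

        using System;

        // 'C' is a value, not a type; chords and keys are composed from it.
        enum PitchClass { C, CSharp, D, DSharp, E, F, FSharp, G, GSharp, A, ASharp, B }

        enum TriadQuality { Major, Minor, Diminished, Augmented }

        class Triad
        {
            public PitchClass Root { get; private set; }
            public TriadQuality Quality { get; private set; }
            public Triad(PitchClass root, TriadQuality quality) { Root = root; Quality = quality; }
        }

        class Key
        {
            public PitchClass Tonic { get; private set; }
            public bool IsMinor { get; private set; }
            public Key(PitchClass tonic, bool isMinor) { Tonic = tonic; IsMinor = isMinor; }
        }

        class Demo
        {
            static void Main()
            {
                var cMajorTriad = new Triad(PitchClass.C, TriadQuality.Major);
                var cMajorKey = new Key(PitchClass.C, isMinor: false);
                Console.WriteLine("{0} {1} in the key of {2}",
                    cMajorTriad.Root, cMajorTriad.Quality, cMajorKey.Tonic);
            }
        }

    A "CMajorTriad" is then just new Triad(PitchClass.C, TriadQuality.Major) rather than a class of its own, which keeps the number of types constant while the number of possible values grows.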

    Read the article

  • 3ds Max error dialog: "Instancing not supported for this action"

    - by monsto
    "Instancing not supported for this action” is the dialog I get. My favorite part is that, according to google and yahoo, apparently i am the only person in the history of mankind to experience these words together in this order, let along get this message from Max. Thanks, autodesk, for putting this dialog in special for me! So I’ve created my model (nws) and was setting up a Skin Wrap. Selected "Face Deformation", added the base-skin for weight, checked “weight all points”. . . clicked “convert to skin” and got that dialog. My model doesn’t have a whole lot of elements to it, I had a left and right appendage that came from a base model (skyrim). so, i did a clonecopy of all 3 of my elements, just to be sure nothing was instanced… and VOILA! Same error message. the only other elements are an imported NIF mesh and skeleton. Any idea where this is coming from or how I can make it go away so that I can export my mesh?

    Read the article

  • How to fix E: Internal Error, No file name for libc6

    - by Loren Ramly
    How do I fix "E: Internal Error, No file name for libc6"? The error shows up if I do:

        $ sudo apt-get upgrade

    or

        $ sudo apt-get install package

    This is an example:

        $ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          ginn hplip hplip-data libdrm-dev libdrm-intel1 libdrm-nouveau1a libdrm-radeon1
          libdrm2 libgrip0 libhpmud0 libkms1 libsane-hpaio libunity-2d-private0
          libunity-core-5.0-5 linux-generic-pae linux-headers-generic-pae
          linux-image-generic-pae printer-driver-hpcups printer-driver-hpijs unity
          unity-2d-common unity-2d-panel unity-2d-shell unity-2d-spread unity-common
          unity-services
        The following packages will be upgraded:
          alsa-base firefox firefox-globalmenu firefox-gnome-support firefox-locale-en
          icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-7-jre-jamvm libdbus-glib-1-2
          libdbus-glib-1-dev libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27
          libssl-dev libssl-doc libssl1.0.0 linux-sound-base openjdk-6-jre
          openjdk-6-jre-headless openjdk-6-jre-lib openjdk-7-jdk openjdk-7-jre
          openjdk-7-jre-headless openjdk-7-jre-lib openssl sudo
        27 upgraded, 0 newly installed, 0 to remove and 26 not upgraded.
        3 not fully installed or removed.
        Need to get 0 B/126 MB of archives.
        After this operation, 3,072 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        E: Internal Error, No file name for libc6

    I have followed the instructions from here, "E: Internal Error, No file name for libssl1.0.0", which say to do:

        sudo apt-get update
        sudo apt-get clean
        sudo apt-get install -fy
        sudo dpkg -i /var/cache/apt/archives/*.deb
        sudo dpkg --configure -a
        sudo apt-get install -fy
        sudo apt-get dist-upgrade

    But I am stuck with the same error, "E: Internal Error, No file name for libc6", when running sudo apt-get install -fy. I've been looking on Google, but have not been successful so far. Thanks.

    Read the article

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, à la Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured. My first thought on how to distribute the software is to create six Apt repositories/channels (not sure which would be better at this point) that will be accessed through SSH using keys, so that if a customer doesn't renew their subscription we can disable their account:
    - Beta - used internally on test data to check the package for major defects.
    - Internal - used internally on live data to check the package for defects (dogfooding stage).
    - External 1 - deployed to 1% of our user base (randomly selected) to check for defects.
    - External 9 - deployed to 9% of our user base (randomly selected) to check for defects.
    - External 90 - deployed to the remaining 90% of users.
    - Hosted - deployed to the hosted environment.
    Moving into the next repository will require a sign-off at each stage, in case problems are reported. My questions to the community are: Has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >