Search Results

Search found 5671 results on 227 pages for 'sub tuts'.


  • Can't log in to GNOME after upgrade (raring -> saucy)

    - by x-yuri
    I've just upgraded my ubuntu (raring to saucy) and I now can't log in to GNOME. As opposed to virtual consoles (Ctrl-Alt-F1, for example). I set it up to log in automatically. But it asks for password now. I type in the password, press Enter, the screen blinks and here I am again at the login screen. Then I looked into /var/log/Xorg.0.log: [ 33.956] Initializing built-in extension DRI2 [ 33.956] (II) LoadModule: "glx" [ 33.956] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 33.956] (II) Module glx: vendor="X.Org Foundation" [ 33.956] compiled for 1.14.3, module version = 1.0.0 [ 33.956] ABI class: X.Org Server Extension, version 7.0 [ 33.956] (==) AIGLX enabled [ 33.956] Loading extension GLX [ 33.956] (==) Matched fglrx as autoconfigured driver 0 [ 33.956] (==) Matched ati as autoconfigured driver 1 [ 33.956] (==) Matched fglrx as autoconfigured driver 2 [ 33.956] (==) Matched ati as autoconfigured driver 3 [ 33.956] (==) Matched vesa as autoconfigured driver 4 [ 33.956] (==) Matched modesetting as autoconfigured driver 5 [ 33.956] (==) Matched fbdev as autoconfigured driver 6 [ 33.956] (==) Assigned the driver to the xf86ConfigLayout [ 33.956] (II) LoadModule: "fglrx" [ 33.957] (WW) Warning, couldn't open module fglrx [ 33.957] (II) UnloadModule: "fglrx" [ 33.957] (II) Unloading fglrx [ 33.957] (EE) Failed to load module "fglrx" (module does not exist, 0) [ 33.957] (II) LoadModule: "ati" [ 33.957] (WW) Warning, couldn't open module ati [ 33.957] (II) UnloadModule: "ati" [ 33.957] (II) Unloading ati [ 33.957] (EE) Failed to load module "ati" (module does not exist, 0) [ 33.957] (II) LoadModule: "vesa" [ 33.957] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 33.957] (II) Module vesa: vendor="X.Org Foundation" [ 33.957] compiled for 1.14.1, module version = 2.3.2 [ 33.957] Module class: X.Org Video Driver [ 33.957] ABI class: X.Org Video Driver, version 14.1 [ 33.957] (II) LoadModule: "modesetting" [ 33.957] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 33.957] (II) Module modesetting: vendor="X.Org Foundation" [ 33.957] compiled for 1.14.1, module version = 0.8.0 [ 33.957] Module class: X.Org Video Driver [ 33.957] ABI class: X.Org Video Driver, version 14.1 [ 33.957] (II) LoadModule: "fbdev" [ 33.957] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 33.958] (II) Module fbdev: vendor="X.Org Foundation" [ 33.958] compiled for 1.14.1, module version = 0.4.3 [ 33.958] Module class: X.Org Video Driver [ 33.958] ABI class: X.Org Video Driver, version 14.1 [ 33.958] (==) Matched fglrx as autoconfigured driver 0 [ 33.958] (==) Matched ati as autoconfigured driver 1 [ 33.958] (==) Matched fglrx as autoconfigured driver 2 [ 33.958] (==) Matched ati as autoconfigured driver 3 [ 33.958] (==) Matched vesa as autoconfigured driver 4 [ 33.958] (==) Matched modesetting as autoconfigured driver 5 [ 33.958] (==) Matched fbdev as autoconfigured driver 6 [ 33.958] (==) Assigned the driver to the xf86ConfigLayout [ 33.958] (II) LoadModule: "fglrx" [ 33.958] (WW) Warning, couldn't open module fglrx [ 33.958] (II) UnloadModule: "fglrx" [ 33.958] (II) Unloading fglrx [ 33.958] (EE) Failed to load module "fglrx" (module does not exist, 0) [ 33.958] (II) LoadModule: "ati" [ 33.958] (WW) Warning, couldn't open module ati [ 33.958] (II) UnloadModule: "ati" [ 33.958] (II) Unloading ati [ 33.958] (EE) Failed to load module "ati" (module does not exist, 0) [ 33.958] (II) LoadModule: "vesa" [ 33.958] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 33.958] (II) 
Module vesa: vendor="X.Org Foundation" [ 33.958] compiled for 1.14.1, module version = 2.3.2 [ 33.958] Module class: X.Org Video Driver [ 33.958] ABI class: X.Org Video Driver, version 14.1 [ 33.958] (II) UnloadModule: "vesa" [ 33.958] (II) Unloading vesa [ 33.958] (II) Failed to load module "vesa" (already loaded, 0) [ 33.958] (II) LoadModule: "modesetting" [ 33.959] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 33.959] (II) Module modesetting: vendor="X.Org Foundation" [ 33.959] compiled for 1.14.1, module version = 0.8.0 [ 33.959] Module class: X.Org Video Driver [ 33.959] ABI class: X.Org Video Driver, version 14.1 [ 33.959] (II) UnloadModule: "modesetting" [ 33.959] (II) Unloading modesetting [ 33.959] (II) Failed to load module "modesetting" (already loaded, 0) [ 33.959] (II) LoadModule: "fbdev" [ 33.959] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 33.959] (II) Module fbdev: vendor="X.Org Foundation" [ 33.959] compiled for 1.14.1, module version = 0.4.3 [ 33.959] Module class: X.Org Video Driver [ 33.959] ABI class: X.Org Video Driver, version 14.1 [ 33.959] (II) UnloadModule: "fbdev" [ 33.959] (II) Unloading fbdev [ 33.959] (II) Failed to load module "fbdev" (already loaded, 0) [ 33.959] (II) VESA: driver for VESA chipsets: vesa [ 33.959] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 33.959] (II) FBDEV: driver for framebuffer: fbdev [ 33.959] (++) using VT number 7 If I install fglrx, it reads: [ 37.152] Initializing built-in extension DRI2 [ 37.152] (II) LoadModule: "glx" [ 37.152] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/extensions/libglx.so [ 37.152] (II) Module glx: vendor="Advanced Micro Devices, Inc." [ 37.152] compiled for 6.9.0, module version = 1.0.0 [ 37.152] Loading extension GLX [ 37.153] (==) Matched fglrx as autoconfigured driver 0 [ 37.153] (==) Matched ati as autoconfigured driver 1 [ 37.153] (==) Matched vesa as autoconfigured driver 2 [ 37.153] (==) Matched modesetting as autoconfigured driver 3 [ 37.153] (==) Matched fbdev as autoconfigured driver 4 [ 37.153] (==) Assigned the driver to the xf86ConfigLayout [ 37.153] (II) LoadModule: "fglrx" [ 37.153] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/drivers/fglrx_drv.so [ 37.168] (II) Module fglrx: vendor="FireGL - AMD Technologies Inc." [ 37.168] compiled for 1.4.99.906, module version = 13.10.10 [ 37.168] Module class: X.Org Video Driver [ 37.168] (II) Loading sub module "fglrxdrm" [ 37.168] (II) LoadModule: "fglrxdrm" [ 37.168] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/linux/libfglrxdrm.so [ 37.169] (II) Module fglrxdrm: vendor="FireGL - AMD Technologies Inc." 
[ 37.169] compiled for 1.4.99.906, module version = 13.10.10 [ 37.169] (II) LoadModule: "ati" [ 37.169] (WW) Warning, couldn't open module ati [ 37.169] (II) UnloadModule: "ati" [ 37.169] (II) Unloading ati [ 37.169] (EE) Failed to load module "ati" (module does not exist, 0) [ 37.169] (II) LoadModule: "vesa" [ 37.169] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 37.169] (II) Module vesa: vendor="X.Org Foundation" [ 37.169] compiled for 1.14.1, module version = 2.3.2 [ 37.169] Module class: X.Org Video Driver [ 37.169] ABI class: X.Org Video Driver, version 14.1 [ 37.169] (II) LoadModule: "modesetting" [ 37.170] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 37.170] (II) Module modesetting: vendor="X.Org Foundation" [ 37.170] compiled for 1.14.1, module version = 0.8.0 [ 37.170] Module class: X.Org Video Driver [ 37.170] ABI class: X.Org Video Driver, version 14.1 [ 37.170] (II) LoadModule: "fbdev" [ 37.170] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 37.170] (II) Module fbdev: vendor="X.Org Foundation" [ 37.170] compiled for 1.14.1, module version = 0.4.3 [ 37.170] Module class: X.Org Video Driver [ 37.170] ABI class: X.Org Video Driver, version 14.1 [ 37.170] (==) Matched fglrx as autoconfigured driver 0 [ 37.170] (==) Matched ati as autoconfigured driver 1 [ 37.170] (==) Matched vesa as autoconfigured driver 2 [ 37.170] (==) Matched modesetting as autoconfigured driver 3 [ 37.170] (==) Matched fbdev as autoconfigured driver 4 [ 37.170] (==) Assigned the driver to the xf86ConfigLayout [ 37.170] (II) LoadModule: "fglrx" [ 37.170] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/modules/drivers/fglrx_drv.so [ 37.170] (II) Module fglrx: vendor="FireGL - AMD Technologies Inc." [ 37.170] compiled for 1.4.99.906, module version = 13.10.10 [ 37.170] Module class: X.Org Video Driver [ 37.170] (II) LoadModule: "ati" [ 37.170] (WW) Warning, couldn't open module ati [ 37.170] (II) UnloadModule: "ati" [ 37.171] (II) Unloading ati [ 37.171] (EE) Failed to load module "ati" (module does not exist, 0) [ 37.171] (II) LoadModule: "vesa" [ 37.171] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 37.171] (II) Module vesa: vendor="X.Org Foundation" [ 37.171] compiled for 1.14.1, module version = 2.3.2 [ 37.171] Module class: X.Org Video Driver [ 37.171] ABI class: X.Org Video Driver, version 14.1 [ 37.171] (II) UnloadModule: "vesa" [ 37.171] (II) Unloading vesa [ 37.171] (II) Failed to load module "vesa" (already loaded, 0) [ 37.171] (II) LoadModule: "modesetting" [ 37.171] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 37.171] (II) Module modesetting: vendor="X.Org Foundation" [ 37.171] compiled for 1.14.1, module version = 0.8.0 [ 37.171] Module class: X.Org Video Driver [ 37.171] ABI class: X.Org Video Driver, version 14.1 [ 37.171] (II) UnloadModule: "modesetting" [ 37.171] (II) Unloading modesetting [ 37.171] (II) Failed to load module "modesetting" (already loaded, 0) [ 37.171] (II) LoadModule: "fbdev" [ 37.171] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 37.171] (II) Module fbdev: vendor="X.Org Foundation" [ 37.171] compiled for 1.14.1, module version = 0.4.3 [ 37.171] Module class: X.Org Video Driver [ 37.171] ABI class: X.Org Video Driver, version 14.1 [ 37.171] (II) UnloadModule: "fbdev" [ 37.171] (II) Unloading fbdev [ 37.171] (II) Failed to load module "fbdev" (already loaded, 0) [ 37.171] (II) AMD Proprietary Linux Driver Version Identifier:13.10.10 [ 37.171] (II) AMD Proprietary Linux Driver Release 
Identifier: UNSUPPORTED-13.101 [ 37.171] (II) AMD Proprietary Linux Driver Build Date: May 23 2013 15:49:35 [ 37.171] (II) VESA: driver for VESA chipsets: vesa [ 37.171] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 37.171] (II) FBDEV: driver for framebuffer: fbdev [ 37.171] (++) using VT number 7 I did more installing/removing of packages than that. At one point it said: (EE) Failed to load /usr/lib64/xorg/modules/libglamoregl.so: /usr/lib64/xorg/modules/libglamoregl.so: undefined symbol: _glapi_tls_Context There is also an "init: not found" error in ~/.xsession-errors: /usr/sbin/lightdm-session: 5: exec: init: not found Actually, I'm out of ideas. What about you? :)

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste nice and sweet, but they'll rot your teeth if you eat too much of them. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type. So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth: you just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules of thumb around them. A good extension method should: 1) apply to any possible instance of the type it extends; 2) simplify logic and improve readability/maintainability; 3) apply to the most specific type or interface applicable; 4) be isolated in a namespace so that it does not pollute IntelliSense. So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long-lost relative that should have been included in the original class but somehow was missing from the family tree. Take this nifty little int extension. I saw it once in a blog, and at first I thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:

    public static class IntExtensions
    {
        public static int Seconds(this int num)
        {
            return num * 1000;
        }

        public static int Minutes(this int num)
        {
            return num * 60000;
        }
    }

This is so you could do things like:

    ...
    Thread.Sleep(5.Seconds());
    ...
    proxy.Timeout = 1.Minutes();
    ...

Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:

    total += numberOfTodaysOrders.Seconds();

which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints; it applies to ints that represent time that you want to convert to milliseconds. Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense, and b) passing the result of Seconds() to something that does not take milliseconds will be off by a factor of 1000 or worse. Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too. Also, just as a personal preference, for extension methods that aren't simply syntactical shortcuts, I like to put the logic in a static utility class and then have extension methods for syntactical candy. For instance, I think it's imaginable that any object could be converted to XML:

    namespace Shared.Core
    {
        // A collection of XML Utility classes
        public static class XmlUtility
        {
            ...
            // Serialize an object into an xml string
            public static string ToXml(object input)
            {
                var xs = new XmlSerializer(input.GetType());

                // use new UTF8Encoding here, not Encoding.UTF8. The latter includes
                // the BOM which screws up subsequent reads, the former does not.
                using (var memoryStream = new MemoryStream())
                using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                {
                    xs.Serialize(xmlTextWriter, input);
                    return Encoding.UTF8.GetString(memoryStream.ToArray());
                }
            }
            ...
        }
    }

I also wanted to be able to call this from an object like:

    value.ToXml();

But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense; then, in my extensions namespace, I add the syntactical candy:

    namespace Shared.Core.Extensions
    {
        public static class XmlExtensions
        {
            public static string ToXml(this object value)
            {
                return XmlUtility.ToXml(value);
            }
        }
    }

So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: the XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending the object interface with ToXml().

    Read the article

  • Security in Software

    The term security has many meanings depending on the context and perspective in which it is used. Security from the perspective of software/system development is the continuous process of maintaining the confidentiality, integrity, and availability of a system, its sub-systems, and its data. At a very high level this definition can be restated as follows: computer security is a continuous process dealing with confidentiality, integrity, and availability on multiple layers of a system.
Key Aspects of Software Security: Integrity, Confidentiality, Availability.
Integrity within a system is the concept of ensuring that only authorized users can manipulate information, and only through authorized methods and procedures. An example of this can be seen in a simple lead management application. If the business decides that each sales member may update only their own leads while sales managers may update all leads in the system, then an integrity violation occurs whenever a sales member attempts to update someone else's leads, because this breaks the business rule that leads can only be updated by the originating sales member. Confidentiality within a system is the concept of preventing unauthorized access to specific information or tools. In a perfect world, the very existence of confidential information/tools would be unknown to all those who do not have access. When this concept is applied within the context of an application, only the authorized information/tools will be available. If we look at the sales lead management system again, leads can only be updated by the originating sales members, so we can say that each sales lead is confidential between the system and the sales person who entered it. The other sales team members do not need to know about the leads, let alone access them. Availability within a system is the concept of authorized users being able to access the system. A real-world example can again be seen in the lead management system. If that system were hosted on a web server, IP restriction could be put in place to limit access based on the requesting IP address. If, in this example, all of the sales members were accessing the system from the 192.168.1.23 IP address, then removing access from all other IPs would be needed to ensure that improper access is prevented while approved users can still access the system from an authorized location. In essence, if the requesting user is not coming from an authorized IP address, the system will appear unavailable to them. This is one way of controlling where a system is accessed from. Through the years, several design principles have been identified as beneficial when integrating security aspects into a system. These principles, in various combinations, allow a system to achieve the previously defined aspects of security based on generic architectural models.
Security Design Principles: Least Privilege, Fail-Safe Defaults, Economy of Mechanism, Complete Mediation, Open Design, Separation of Privilege, Least Common Mechanism, Psychological Acceptability, Defense in Depth.
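The lead management example above can be made concrete with a small authorization check. The following is a minimal illustrative sketch (not part of the original article) of the integrity rule and a deny-by-default, fail-safe check; the Lead and User types and the role names are hypothetical:

    // Hypothetical types: a lead records the sales member who entered it.
    public class Lead
    {
        public int Id { get; set; }
        public string OwnerUserName { get; set; }   // originating sales member
    }

    public class User
    {
        public string UserName { get; set; }
        public bool IsSalesManager { get; set; }
    }

    public static class LeadAuthorization
    {
        // Returns true only when an explicit rule grants the update (fail-safe default: deny).
        public static bool CanUpdate(User user, Lead lead)
        {
            if (user == null || lead == null)
                return false;                            // deny by default

            if (user.IsSalesManager)
                return true;                             // managers may update all leads

            return lead.OwnerUserName == user.UserName;  // members may update only their own leads
        }
    }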
Least Privilege Design Principle: The Least Privilege design principle requires a minimalistic approach to granting user access rights to specific information and tools. Additionally, access rights should be time-bound, limiting access to the time needed to complete the necessary tasks. Granting access beyond this scope allows for unnecessary access and the potential for data to be updated outside the approved context. Assigning access rights in this way limits system-damaging attacks from users, whether intentional or not, and reduces the number of potential interactions with a resource, which in turn limits data changes and prevents damage from occurring by accident or error.
Fail-Safe Defaults Design Principle: The Fail-Safe Defaults design principle pertains to allowing access to resources based on granted access rather than access exclusion. Resources may be accessed only if explicit access has been granted to a user; by default, users have no access to any resource until access is granted. This approach prevents unauthorized users from gaining access to a resource until access is given.
Economy of Mechanism Design Principle: The Economy of Mechanism design principle requires that systems be designed as simply and as small as possible, because design and implementation errors that result in unauthorized access to resources may not be noticed during normal use.
Complete Mediation Design Principle: The Complete Mediation design principle states that every access to every resource must be validated for authorization.
Open Design Design Principle: The Open Design design principle is the concept that the security of a system and its algorithms should not depend on the secrecy of its design or implementation.
Separation of Privilege Design Principle: The Separation of Privilege design principle requires that approved resource access attempts be granted based on more than a single condition. For example, a user should be validated both for active status and for access to the specific resource.
Least Common Mechanism Design Principle: The Least Common Mechanism design principle declares that mechanisms used to access resources should not be shared.
Psychological Acceptability Design Principle: The Psychological Acceptability design principle states that security mechanisms should not make resources more difficult to access than if the security mechanisms were not present.
Defense in Depth Design Principle: The Defense in Depth design principle is the concept that layering resource access authorization verification in a system reduces the chance of a successful attack. This layered approach requires unauthorized users to circumvent each authorization check in order to gain access to a resource.
When designing a system that must meet a security quality attribute, architects need to consider the scope of the security needs and the minimum required security qualities. Not every system will need to use all of the basic security design principles; most will use one or more in combination, based on the company's and the architect's threshold for system security, because security adds an additional layer to the overall system and can affect performance. That is why a definition of minimum acceptable security is needed when a system is designed: this quality attribute has to be weighed against the other system quality attributes so that the system adheres to all of them according to their priorities.
Resources:
Barnum, Sean & Gegick, Michael. (2005). Least Privilege. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/351-BSI.html
Saltzer, Jerry. (2011). Basic Principles of Information Protection. Retrieved on August 28, 2011 from http://web.mit.edu/Saltzer/www/publications/protection/Basic.html
Barnum, Sean & Gegick, Michael. (2005). Defense in Depth. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/347-BSI.html
Bertino, Elisa. (2005). Design Principles for Security. Retrieved on August 28, 2011 from http://homes.cerias.purdue.edu/~bhargav/cs526/security-9.pdf

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.
2007
Complete Series of Database Coding Standards and Guidelines: SQL SERVER – Database Coding Standards and Guidelines – Introduction; SQL SERVER – Database Coding Standards and Guidelines – Part 1; SQL SERVER – Database Coding Standards and Guidelines – Part 2; SQL SERVER – Database Coding Standards and Guidelines – Complete List Download
Explanation and Example – SELF JOIN: A self join is used when all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. A typical example is Employee information, where the table may have both an Employee's ID number for each record and a field that holds the ID number of that Employee's supervisor or manager. To retrieve the data, the table is joined to itself.
Insert Multiple Records Using One Insert Statement – Use of UNION ALL: This is a very interesting question I received from a new developer: how can I insert multiple values into a table using only one insert? The post shows the common T-SQL way to do this when multiple records are to be inserted into a table.
Function to Display Current Week Date and Day – Weekly Calendar: A straight blog post with a script to find the current week's date and day based on the parameters passed to the function.
2008
In my beginning years I had almost the same confusion that many developers have in their earlier years. Here are two of the interesting questions I attempted to answer back then; even if you are an experienced developer, you may still like to read them: Order Of Column In Index; Order of Conditions in WHERE Clauses
Example of DISTINCT in Aggregate Functions: Have you ever used DISTINCT with an aggregate function? Here is a simple example of how to do it.
Create a Comma Delimited List Using SELECT Clause From Table Column: A straight-to-script example where I explain how to do something easily and quickly.
Compound Assignment Operators: SQL SERVER 2008 introduced the concept of compound assignment operators, which have been available in many other programming languages for quite some time. A compound assignment operator is an operator where a variable is operated upon and assigned in the same statement.
PIVOT and UNPIVOT Table Examples: Here is a very interesting question whose answer can be either YES or NO: "If we PIVOT any table and UNPIVOT that table, do we get our original table?" Read the blog post for the explanation.
2009
What is Interim Table – Simple Definition of Interim Table: The interim table is a table that is generated by joining two tables and is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. More tables may still be joined to the interim table, and more operations may still be applied to it (e.g. ORDER BY, HAVING, etc.). It is also possible that there is no interim table; sometimes the final table is what is generated when the query is run.
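To make the SELF JOIN note in the 2007 section above concrete, here is a minimal illustrative T-SQL sketch (not from the original posts; the Employee table and column names are hypothetical):

    -- Hypothetical Employee table: each row stores the employee's ID and the ID of their manager.
    CREATE TABLE Employee
    (
        EmployeeID INT PRIMARY KEY,
        Name       VARCHAR(100),
        ManagerID  INT NULL          -- references another row in the same table
    );

    -- Self join: the table is joined to itself to pair each employee with their manager.
    SELECT  e.Name AS EmployeeName,
            m.Name AS ManagerName
    FROM    Employee e
    LEFT JOIN Employee m
            ON e.ManagerID = m.EmployeeID;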
2010
Stored Procedure and Transactions: If a stored procedure were inherently transactional, it should roll back the complete transaction when it encounters any error. That does not happen in this case, which shows that a stored procedure does not by itself provide transactional behavior to a batch of T-SQL.
Generate Database Script for SQL Azure: When talking about SQL Azure, the most common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, but not any more. If you have SQL Server 2008 R2 installed, you can follow the guideline in the post to generate a script which is compatible with SQL Azure.
Convert IN to EXISTS – Performance Talk: It is NOT necessarily true that replacing IN with EXISTS gives better performance every time. However, in the case listed in the post it certainly does. You can read about this subject in the associated blog post.
Subquery or Join – Various Options – SQL Server Engine Knows the Best: In every performance tuning exercise I hear the same conversation among developers: some prefer subqueries and some prefer joins. In this two-part blog post, I explain the subject in detail with examples. Part 1 | Part 2
Merge Operations – Insert, Update, Delete in Single Execution: MERGE is a new feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; with the MERGE statement, we can include the logic of such data changes in one statement that updates the data when it is matched and inserts it when it is unmatched.
2011
Puzzle – Statistics are not updated but are Created Once: Here is the quick scenario from my setup: create a table, insert 1,000 records, check the statistics, then insert 10 times more records (10,000), and check the statistics again – they will NOT be updated – WHY?
Question to You – When to use Function and When to use Stored Procedure: Personally, I believe they are different things and cannot be compared; it would be like comparing apples and oranges. Each has its own unique use. However, they can often be used interchangeably, and in real life (i.e., production environments) I have personally seen both being used interchangeably many times. This is the precise reason for asking the question.
2012
In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a quiz series around SEQUENCE; the quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz: Guess the Next Value – Puzzle 1; Guess the Next Value – Puzzle 2; Guess the Next Value – Puzzle 3; Guess the Next Value – Puzzle 4
Simple Example to Configure Resource Governor – Introduction to Resource Governor: Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server. When different workloads running on SQL Server need different resources, or when workloads compete for resources and affect the performance of the whole server, Resource Governor becomes very important.
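As an illustration of the MERGE pattern described in the 2010 notes above, here is a minimal T-SQL sketch (the table and column names are hypothetical, not from the original posts):

    -- Hypothetical tables: TargetCustomer holds current rows, SourceCustomer holds incoming changes.
    MERGE INTO TargetCustomer AS t
    USING SourceCustomer AS s
        ON t.CustomerID = s.CustomerID
    WHEN MATCHED THEN                       -- row exists in both: update it
        UPDATE SET Name  = s.Name,
                   Email = s.Email
    WHEN NOT MATCHED BY TARGET THEN         -- row only in the source: insert it
        INSERT (CustomerID, Name, Email)
        VALUES (s.CustomerID, s.Name, s.Email)
    WHEN NOT MATCHED BY SOURCE THEN         -- row only in the target: delete it
        DELETE;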
Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video: Problems with SELECT * include the following: it retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to sub-optimal execution plans; it uses the clustered index in most cases instead of the optimal index; and it is difficult to debug.
SQL SERVER – Load Generator – Free Tool From CodePlex: The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well.
A Puzzle – Swap Value of Column Without Case Statement: Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the Gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to free up space on /boot? [closed]

    - by Phrogz
    Possible Duplicate: Free up more space on /boot I logged onto my server today to find the message: => /boot is using 98.9% of 91MB When I look at /boot I see that it is indeed very low on space, and has old-kernel files in it: phrogz@planar:~$ df -h /boot Filesystem Size Used Avail Use% Mounted on /dev/sda1 92M 54M 33M 63% /boot phrogz@planar:~$ la /boot total 81880 drwxr-xr-x 4 root root 3072 2011-12-02 06:26 ./ drwxr-xr-x 22 root root 4096 2011-09-29 06:37 ../ -rw-r--r-- 1 root root 646419 2011-03-01 19:02 abi-2.6.32-30-server -rw-r--r-- 1 root root 646419 2011-04-08 17:07 abi-2.6.32-31-server -rw-r--r-- 1 root root 646454 2011-04-20 16:53 abi-2.6.32-32-server -rw-r--r-- 1 root root 646454 2011-07-29 16:07 abi-2.6.32-33-server -rw-r--r-- 1 root root 646710 2011-09-13 18:00 abi-2.6.32-34-server -rw-r--r-- 1 root root 646820 2011-10-11 11:10 abi-2.6.32-35-server -rw-r--r-- 1 root root 110687 2011-03-01 19:02 config-2.6.32-30-server -rw-r--r-- 1 root root 110676 2011-04-08 17:07 config-2.6.32-31-server -rw-r--r-- 1 root root 110687 2011-04-20 16:53 config-2.6.32-32-server -rw-r--r-- 1 root root 110687 2011-07-29 16:07 config-2.6.32-33-server -rw-r--r-- 1 root root 110687 2011-09-13 18:00 config-2.6.32-34-server -rw-r--r-- 1 root root 110687 2011-10-11 11:10 config-2.6.32-35-server drwxr-xr-x 3 root root 6144 2011-12-02 06:26 grub/ -rw-r--r-- 1 root root 8258196 2011-05-18 11:58 initrd.img-2.6.32-30-server -rw-r--r-- 1 root root 8259568 2011-05-23 20:24 initrd.img-2.6.32-31-server -rw-r--r-- 1 root root 8257374 2011-05-30 07:47 initrd.img-2.6.32-32-server -rw-r--r-- 1 root root 8287489 2011-08-10 06:37 initrd.img-2.6.32-33-server -rw-r--r-- 1 root root 8288075 2011-09-29 06:37 initrd.img-2.6.32-34-server drwx------ 2 root root 12288 2011-05-18 11:46 lost+found/ -rw-r--r-- 1 root root 160280 2010-03-23 03:40 memtest86+.bin -rw-r--r-- 1 root root 2179117 2011-03-01 19:02 System.map-2.6.32-30-server -rw-r--r-- 1 root root 2179628 2011-04-08 17:07 System.map-2.6.32-31-server -rw-r--r-- 1 root root 2178240 2011-04-20 16:53 System.map-2.6.32-32-server -rw-r--r-- 1 root root 2178382 2011-07-29 16:07 System.map-2.6.32-33-server -rw-r--r-- 1 root root 2178952 2011-09-13 18:00 System.map-2.6.32-34-server -rw-r--r-- 1 root root 2179333 2011-10-11 11:10 System.map-2.6.32-35-server -rw-r--r-- 1 root root 1336 2011-03-01 19:08 vmcoreinfo-2.6.32-30-server -rw-r--r-- 1 root root 1336 2011-04-08 17:13 vmcoreinfo-2.6.32-31-server -rw-r--r-- 1 root root 1336 2011-04-20 16:54 vmcoreinfo-2.6.32-32-server -rw-r--r-- 1 root root 1336 2011-07-29 16:08 vmcoreinfo-2.6.32-33-server -rw-r--r-- 1 root root 1336 2011-09-13 18:03 vmcoreinfo-2.6.32-34-server -rw-r--r-- 1 root root 1336 2011-10-11 11:11 vmcoreinfo-2.6.32-35-server -rw-r--r-- 1 root root 4111552 2011-03-01 19:02 vmlinuz-2.6.32-30-server -rw-r--r-- 1 root root 4113344 2011-04-08 17:07 vmlinuz-2.6.32-31-server -rw-r--r-- 1 root root 4106528 2011-04-20 16:53 vmlinuz-2.6.32-32-server -rw-r--r-- 1 root root 4107648 2011-07-29 16:07 vmlinuz-2.6.32-33-server -rw-r--r-- 1 root root 4108960 2011-09-13 18:00 vmlinuz-2.6.32-34-server -rw-r--r-- 1 root root 4111040 2011-10-11 11:10 vmlinuz-2.6.32-35-server I was able to find the old kernel packages like so: phrogz@planar:/boot$ dpkg -l | grep linux-image ii linux-image-2.6.32-30-server 2.6.32-30.59 Linux kernel image for version 2.6.32 on x86 ii linux-image-2.6.32-31-server 2.6.32-31.61 Linux kernel image for version 2.6.32 on x86 ii linux-image-2.6.32-32-server 2.6.32-32.62 Linux kernel image for version 2.6.32 on 
x86 ii linux-image-2.6.32-33-server 2.6.32-33.72 Linux kernel image for version 2.6.32 on x86 ii linux-image-2.6.32-34-server 2.6.32-34.77 Linux kernel image for version 2.6.32 on x86 iF linux-image-2.6.32-35-server 2.6.32-35.78 Linux kernel image for version 2.6.32 on x86 iU linux-image-server 2.6.32.36.42 Linux kernel image on Server Equipment. …and I can see that many of them are older than my current image: phrogz@planar:/boot$ uname -a Linux planar 2.6.32-34-server #77-Ubuntu SMP Tue Sep 13 20:54:38 UTC 2011 x86_64 GNU/Linux However, I can't actually remove them due to an unmet dependency: phrogz@planar:/boot$ sudo apt-get --purge remove linux-image-2.6.32-30-server Reading package lists... Done Building dependency tree Reading state information... Done You might want to run `apt-get -f install' to correct these: The following packages have unmet dependencies: linux-image-server: Depends: linux-image-2.6.32-36-server but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). But I can't fix the dependency (presumably due to low disk space): phrogz@planar:/boot$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: liblcms1 linux-headers-2.6.32-32-server libnspr4-0d linux-headers-2.6.32-33-server linux-headers-2.6.32-32 linux-headers-2.6.32-33 linux-headers-2.6.32-34 libcups2 tzdata-java libjpeg62 linux-headers-2.6.32-34-server libavahi-client3 ca-certificates-java libnss3-1d Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-image-2.6.32-36-server Suggested packages: fdutils linux-doc-2.6.32 linux-source-2.6.32 linux-tools The following NEW packages will be installed: linux-image-2.6.32-36-server 0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded. 3 not fully installed or removed. Need to get 0B/31.8MB of archives. After this operation, 128MB of additional disk space will be used. Do you want to continue [Y/n]? (Reading database ... 145200 files and directories currently installed.) Unpacking linux-image-2.6.32-36-server (from .../linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb) ... Done. dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb (--unpack): failed in buffer_write(fd) (10, ret=-1): backend dpkg-deb during `./boot/vmlinuz-2.6.32-36-server': No space left on device dpkg-deb: subprocess paste killed by signal (Broken pipe) Running postrm hook script /usr/sbin/update-grub. Generating grub.cfg ... Found linux image: /boot/vmlinuz-2.6.32-35-server Found linux image: /boot/vmlinuz-2.6.32-34-server Found initrd image: /boot/initrd.img-2.6.32-34-server Found linux image: /boot/vmlinuz-2.6.32-33-server Found initrd image: /boot/initrd.img-2.6.32-33-server Found linux image: /boot/vmlinuz-2.6.32-32-server Found initrd image: /boot/initrd.img-2.6.32-32-server Found linux image: /boot/vmlinuz-2.6.32-31-server Found initrd image: /boot/initrd.img-2.6.32-31-server Found linux image: /boot/vmlinuz-2.6.32-30-server Found initrd image: /boot/initrd.img-2.6.32-30-server Found memtest86+ image: /memtest86+.bin done Errors were encountered while processing: /var/cache/apt/archives/linux-image-2.6.32-36-server_2.6.32-36.79_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) How do I free up space on /boot so that I can fix my dependencies? 
Should I just delete the files manually? And then, should I resize my /boot to be larger, so this doesn't happen again? If so, how? If not, what maintenance should I be running regularly to prevent the accumulation of this cruft?
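One commonly suggested direction, offered here only as an illustrative sketch and not as a verified fix for the exact dependency state shown above, is to free space by purging one of the old, unused kernels directly with dpkg and then letting apt finish its work:

    # Illustrative sketch only: assumes 2.6.32-30 is not the running kernel
    # (confirm with uname -r first) and is safe to remove.
    uname -r                                         # kernel currently in use
    sudo dpkg --purge linux-image-2.6.32-30-server   # dpkg works even when apt is stuck
    sudo apt-get -f install                          # retry the interrupted install
    sudo apt-get autoremove --purge                  # clean out remaining unused kernels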

    Read the article

  • Package management fails in update-manager with gzip problems and compilation errors. U12.04LTS

    - by HarveyP
    Similar to but not the same as Package management system corrupted. Cannot install or remove packages. U12.04LTS (an earlier problem) with package management system. Followed all of L. D. James suggestions in his answer to no avail. This time as well as the gzip error I am also getting compilation errors. The difference may be due to a lack of compilation in my earlier problem so it may be the same error. The packages concerned are enumerated in the output from update-manager below. Also included below that is the output from apt-get -f install apt-get autoremove gives same output. Tried update without SSL updates - 9 to install and got "Unhandled Error in aptdaemon". Output number 3 below. One at a time - output 4 - is for firefox, first in the list of packages. Falls over at libssl1.0.0 despite deselection of it from update ... Tried apt-get install --reinstall dpkg which succeeded, apt-get install --reinstall tar and apt-get install --reinstall gzip both of which failed at libssl1.0.0 as ever. (as suggested by Subv3rsion elsewhere in this forum) Now cannot apt-get update with complete success even after changing server and apt-get clean - output number 5 below ... 1). Output from update-manager The following packages will be upgraded:<> firefox firefox-globalmenu firefox-locale-en libavcodec-extra-53 libavformat53 libavutil-extra-51 libjson0 libpostproc52 libssl1.0.0 libswscale2 openssl 11 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.<br> Need to get 0 B/46.5 MB of archives. After this operation, 1,416 kB of additional disk space will be used.<br> Do you want to continue [Y/n]? y debconf: Perl may be unconfigured (Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at (eval 1) line 3. BEGIN failed--compilation aborted at (eval 1) line 3. ) -- aborting (Reading database ... 160575 files and directories currently installed.) Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.14 (using .../libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb) ... Unpacking replacement libssl1.0.0 ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb (--unpack):<br> subprocess dpkg-deb --fsys-tarfile returned error exit status 2 No apport report written because MaxReports has already been reached Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. 
Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at /usr/share/perl5/Debconf/Db.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Db.pm line 7. Compilation failed in require at /usr/share/debconf/frontend line 6. BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 6. dpkg: error whale cleanang up: subprgcess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) 2). Output from install -f harveyp@harveyp:~$ sudo apt-get -f install [sudo] password for harveyp: Reading package lists... Done Building dependency tree Reading state information... Done 0 to upgrade, 0 to newly install, 0 to remove and 11 not to upgrade. 1 not fully installed or removed.<br> After this operation, 0 B of additional disk space will be used. E: Internal Error, No file name for libssl1.0.0 3). Unhandled error from aptdaemon Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1045, in _simulate trans.unauthenticated = self.__simulate(trans) File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1160, in __simulate unauthenticated = self._get_unauthenticated() File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 347, in _get_unauthenticated for pkg in self._iterate_packages(): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1356, in _iterate_packages for enum, pkg in enumerate(self._cache): File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 216, in __iter__ yield self[pkgname] File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 201, in __getitem__ pkg = self._weakref[key] = Package(self, self._cache[key]) KeyError: 'librqrcode-rubq-doc 4). output from update of firefox installArchives() failed: Error in function: < Setting up libssl1.0.0 (1.0.1-4ubuntu5.14) ... Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. 
BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. 5. output from apt-get update ...snip ... Hit http://ubuntu-archive.mirrors.free.org precise-security/multiverse Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/restricted Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/universe Translation-en Fetched 368 kB in 6s (59.5 kB/s) W: Failed to fetch gzip:/var/lib/apt/lists/partial/ubuntu-archive.mirrors.free.org_ubuntu_dists_precise_universe_source_Sources Hash Sum mismatch E: Some index files failed to download. They have been ignored, or old ones used instead.
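Purely as an illustrative sketch (not a confirmed fix for the libssl1.0.0 unpack failures above), the "Hash Sum mismatch" on the package lists is often cleared by discarding the cached index files and downloaded archives and fetching them again:

    # Illustrative sketch: discard cached package indexes and archives, then re-download.
    sudo rm -rf /var/lib/apt/lists/*      # remove possibly corrupted index files
    sudo apt-get clean                    # empty /var/cache/apt/archives
    sudo apt-get update                   # fetch fresh indexes
    sudo apt-get -f install               # retry fixing the partially installed packages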

    Read the article

  • PowerShell Script to Enumerate SharePoint 2010 or 2013 Permissions and Active Directory Group Membership

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/07/01/powershell-script-to-enumerate-sharepoint-2010-or-2013-permissions-and.aspx   In this post I will present a script to enumerate SharePoint 2010 or 2013 permissions across the entire farm down to the site (SPWeb) level.  As a bonus, this script also recursively expands the membership of any Active Directory (AD) group, including nested groups, which you wouldn’t be able to find through the SharePoint UI.   History     Back in 2009 (over 4 years ago now) I published one of my most read blog posts, about enumerating SharePoint 2007 permissions.  I finally got around to updating that script to remove deprecated APIs, support the SharePoint 2010 commandlets, and fix a few bugs.  There are 2 things that script did that I had to remove due to major architectural or procedural changes in the script: indenting the XML output, and the ability to search for a specific user.    I plan to add back the ability to search for a specific user but wanted to get this version published first.  As for indenting the XML, that could be added but would take some effort.  If there is user demand for it (let me know in the comments or email me using the contact button at the top of the blog) I’ll move it up in priorities.    As a side note you may also notice that I’m not using the Active Directory commandlets.  This was a conscious decision since not all environments have them available.  Instead I’m relying on the older [ADSI] type accelerator and APIs.  It does add a significant amount of code to the script but it is necessary for compatibility.  Hopefully in a few years, if I need to update again, I can remove that legacy code.   Solution    Below is the script to enumerate SharePoint 2010 and 2013 permissions down to site level.  You can also download it from my SkyDrive account or my posting on the TechNet Script Center Repository (http://gallery.technet.microsoft.com/scriptcenter/Enumerate-SharePoint-2010-35976bdb).
###########################################################
#DisplaySPWebApp8.ps1
#
#Author: Brian T. Jackett
#Last Modified Date: 2013-07-01
#
#Traverse the entire web app site by site to display
# hierarchy and users with permissions to site.
########################################################### function Expand-ADGroupMembership {     Param     (         [Parameter(Mandatory=$true,                    Position=0)]         [string]         $ADGroupName,         [Parameter(Position=1)]         [string]         $RoleBinding     )     Process     {         $roleBindingText = ""         if(-not [string]::IsNullOrEmpty($RoleBinding))         {             $roleBindingText = " RoleBindings=`"$roleBindings`""         }         Write-Output "<ADGroup Name=`"$($ADGroupName)`"$roleBindingText>"         $domain = $ADGroupName.substring(0, $ADGroupName.IndexOf("\") + 1)         $groupName = $ADGroupName.Remove(0, $ADGroupName.IndexOf("\") + 1)                                     #BEGIN - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY         #http://www.microsoft.com/technet/scriptcenter/scripts/powershell/search/users/srch106.mspx         #GET AD GROUP FROM DIRECTORY SERVICES SEARCH         $strFilter = "(&(objectCategory=Group)(name="+($groupName)+"))"         $objDomain = New-Object System.DirectoryServices.DirectoryEntry         $objSearcher = New-Object System.DirectoryServices.DirectorySearcher         $objSearcher.SearchRoot = $objDomain         $objSearcher.Filter = $strFilter         # specify properties to be returned         $colProplist = ("name","member","objectclass")         foreach ($i in $colPropList)         {             $catcher = $objSearcher.PropertiesToLoad.Add($i)         }         $colResults = $objSearcher.FindAll()         #END - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY         foreach ($objResult in $colResults)         {             if($objResult.Properties["Member"] -ne $null)             {                 foreach ($member in $objResult.Properties["Member"])                 {                     $indMember = [adsi] "LDAP://$member"                     $fullMemberName = $domain + ($indMember.Name)                                         #if($indMember["objectclass"]                         # if child AD group continue down chain                         if(($indMember | Select-Object -ExpandProperty objectclass) -contains "group")                         {                             Expand-ADGroupMembership -ADGroupName $fullMemberName                         }                         elseif(($indMember | Select-Object -ExpandProperty objectclass) -contains "user")                         {                             Write-Output "<ADUser>$fullMemberName</ADUser>"                         }                 }             }         }                 Write-Output "</ADGroup>"     } } #end Expand-ADGroupMembership # main portion of script if((Get-PSSnapin -Name microsoft.sharepoint.powershell) -eq $null) {     Add-PSSnapin Microsoft.SharePoint.PowerShell } $farm = Get-SPFarm Write-Output "<Farm Guid=`"$($farm.Id)`">" $webApps = Get-SPWebApplication foreach($webApp in $webApps) {     Write-Output "<WebApplication URL=`"$($webApp.URL)`" Name=`"$($webApp.Name)`">"     foreach($site in $webApp.Sites)     {         Write-Output "<SiteCollection URL=`"$($site.URL)`">"                 foreach($web in $site.AllWebs)         {             Write-Output "<Site URL=`"$($web.URL)`">"             # if site inherits permissions from parent then stop processing             if($web.HasUniqueRoleAssignments -eq $false)             {                 Write-Output "<!-- Inherits role assignments from parent -->"             }             # else site has unique permissions             else             {         
                foreach($assignment in $web.RoleAssignments)
                {
                    if(-not [string]::IsNullOrEmpty($assignment.Member.Xml))
                    {
                        $roleBindings = ($assignment.RoleDefinitionBindings | Select-Object -ExpandProperty Name) -join ","

                        # check if assignment is a SharePoint group
                        if($assignment.Member.Xml.StartsWith('<Group'))
                        {
                            Write-Output "<SPGroup Name=`"$($assignment.Member.Name)`" RoleBindings=`"$roleBindings`">"
                            foreach($SPGroupMember in $assignment.Member.Users)
                            {
                                # if SharePoint group member is an AD group
                                if($SPGroupMember.IsDomainGroup)
                                {
                                    Expand-ADGroupMembership -ADGroupName $SPGroupMember.Name
                                }
                                # else SharePoint group member is an AD user
                                else
                                {
                                    # remove claim portion of user login
                                    #Write-Output "<ADUser>$($SPGroupMember.UserLogin.Remove(0,$SPGroupMember.UserLogin.IndexOf("|") + 1))</ADUser>"
                                    Write-Output "<ADUser>$($SPGroupMember.UserLogin)</ADUser>"
                                }
                            }
                            Write-Output "</SPGroup>"
                        }
                        # else an individually listed AD group or user
                        else
                        {
                            if($assignment.Member.IsDomainGroup)
                            {
                                Expand-ADGroupMembership -ADGroupName $assignment.Member.Name -RoleBinding $roleBindings
                            }
                            else
                            {
                                # remove claim portion of user login
                                #Write-Output "<ADUser>$($assignment.Member.UserLogin.Remove(0,$assignment.Member.UserLogin.IndexOf("|") + 1))</ADUser>"
                                Write-Output "<ADUser RoleBindings=`"$roleBindings`">$($assignment.Member.UserLogin)</ADUser>"
                            }
                        }
                    }
                }
            }
            Write-Output "</Site>"
            $web.Dispose()
        }
        Write-Output "</SiteCollection>"
        $site.Dispose()
    }
    Write-Output "</WebApplication>"
}
Write-Output "</Farm>"

The output from the script can be redirected to an XML file, which you can then explore using the [XML] type accelerator (a short usage sketch appears at the end of this post).  This lets you explore the XML structure however you see fit.  See the screenshot below for an example.

If you do view the XML output through a text editor (Notepad++ for me), notice the format.  Below we see a SharePoint site that has a SharePoint group Demo Members with Edit permissions assigned.  Demo Members has an AD group corp\developers as a member.  corp\developers has a child AD group called corp\DevelopersSub with 1 AD user in that sub group.  As you can see, the script recursively expands the AD hierarchy.

Conclusion

It took me 4 years to finally update this script, but I'm happy to get this published.
I was able to fix a number of errors and smooth out some rough edges.  I plan to develop this into a more full-fledged tool over the next year with more features and flexibility (copying permissions, searching for an individual user or group, optionally enumerating lists and items, etc.).  If you have any feedback, feature requests, or issues running it, please let me know.  Enjoy the script!

-Frog Out
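As referenced above, here is a minimal usage sketch for exploring the script output with the [xml] type accelerator.  The script and output file names are hypothetical placeholders.

    # capture the script output to a file (run from the SharePoint Management Shell)
    .\Get-FarmPermissions.ps1 > farmPermissions.xml

    # load the file with the [xml] type accelerator
    [xml]$farm = Get-Content .\farmPermissions.xml

    # e.g. list every AD user that appears anywhere in the farm, de-duplicated
    $farm.SelectNodes("//ADUser") | ForEach-Object { $_.InnerText } | Sort-Object -Unique

    # or walk the hierarchy with dotted notation
    $farm.Farm.WebApplication | Select-Object -ExpandProperty SiteCollection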

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes.

Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal
Keste, Oracle Platinum Partner

Customer Overview

Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs' mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others.

Customer Business Needs

Already the leading provider of mobility solutions for large trucking fleets, Omnitracs chose to target smaller trucking fleets as new customers. However, their existing high-touch customer support method would not be a cost-effective or scalable way to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use.

The systems that Omnitracs sells to these trucking customers require professional installation, and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details.

Omnitracs wanted a high-impact user interface to significantly improve the customer experience, with the ability to integrate with EBS, provisioning systems, and CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other portals. Omnitracs' stated goal was to deliver an "eBay-like" or "Amazon-like" experience for all of their customers so that they could reach a much broader market beyond their large company customer base.

Solution Overview

In order to manage the increased complexity, meet the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self-service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of its internal and external customers.

The solution was a customer service portal that provided self-service capabilities for large and small customers alike: activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies, as well as deactivation, all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts.
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution.

The portal application for transactions was entirely built using ADF 11g R2. Omnitracs' business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back end, any EBS downtime would negate this availability. Omnitracs therefore devised a decoupling strategy on the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite downtime. The customer has no knowledge of whether EBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF.

Customer Benefits

The Customer Portal not only provided the scalability to grow the business but also provided seamless integration with other disparate applications. Some of the key benefits are:

Improved Customer Experience: With a modern look and feel and a portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages.

Enabled new product launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers, giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes.

Seamless Integrations Improve Customer Support: ADF 11g R2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers, routing them from a marketing-focused application to a customer-oriented portal. Internally, it also allowed sales representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included:

Unity Core
Salesforce.com
Merchant e-Solution for credit card
Custom Omnitracs applications like CUPS and AUTO
Security utilizing OID and OVD
Back-end integration with EBS (Data Guard) and iQ Database

Business Impact

Significant business impacts were realized through the launch of the customer portal. It not only allows the business to push into underserved segments, but also reduces the time it needs to spend on customer support, allowing the business to focus more on sales and identifying the market for new products. Some of the immediate benefits are:

The entire onboarding process is now completely automated and completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster!

With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services.

90%+ improvement of the customer onboarding and management process by utilizing single sign-on integration using the OID/OAM solution, performance improvements and new self-service functionality.

Unified login for all customers, partners and internal users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience.

Significantly improved customer experience with a better look and feel and more user-experience-focused portal screens.
Helped sales of the new product by providing an easy way of ordering and activating the product.

Data Guard helped increase availability of the portal to 99%+ and make it independent of EBS downtime. This gave customers the feel of high availability of the portal application.

Some of the anticipated longer-term benefits are:

A platform that can be leveraged for any new product introduction and enable all product teams to reach new customers and new markets
Easy integration with content management to allow business owners more control of the product catalog
Overall reduced TCO with standardization on the Oracle platform
Managed IT support cost savings through optimization of the technology skills needed to support and modify this solution

    Read the article

  • No GLX on Intel card with multiseat with additional nVidia card

    - by MeanEYE
    I have multiseat configured and my Xorg has 2 server layouts. One is for nVidia card and other is for Intel card. They both work, but display server assigned to Intel card doesn't have hardware acceleration since DRI and GLX module being used is from nVidia driver. So my question is, can I configure layouts somehow to use right DRI and GLX with each card? My Xorg.conf: Section "ServerLayout" Identifier "Default" Screen 0 "Screen0" 0 0 Option "Xinerama" "0" EndSection Section "ServerLayout" Identifier "TV" Screen 0 "Screen1" 0 0 Option "Xinerama" "0" EndSection Section "Monitor" # HorizSync source: edid, VertRefresh source: edid Identifier "Monitor0" VendorName "Unknown" ModelName "DELL E198WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GT 610" EndSection Section "Device" Identifier "Device1" Driver "intel" BusID "PCI:0:2:0" Option "AccelMethod" "uxa" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-1" Option "metamodes" "DFP-0: nvidia-auto-select +1440+0, DFP-1: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Log file for Intel: [ 18.239] X.Org X Server 1.13.0 Release Date: 2012-09-05 [ 18.239] X Protocol Version 11, Revision 0 [ 18.239] Build Operating System: Linux 2.6.24-32-xen x86_64 Ubuntu [ 18.239] Current Operating System: Linux bytewiper 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 [ 18.239] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.5.0-18-generic root=UUID=fc0616fd-f212-4846-9241-ba4a492f0513 ro quiet splash [ 18.239] Build Date: 20 September 2012 11:55:20AM [ 18.239] xorg-server 2:1.13.0+git20120920.70e57668-0ubuntu0ricotz (For technical support please see http://www.ubuntu.com/support) [ 18.239] Current version of pixman: 0.26.0 [ 18.239] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 18.239] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 18.239] (==) Log file: "/var/log/Xorg.1.log", Time: Wed Nov 21 18:32:14 2012 [ 18.239] (==) Using config file: "/etc/X11/xorg.conf" [ 18.239] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 18.239] (++) ServerLayout "TV" [ 18.239] (**) |-->Screen "Screen1" (0) [ 18.239] (**) | |-->Monitor "Monitor1" [ 18.240] (**) | |-->Device "Device1" [ 18.240] (**) Option "Xinerama" "0" [ 18.240] (==) Automatically adding devices [ 18.240] (==) Automatically enabling devices [ 18.240] (==) Automatically adding GPU devices [ 18.240] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 18.240] Entry deleted from font path. 
[ 18.240] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 18.240] Entry deleted from font path. [ 18.240] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 18.240] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 18.240] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. [ 18.240] (II) Loader magic: 0x7f6917944c40 [ 18.240] (II) Module ABI versions: [ 18.240] X.Org ANSI C Emulation: 0.4 [ 18.240] X.Org Video Driver: 13.0 [ 18.240] X.Org XInput driver : 18.0 [ 18.240] X.Org Server Extension : 7.0 [ 18.240] (II) config/udev: Adding drm device (/dev/dri/card0) [ 18.241] (--) PCI: (0:0:2:0) 8086:0152:1043:84ca rev 9, Mem @ 0xf7400000/4194304, 0xd0000000/268435456, I/O @ 0x0000f000/64 [ 18.241] (--) PCI:*(0:1:0:0) 10de:104a:1458:3546 rev 161, Mem @ 0xf6000000/16777216, 0xe0000000/134217728, 0xe8000000/33554432, I/O @ 0x0000e000/128, BIOS @ 0x????????/524288 [ 18.241] (II) Open ACPI successful (/var/run/acpid.socket) [ 18.241] Initializing built-in extension Generic Event Extension [ 18.241] Initializing built-in extension SHAPE [ 18.241] Initializing built-in extension MIT-SHM [ 18.241] Initializing built-in extension XInputExtension [ 18.241] Initializing built-in extension XTEST [ 18.241] Initializing built-in extension BIG-REQUESTS [ 18.241] Initializing built-in extension SYNC [ 18.241] Initializing built-in extension XKEYBOARD [ 18.241] Initializing built-in extension XC-MISC [ 18.241] Initializing built-in extension SECURITY [ 18.241] Initializing built-in extension XINERAMA [ 18.241] Initializing built-in extension XFIXES [ 18.241] Initializing built-in extension RENDER [ 18.241] Initializing built-in extension RANDR [ 18.241] Initializing built-in extension COMPOSITE [ 18.241] Initializing built-in extension DAMAGE [ 18.241] Initializing built-in extension MIT-SCREEN-SAVER [ 18.241] Initializing built-in extension DOUBLE-BUFFER [ 18.241] Initializing built-in extension RECORD [ 18.241] Initializing built-in extension DPMS [ 18.241] Initializing built-in extension X-Resource [ 18.241] Initializing built-in extension XVideo [ 18.241] Initializing built-in extension XVideo-MotionCompensation [ 18.241] Initializing built-in extension XFree86-VidModeExtension [ 18.241] Initializing built-in extension XFree86-DGA [ 18.241] Initializing built-in extension XFree86-DRI [ 18.241] Initializing built-in extension DRI2 [ 18.241] (II) LoadModule: "glx" [ 18.241] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so [ 18.247] (II) Module glx: vendor="NVIDIA Corporation" [ 18.247] compiled for 4.0.2, module version = 1.0.0 [ 18.247] Module class: X.Org Server Extension [ 18.247] (II) NVIDIA GLX Module 310.19 Thu Nov 8 01:12:43 PST 2012 [ 18.247] Loading extension GLX [ 18.247] (II) LoadModule: "intel" [ 18.248] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 18.248] (II) Module intel: vendor="X.Org Foundation" [ 18.248] compiled for 1.13.0, module version = 2.20.13 [ 18.248] Module class: X.Org Video Driver [ 18.248] ABI class: X.Org Video Driver, version 13.0 [ 18.248] (II) intel: Driver for Intel Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 
945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, B43, Clarkdale, Arrandale, Sandybridge Desktop (GT1), Sandybridge Desktop (GT2), Sandybridge Desktop (GT2+), Sandybridge Mobile (GT1), Sandybridge Mobile (GT2), Sandybridge Mobile (GT2+), Sandybridge Server, Ivybridge Mobile (GT1), Ivybridge Mobile (GT2), Ivybridge Desktop (GT1), Ivybridge Desktop (GT2), Ivybridge Server, Ivybridge Server (GT2), Haswell Desktop (GT1), Haswell Desktop (GT2), Haswell Desktop (GT2+), Haswell Mobile (GT1), Haswell Mobile (GT2), Haswell Mobile (GT2+), Haswell Server (GT1), Haswell Server (GT2), Haswell Server (GT2+), Haswell SDV Desktop (GT1), Haswell SDV Desktop (GT2), Haswell SDV Desktop (GT2+), Haswell SDV Mobile (GT1), Haswell SDV Mobile (GT2), Haswell SDV Mobile (GT2+), Haswell SDV Server (GT1), Haswell SDV Server (GT2), Haswell SDV Server (GT2+), Haswell ULT Desktop (GT1), Haswell ULT Desktop (GT2), Haswell ULT Desktop (GT2+), Haswell ULT Mobile (GT1), Haswell ULT Mobile (GT2), Haswell ULT Mobile (GT2+), Haswell ULT Server (GT1), Haswell ULT Server (GT2), Haswell ULT Server (GT2+), Haswell CRW Desktop (GT1), Haswell CRW Desktop (GT2), Haswell CRW Desktop (GT2+), Haswell CRW Mobile (GT1), Haswell CRW Mobile (GT2), Haswell CRW Mobile (GT2+), Haswell CRW Server (GT1), Haswell CRW Server (GT2), Haswell CRW Server (GT2+), ValleyView PO board [ 18.248] (++) using VT number 8 [ 18.593] (II) intel(0): using device path '/dev/dri/card0' [ 18.593] (**) intel(0): Depth 24, (--) framebuffer bpp 32 [ 18.593] (==) intel(0): RGB weight 888 [ 18.593] (==) intel(0): Default visual is TrueColor [ 18.593] (**) intel(0): Option "AccelMethod" "uxa" [ 18.593] (--) intel(0): Integrated Graphics Chipset: Intel(R) Ivybridge Desktop (GT1) [ 18.593] (**) intel(0): Relaxed fencing enabled [ 18.593] (**) intel(0): Wait on SwapBuffers? enabled [ 18.593] (**) intel(0): Triple buffering? enabled [ 18.593] (**) intel(0): Framebuffer tiled [ 18.593] (**) intel(0): Pixmaps tiled [ 18.593] (**) intel(0): 3D buffers tiled [ 18.593] (**) intel(0): SwapBuffers wait enabled ... [ 20.312] (II) Module fb: vendor="X.Org Foundation" [ 20.312] compiled for 1.13.0, module version = 1.0.0 [ 20.312] ABI class: X.Org ANSI C Emulation, version 0.4 [ 20.312] (II) Loading sub module "dri2" [ 20.312] (II) LoadModule: "dri2" [ 20.312] (II) Module "dri2" already built-in [ 20.312] (==) Depth 24 pixmap format is 32 bpp [ 20.312] (II) intel(0): [DRI2] Setup complete [ 20.312] (II) intel(0): [DRI2] DRI driver: i965 [ 20.312] (II) intel(0): Allocated new frame buffer 1920x1080 stride 7680, tiled [ 20.312] (II) UXA(0): Driver registered support for the following operations: [ 20.312] (II) solid [ 20.312] (II) copy [ 20.312] (II) composite (RENDER acceleration) [ 20.312] (II) put_image [ 20.312] (II) get_image [ 20.312] (==) intel(0): Backing store disabled [ 20.312] (==) intel(0): Silken mouse enabled [ 20.312] (II) intel(0): Initializing HW Cursor [ 20.312] (II) intel(0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 20.313] (**) intel(0): DPMS enabled [ 20.313] (==) intel(0): Intel XvMC decoder enabled [ 20.313] (II) intel(0): Set up textured video [ 20.313] (II) intel(0): [XvMC] xvmc_vld driver initialized. 
[ 20.313] (II) intel(0): direct rendering: DRI2 Enabled [ 20.313] (==) intel(0): hotplug detection: "enabled" [ 20.332] (--) RandR disabled [ 20.335] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found) [ 20.335] (II) intel(0): Setting screen physical size to 508 x 285 [ 20.338] (II) XKB: reuse xkmfile /var/lib/xkb/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm [ 20.340] (II) config/udev: Adding input device Power Button (/dev/input/event1) [ 20.340] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 20.340] (II) LoadModule: "evdev" [ 20.340] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
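One approach that is sometimes suggested for this kind of mixed-vendor multiseat setup (a sketch only, not verified against this exact configuration; the paths are taken from the log above) is to start the Intel seat from its own configuration file whose ModulePath lists only the stock module directory, so that the X.Org libglx is loaded for that server instead of the NVIDIA libglx.so found in extra-modules:

    # hypothetical /etc/X11/xorg.intel.conf, used only by the Intel seat,
    # e.g. started as:  X :1 -config xorg.intel.conf -layout TV
    # (copy the "TV" ServerLayout, Device1, Screen1 and Monitor1 sections
    #  from the existing xorg.conf into this file as well)
    Section "Files"
        # list only the stock module path and omit
        # /usr/lib/x86_64-linux-gnu/xorg/extra-modules so the NVIDIA
        # libglx.so is not picked up by this server instance
        ModulePath "/usr/lib/xorg/modules"
    EndSection

The nVidia seat would keep using the original xorg.conf. Note that the client-side libGL alternative (Mesa vs. NVIDIA) may also need attention before direct rendering works on the Intel seat.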

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal.  These new capabilities include: Service Bus Management and Monitoring Support for Managing Co-administrators Import/Export support for SQL Databases Virtual Machine Experience Enhancements Improved Cloud Service Status Notifications Media Services Monitoring Support Storage Container Creation and Access Control Support All of these improvements are now live in production and available to start using immediately.  Below are more details on them: Service Bus Management and Monitoring The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the “Service Bus” tab in the Windows Azure portal and then simply select the CREATE command: Doing so will bring up a new “Create a Namespace” dialog that allows you to name and create a new Service Bus Namespace: Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials: It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer.  Simply click the NEW command and navigate to the “App Services” category to create a new Service Bus entity: Once you provision a new Queue or Topic it can be managed in the portal.  Clicking on a namespace will display all queues and topics within it: Clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, below is a view of an “orders” queue – note how we now surface both the incoming and outgoing message flow rate, as well as the total queue length and queue size: To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor. Support for Managing Co-Administrators You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that service administrator have - except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription: To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to: You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then select or deselect the subscriptions to which they belong. 
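As a quick aside to the Service Bus section above, the connection string obtained via the ACCESS KEY command can be used directly from code. Below is a minimal C# sketch, assuming the 2012-era Microsoft.ServiceBus client library and the "orders" queue shown earlier; the namespace and key values are placeholders:

    using Microsoft.ServiceBus.Messaging;

    class OrdersQueueSample
    {
        static void Main()
        {
            // placeholder connection string copied from the ACCESS KEY dialog
            const string connectionString =
                "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=<key>";

            // send a test message to the "orders" queue
            var client = QueueClient.CreateFromConnectionString(connectionString, "orders");
            client.Send(new BrokeredMessage("New order received"));

            // receive it back and mark it complete
            var message = client.Receive();
            if (message != null)
            {
                System.Console.WriteLine(message.GetBody<string>());
                message.Complete();
            }
        }
    }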
Import/Export Support for SQL Databases The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage.  Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012.  Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that when pressed will prompt you to save your database to a Windows Azure storage container: The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage: You can also now import and create a new SQL Database by using the NEW command.  This will prompt you to select the storage container and file to import the database from: The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each: Enhancements to the Virtual Machine Experience One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today’s release the VHDs would continue to be in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command: When you click the DELETE DISK button you have the option to delete the disk + associated .VHD file (completely clearing it from storage).  Alternatively you can delete the disk but still retain a .VHD copy of it in storage. Improved Cloud Service Status Notifications The Windows Azure portal now exposes more information of the health status of role instances.  If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes): Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances.  The portal has been updated to provide more specific status information within this detailed view – giving you better visibility into the health of your app: Monitoring Support for Media Services Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for last 6 hours, 24 hours or 7 days. Storage Container Creation and Access Control Support You can now create Windows Azure Storage storage containers from within the Windows Azure Portal.  
After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command: This will display a dialog that lets you name the new container and control access to it: You can also update the access setting as well as container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command: This will then bring up the edit container dialog that allows you to change and save its settings: In addition to creating and editing containers, you can click on them within the portal to drill-in and view blobs within them.  Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. We’ll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
One of the key requirements while masking data in large databases or multi-database environments is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time, the masked output should not be predictable. Deterministic masking also eliminates the enormous amount of time otherwise spent identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting.

Readers of this post should have basic knowledge of Oracle Enterprise Manager 12c, Application Data Modeling and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation.

Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs:

1. Substitute
2. SQL Expression
3. Encrypt
4. User Defined Function

SUBSTITUTE

The Substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider DEPARTMENT_ID in the EMPLOYEES table being replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of DEPARTMENT_ID, say '101', will be replaced with '502', provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screenshot shows the usage of the Substitute masking format within a masking definition:

Note that the uniqueness of the masked value depends on the number of unique values in the substitution column, i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values, without which only consistency is maintained but not uniqueness.

SQL EXPRESSION

SQL Expression replaces an existing value with the output of a specified SQL expression. For example, suppose that while masking an EMPLOYEES table the EMAIL_ID of an employee has to be in the format FIRST_NAME.LAST_NAME@COMPANY.COM, where FIRST_NAME and LAST_NAME are actual column names of the EMPLOYEES table; the corresponding SQL expression will look like %FIRST_NAME%||'.'||%LAST_NAME%||'@COMPANY.COM'. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, then the corresponding EMAIL_ID will be replaced accordingly by the masking scripts.

One of the interesting aspects of SQL expressions is that you can use sub-expressions, which means you can write a nested SQL statement and use it as a SQL expression to address complex masking business use cases.

A SQL expression can also be used to consistently replace a value with a hashed value using Oracle's PL/SQL function ORA_HASH. The following SQL expression will help in the previous example by replacing the DEPARTMENT_IDs with a hashed number:

ORA_HASH(%DEPARTMENT_ID%, 1000)

The following screenshot shows the usage of the SQL Expression masking format within the masking definition:

ORA_HASH takes three arguments:

1. Expression, which can be of any data type except LONG, LOB and user-defined types [nested table types are allowed]. In the above example I used the original value as the expression.

2. Number of hash buckets, which can be any number between 0 and 4294967295. The default value is 4294967295.
You can also correlate the number of hash buckets to a range of numbers. In the above example the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000.

3. Seed, which can be any number and which determines the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the above SQL expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed then the function will look like:

ORA_HASH(%DEPARTMENT_ID%, 1000, 1234)

The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, considering the birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 2^32-1 unique values.

ENCRYPT

The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screenshot shows the usage of the Encrypt masking format within the masking definition:

Regular expressions may look complex to first-time users, but you will soon realize that it's a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library and My Oracle Support on writing regular expressions; of all of them, the following My Oracle Support document helped me get started with regular expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1)

USER DEFINED FUNCTION [UDF]

A User Defined Function or UDF provides flexibility for users to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is:

Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2;

Where
• rowid is the row identifier of the column that needs to be masked
• column_name is the name of the column that needs to be masked
• original_value is the column value that needs to be masked

You can achieve deterministic masking by using Oracle's built-in hash functions like ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5 and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic, unique hexadecimal values for a given string input:

CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2)
RETURN VARCHAR2
DETERMINISTIC
PARALLEL_ENABLE
IS
  stext varchar2(26);
  no_of_characters number(2);
BEGIN
  no_of_characters := 6;
  -- hash the original value and keep the first no_of_characters hex characters
  stext := substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val), 1)), 0, no_of_characters);
  RETURN stext;
END;

The uniqueness depends on the input, the length of the output string and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function], which is a 128-bit algorithm that can produce 2^128 distinct hash values; however, the output here is truncated to 6 hexadecimal characters, so only 16^6 unique values can be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.
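A quick way to sanity-check that a UDF such as RD_DUX above is deterministic is simply to run it twice over the same rows and compare the results; the table and column names below are hypothetical:

    -- the same input always yields the same masked value on repeated runs
    SELECT email,
           RD_DUX(NULL, 'EMAIL', email) AS masked_email
      FROM employees
     WHERE ROWNUM <= 5;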
Another example is to consistently replace characters or numbers while preserving the length and special characters, as shown below:

CREATE OR REPLACE FUNCTION RD_DUS (rid varchar2, column_name varchar2, orig_val VARCHAR2)
RETURN VARCHAR2
DETERMINISTIC
PARALLEL_ENABLE
IS
  stext varchar2(26);
BEGIN
  -- seed the random generator with the original value so the mapping is repeatable
  DBMS_RANDOM.SEED(orig_val);
  -- remap upper-case and lower-case letters to consistently generated random letters
  stext := TRANSLATE(orig_val, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', DBMS_RANDOM.STRING('U', 26));
  stext := TRANSLATE(stext, 'abcdefghijklmnopqrstuvwxyz', DBMS_RANDOM.STRING('L', 26));
  -- remap digits using the string form of a seeded random value
  stext := TRANSLATE(stext, '0123456789', to_char(DBMS_RANDOM.VALUE(1, 9)));
  stext := REPLACE(stext, '.', '0');
  RETURN stext;
END;

The following screenshot shows the usage of a UDF within a masking definition:

To summarize, Oracle Data Masking and Subsetting helps you to consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking

    Read the article

  • Azure Diagnostics: The Bad, The Ugly, and a Better Way

    - by jasont
    If you’re a .Net web developer today, no doubt you’ve enjoyed watching Windows Azure grow up over the past couple of years. The platform has scaled, stabilized (mostly), and added on a slew of great (and sometimes overdue) features. What was once just an endpoint to host a solution, developers today have tremendous flexibility and options in the platform. Organizations are building new solutions and offerings on the platform, and others have, or are in the process of, migrating existing applications out of their own data centers into the Azure cloud. Whether new application development or migrating legacy, every development shop and IT organization needs to monitor their applications in the cloud, the same as they do on premises. Azure Diagnostics has some capabilities, but what I constantly hear from users is that it’s either (a) not enough, or (b) too cumbersome to set up. Today, Stackify is happy to announce that we fully support Azure deployments, just the same as your on-premises deployments. Let’s take a look below and compare and contrast the options. Azure Diagnostics Let’s crack open the Windows Azure documentation on Azure Diagnostics and see just how easy it is to use. The high level steps are:   Step 1: Import the Diagnostics Oh, I’ve already deployed my app without the diagnostics module. Guess I can’t do anything until I do this and re-deploy. Step 2: Configure the Diagnostics (and multiple sub-steps) Do I want it all? Or just pieces of it? Whoops, forgot to include a specific performance counter, I guess I’ll have to deploy again. Wait a minute… I have to specifically code these performance counters into my role’s OnStart() method, compile and deploy again? And query and consume it myself? Step 3: (Optional) Permanently store diagnostic data Lucky for me, Azure storage has gotten pretty cheap. But how often should I move the data into storage? I want to see real-time data, so I guess that’s out now as well. Step 4: (Optional) View stored diagnostic data Optional? Of course I want to see it. Conveniently, Microsoft recommends 3 tools to do this with. Un-conveniently, none of these are web based and they all just give you access to raw data, and very little charting or real-time intelligence. Just….. data. Nevermind that one product seems to have gotten stale since a recent acquisition, and doesn’t even have screenshots!   So, let’s summarize: lots of diagnostics data is available, but think realistically. Think Dev Ops. What happens when you are in the middle of a major production performance issue and you don’t have the diagnostics you need? You are redeploying an application (and thankfully you have a great branching strategy, so you feel perfectly safe just willy-nilly launching code into prod, don’t you?) to get data, then shipping it to storage, and then digging through that data to find a needle in a haystack. Would you like to be able to troubleshoot a performance issue in the middle of the night, or on a weekend, from your iPad or home computer’s web browser? Forget it: the best you get is this spark line in the Azure portal. If it’s real pointy, you probably have an issue; but since there is no alert based on a threshold your customers have likely already let you know. And high CPU, Memory, I/O, or Network doesn’t tell you anything about where the problem is. The Better Way – Stackify Stackify supports application and server monitoring in real time, all through a great web interface. 
All of the things that Azure Diagnostics provides, Stackify provides for your on-premises deployments, and you don’t need to know ahead of time that you’ll need it. It’s always there, it’s always on. Azure deployments are essentially no different than on-premises. It’s a Windows Server (or Linux) in the cloud. It’s behind a different firewall than your corporate servers. That’s it. Stackify can provide the same powerful tools to your Azure deployments in two simple steps. Step 1 Add a startup task to your web or worker role and deploy. If you can’t deploy and need it right now, no worries! Remote Desktop to the Azure instance and you can execute a Powershell script to download / install Stackify.   Step 2 Log in to your account at www.stackify.com and begin monitoring as much as you want, as often as you want and see the results instantly. WMI? It’s there Event Viewer? You’ve got it. File System Access? Yes, please! Would love to make sure my web.config is correct.   IIS / App Pool Info? Yep. You can even restart it. Running Services? All of them. Start and Stop them to your heart’s content. SQL Database access? You bet’cha. Alerts and Notification? Of course! You should know before your customers let you know. … and so much more.   Conclusion Microsoft has shown, consistently, that they love developers, developers, developers. What every developer needs to realize from this is that they’ve given you a canvas, which is exactly what Azure is. It’s great infrastructure that is readily available, easy to manage, and fairly cost effective. However, the tooling is your responsibility. What you get, at best, is bare bones. App and server diagnostics should be available when you need them. While we, as developers, try to plan for and think of everything ahead of time, there will come times where we need to get data that just isn’t available. And having to go through a lot of cumbersome steps to get that data, and then have to find a friendlier way to consume it…. well, that just doesn’t make a lot of sense to me. I’d rather spend my time writing and developing features and completing bug fixes for my applications, than to be writing code to monitor and diagnose.
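For reference, the kind of OnStart() wiring described in Step 2 above looks roughly like the following. This is only a sketch against the 2012-era Microsoft.WindowsAzure.Diagnostics API; the counter and transfer settings are illustrative:

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // every performance counter has to be known and added here, ahead of time
            config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });

            // data only becomes visible after it ships to storage on this schedule
            config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
            return base.OnStart();
        }
    }

Forget a counter and you are back to redeploying, which is exactly the pain point described above.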

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today’s guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions and actively contributes to the enterprise software business community and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We’re happy to have John join us today! Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities.  The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, enabling a strong foundation for future growth and an expedient return on your investment in the platform.  If you are able to embrace even only a few of these practices, you will materially improve your deployment capability with WebCenter. 1. Segment Duties Around 3Cs - Content, Collaboration and Contextual Data "Agility" is one of the most common business benefits touted by modern web platforms.  It sounds good - who doesn't want to be Agile, right?  How exactly IT organizations go about supplying agility to their business counterparts often lacks definition - hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, which is augmented by as much self-service as possible to develop and manage the solution directly. All done in the absence of direct IT involvement. With Oracle WebCenter's depth in the areas of content management, pallet of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data.  All three of which can have their foundations developed by IT, then provisioned to the business on a per role basis. Content – Oracle WebCenter benefits from an extremely mature content repository.  Work flow, audit, notification, office integration and conversion capabilities for documents (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites.  When deploying WebCenter portal take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process.  This frees IT to work on more mission critical challenges and allows the business to respond in short order to emerging market needs. Collaboration – Native collaborative services and WebCenter spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking.  The ability to deploy the services is granular and on the basis of roles scoped to given areas of the system - much like the first C “content”.  This enables business analysts to design the roles required and IT to provision with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site. 
Bottom line - business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles. Contextual Data – Collaborative capabilities are most powerful when included within the context of business data.  The ability to supply business users with decision shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter.  Imagine a discussion about new store selection for a retail chain that re-purposes existing information from business intelligence services about various potential locations and or custom backend systems - presenting it directly in the context of the discussion.  If there are some data sources that are preexisting in your enterprise take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities. 2. Think Generically, Execute Specifically Constructs.  Anyone who has spent much time around me knows that I am obsessed with this word.  Why? Because Constructs offer immense power - more than APIs, Web Services or other technical capability. Constructs offer organizations the ability to leverage a platform's native characteristics to offer substantial business functionality - without writing code.  This concept becomes more powerful with the additional understanding of the concepts from the platform that an organization learns over time.  Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time to get a subscription-based site out the door and into the hands of the end consumer. Imagine a site that allows members to subscribe to specific disciplines to access information and application data around that various discipline.  A space is a collection of secured pages within Oracle WebCenter.  Spaces are not only secured, but also default content stored within it to be scoped automatically to that space. Taking this a step further, Oracle WebCenter’s Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their space affiliations.  In order to have a portal that would allow users to "subscribe" to information around various disciplines - spaces could be used out of the box to achieve this capability and without using any APIs or low level technical work to achieve this. 3. Make Governance Work for You Imagine driving down the street without the painted lines on the road.  The rules of the road are so ingrained in our minds, we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed upon direction of flow. Additionally and more importantly, it allows people to act autonomously - going where they please at any given time. The return on the investment for mobility is high enough for people to buy into globally agreed up governance processes. In Oracle WebCenter we can use similar enablers to lane markers.  Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this? 
Just as with "Segmentation of Duties" Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system due to constructs and security that are available to use within the platform. For instance, when a WebCenter space is created, any content added within that space by default will be secured to that particular space and inherits meta data that is associated with a folder created for the space. Oracle WebCenter content uses meta data to support a broad range of rich ECM functionality and can automatically impart retention, workflow and other policies automatically on the basis of what has been defaulted for that space. Depending on your business needs, this paradigm will also extend to sub sections of a space, offering some interesting possibilities to enable automated management around content. An example may be press releases within a particular area of an extranet that require a five year retention period and need to the reviewed by marketing and legal before release.  The underlying content system will transparently take care of this process on the basis of the above rules, enabling peace of mind over unstructured data - which could otherwise become overwhelming. 4. Make Your First Project Your Second Imagine if Michael Phelps was competing in a swimming championship, but told right before his race that he had to use a brand new stroke.  There is no doubt that Michael is an outstanding swimmer, but chances are that he would like to have some time to get acquainted with the new stroke. New technologies should not be treated any differently.  Before jumping into the deep end it helps to take time to get to know the new approach - even though you may have been swimming thousands of times before. To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team to use to share project documents, discussions and announcements in an effort to help the actual deployment get under way, while increasing everyone’s knowledge of the platform and its functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise. 5. Get to Know the Community If you are reading this blog post you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information - which took substantial research to discover.  Chances were also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users, sharing key tips around technology.  The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners.  With the new Oracle WebCenter brand, opportunities to connect with these experts has become easier. Oracle WebCenter Blog Oracle Social Enterprise LinkedIn WebCenter Group Oracle WebCenter Twitter Oracle WebCenter Facebook Oracle User Groups Additionally, there are various Oracle WebCenter related blogs by an excellent grouping of services partners.

    Read the article

  • Refactoring Part 1 : Intuitive Investments

    - by Wes McClure
    Fear, it’s what turns maintaining applications into a nightmare.  Technology moves on, teams move on, someone is left to operate the application, what was green is now perceived brown.  Eventually the business will evolve and changes will need to be made.  The approach to those changes often dictates the long term viability of the application.  Fear of change, lack of passion and a lack of interest in understanding the domain often leads to a paranoia to do anything that doesn’t involve duct tape and bailing twine.  Don’t get me wrong, those have a place in the short term viability of a project but they don’t have a place in the long term.  Add to it “us versus them” in regards to the original team and those that maintain it, internal politics and other factors and you have a recipe for disaster.  This results in code that quickly becomes unmanageable.  Even the most clever of designs will eventually become sub optimal and debt will amount that exponentially makes changes difficult.  This is where refactoring comes in, and it’s something I’m very passionate about.  Refactoring is about improving the process whereby we make change, it’s an exponential investment in the process of change. Without it we will incur exponential complexity that halts productivity. Investments, especially in the long term, require intuition and reflection.  How can we tackle new development effectively via evolving the original design and paying off debt that has been incurred? The longer we wait to ask and answer this question, the more it will cost us.  Small requests don’t warrant big changes, but realizing when changes now will pay off in the long term, and especially in the short term, is valuable. I have done my fair share of maintaining applications and continuously refactoring as needed, but recently I’ve begun work on a project that hasn’t had much debt, if any, paid down in years.  This is the first in a series of blog posts to try to capture the process which is largely driven by intuition of smaller refactorings from other projects. Signs that refactoring could help: Testability How can decreasing test time not pay dividends? One of the first things I found was that a very important piece often takes 30+ minutes to test.  I can only imagine how much time this has cost historically, but more importantly the time it might cost in the coming weeks: I estimate at least 10-20 hours per person!  This is simply unacceptable for almost any situation.  As it turns out, about 6 hours of working with this part of the application and I was able to cut the time down to under 30 seconds!  In less than the lost time of one week, I was able to fix the problem for all future weeks! If we can’t test fast then we can’t change fast, nor with confidence. Code is used by end users and it’s also used by developers, consider your own needs in terms of the code base.  Adding logic to enable/disable features during testing can help decouple parts of an application and lead to massive improvements.  What exactly is so wrong about test code in real code?  Often, these become features for operators and sometimes end users.  If you cannot run an integration test within a test runner in your IDE, it’s time to refactor. Readability Are variables named meaningfully via a ubiquitous language? Is the code segmented functionally or behaviorally so as to minimize the complexity of any one area? 
Are aspects properly segmented to avoid confusion (security, logging, transactions, translations, dependency management etc.)? Is the code declarative (what) or imperative (how)? What matters, not how. LINQ is a great abstraction of the what, not how, of collection manipulation. The Reactive framework is a great example of the what, not how, of managing streams of data (see the short LINQ sketch at the end of this post). Are constants abstracted and named, or are they just inline? Do people constantly bitch about the code/design? If the code is hard to understand, it will be hard to change with confidence. It's a large undertaking if the original designers didn't pay much attention to readability, and as such it will never be done to "completion." Make sure not to go overboard; instead use this as you change an application, not in lieu of changes (as with testability). Complexity Simplicity will never be achieved; it's highly subjective. That said, a lot of code can be significantly simplified, so tidy it up as you go. Refactoring will often converge upon a simplification step after enough time; keep an eye out for this. Understandability In the process of changing code, one often gains a better understanding of it. Refactoring code is a good way to learn how it works. However, it's usually best in combination with other reasons, in effect killing two birds with one stone. Often this is done when readability is poor, in which case understandability is usually poor as well. In the large undertaking we are making with this legacy application, we will be replacing it. Therefore, understanding all of its features is important and this refactoring technique will come in very handy. Unused code How can deleting things not help? This is a freebie in refactoring; it's very easy to detect with modern tools, especially in statically typed languages. We have VCS for a reason: if in doubt, delete it out (ok, that was cheesy)! If you don't know where to start when refactoring, this is an excellent starting point! Duplication Do not pray and sacrifice to the anti-duplication gods; there are excellent examples where consolidated code is a horrible idea, usually with divergent domains. That said, mediocre developers live by copy/paste. Other times features converge and aren't combined. Tools for finding similar code are great for copy/paste problems. Knowledge of the domain helps identify convergent concepts that often lead to convergent solutions and will give intuition for where to look for conceptual repetition. 80/20 and the Boy Scouts It's often said that 80% of the time 20% of the application is used most. These tend to be the parts that are changed. There are also parts of the code where 80% of the time is spent changing 20% (probably for all the refactoring smells above). I focus on these areas any time I make a change and follow the philosophy of the Boy Scout in cleaning up more than I messed up. If I spend 2 hours changing an application, in the 20%, I'll always spend at least 15 minutes cleaning it or nearby areas. This gives a huge productivity edge over developers that don't. Ironically, after a short period of time the 20% shrinks enough that we don't have to spend 80% of our time there and can move on to other areas. Refactoring is highly subjective; never attempt to refactor to completion! Learn to be comfortable with leaving one part of the application in a better state than others. It's an evolution, not a revolution. 
These are some simple areas to look into when making changes, and they can help get one started in the process. I've often found that refactoring is a convergent process towards simplicity that sometimes spans a few hours, but it can often lead to massive simplifications over the timespan of weeks and months of regular development.
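As a small illustration of the declarative (what) versus imperative (how) point above, here is a minimal C# sketch; the Order type, its field names and the 100.00 threshold are invented purely for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

public record Order(string Customer, decimal Total);

public static class Example
{
    // Imperative: spells out *how* to walk the list and build the result.
    public static List<string> BigSpendersImperative(List<Order> orders)
    {
        var names = new List<string>();
        foreach (var order in orders)
        {
            if (order.Total > 100.0m && !names.Contains(order.Customer))
            {
                names.Add(order.Customer);
            }
        }
        names.Sort();
        return names;
    }

    // Declarative (LINQ): states *what* we want and lets the library do the walking.
    public static List<string> BigSpendersDeclarative(IEnumerable<Order> orders) =>
        orders.Where(o => o.Total > 100.0m)
              .Select(o => o.Customer)
              .Distinct()
              .OrderBy(name => name)
              .ToList();
}

The two methods return the same result; the declarative version simply reads as a description of the outcome rather than a recipe of steps.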

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known "obj_filter_list" parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outlined below with "obj_filter_list" still applies even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on the 5 partitioned tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of these sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn't just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTION parameter set to 'GATHER STALE'. Unfortunately the statistics on the 5 partitioned tables were not stale, and enabling incremental statistics does not mark the existing statistics stale. Plus, how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the "obj_filter_list" parameter in the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The "obj_filter_list" parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has 5 fields: the schema name or object owner, the object type (i.e., 'TABLE' or 'INDEX'), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry. An empty field in an entry is treated as a wildcard (similar to the '*' character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object qualifies for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It's probably easier to explain this with an example. I'm using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly, due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command. Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := 'SH'; You will get the same result: since you have specified gather_schema_stats, there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased, if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := 'sales';. However if you have a table called 'MyTab' instead of 'MYTAB', then you need to specify Obj_filter_lst(1).objname := '"MyTab"'; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here. 
+Maria Colgan

    Read the article

  • CodePlex Daily Summary for Monday, July 15, 2013

    CodePlex Daily Summary for Monday, July 15, 2013Popular ReleasesMVC Forum: MVC Forum v1.0: Finally version 1.0 is here! We have been fixing a few bugs, and making sure the release is as stable as possible. We have also changed the way configuration of the application works, mostly how to add your own code or replace some of the standard code with your own. If you download and use our software, please give us some sort of feedback, good or bad!SharePoint 2013 TypeScript Definitions: Release 1.1: TypeScript 0.9 support SharePoint TypeScript Definitions are now compliant with the new version of TypeScript TypeScript generics are now used for defining collection classes (derivatives of SP.ClientCollection object) Improved coverage Added mQuery definitions (m$) Added SPClientAutoFill definitions SP.Utilities namespace is now fully covered SP.UI modal dialog definitions improved CSR definitions improved, added some missing methods and context properties, mostly related to list ...GoAgent GUI: GoAgent GUI ??? 1.0.0: ????GoAgent GUI????,???????????.Net Framework 4.0 ???????: Windows 8 x64 Windows 8 x86 Windows 7 x64 Windows 7 x86 ???????: Windows 8.1 Preview (x86/x64) ???????: Windows XP ????: ????????GoAgent????,????????,?????????????,???????????????????,??????????,????。PiGraph: PiGraph 2.0.8.13: C?p nh?t:Các l?i dã s?a: S?a l?i không nh?p du?c s? âm. L?i tabindex trong giao di?n Thêm hàm Các l?i chua kh?c ph?c: L?i ghi chú nh?p nháy màu. L?i khung ghi chú vu?t ra kh?i biên khi luu file. Luu ý:N?u không kh?i d?ng duoc chuong trình, b?n nên c?p nh?t driver card d? h?a phiên b?n m?i nh?t: AMD Graphics Drivers NVIDIA Driver Xem yêu c?u h? th?ngD3D9Client: D3D9Client R12 for Orbiter Beta: D3D9Client release for orbiter BetaVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Project Server 2013 Event Handler Admin Tool: PSI Event Admin Tool: Download & exract the File. Use LoggerAdmin to upload the event handlers in project server 2013. PSIEventLogger\LoggerAdmin\bin\DebugGherkin editor: Gherkin Editor Beta 2: Fix issue #7 and add some refactoring and code cleanupNew-NuGetPackage PowerShell Script: New-NuGetPackage.ps1 PowerShell Script v1.2: Show nuget gallery to push to when prompting user if they want to push their package.Site Templates By Steve: SharePoint 2010 CORE Site Theme By Steve WSP: Great Site Theme to start with from Steve. See project home page for install instructions. This is a nice centered, mega-menu, fixed width masterpage with styles. Remember to update the mega menu lists.SharePoint Solution Installer: SharePoint Solution Installer V1.2.8: setup2013.exe now supports CompatibilityLevel to target specific hive Use setup.exe for SP2007 & SP2010. Use setup2013.exe for SP2013.TBox - tool to make developer's life easier.: TBox 1.021: 1)Add console unit tests runner, to run unit tests in parallel from console. Also this new sub tool can save valid xml nunit report. It can help you in continuous integration. 
2)Fix build scripts.LifeInSharepoint Modern UI Update: Version 2: Some minor improvements, including Audience Targeting support for quick launch links. Also removing all NextDocs references.Virtual Photonics: VTS MATLAB Package 1.0.13 Beta: This is the beta release of the MATLAB package with examples of using the VTS libraries within MATLAB. Changes for this release: Added two new examples to vts_solver_demo.m that: 1) generates and plots R(lambda) at a given rho, and chromophore concentrations assuming a power law for scattering, and 2) solves inverse problem for R(lambda) at given rho. This example solves for concentrations of HbO2, Hb and H20 given simulated measurements created using Nurbs scaled Monte Carlo and inverted u...Advanced Resource Tab for Blend: Advanced Resource Tab: This is the first alpha release of the advanced resource tab for Blend for Visual Studio 2012.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: - Using owner drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). - Added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different than the ones available in System.Drawing, so I did a first approach matching. ;-) - Added appointment status support (to show fr...Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsLINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.DotNetNuke® Community Edition CMS: 06.02.08: Major Highlights Fixed issue where the application throws an Unhandled Error and an HTTP Response Code of 200 when the connection to the database is lost. Security FixesNone Updated Modules/Providers ModulesNone ProvidersNoneNew Projects[.Net Intl] harroc_c;mallar_a;olouso_f: The goal of this project is to create a web crawler and a web front who allows you to search in your index. You will create a mini (or large!) search engine basButterfly Storage: Butterfly Storage is a data access technology based on object-oriented database model for Windows Store applications.KaveCompany: KaveCompleave that girl alone: a team project!MyClrProfiler: This project helps you learn about and develop your own CLR profiler.NETDeob: Deobfuscate obfuscated .NET files easilyProgram stomatologie: SummarySimple Graph Library: Simple portable class library for graphs data structures. .NET, Silverlight 4/5, Windows Phone, Windows RT, Xbox 360T6502 Emulator: T6502 is a 6502 emulator written in TypeScript using AngularJS. 
The goal is well-organized, readable code over performance.WP8 File Access Webserver: C# HTTP server and web application on Windows Phone 8. Implements file access, browsing and downloading.wpadk: wpadk????wp7?????? ?????????,?????、SDK、wpadk?????????????。??????????????????。??????????????????,????wpadk?????????????????????????????????????。xlmUnit: xlmUnit, Unit Testing

    Read the article

  • 7 Good Reasons to Upgrade E-Business Suite to the cloud

    - by Lisa Schwartz
    As promised here is blog Part 2: Why Upgrade to Oracle E-Business Suite 12 in the cloud? 7 Good Reasons to Upgrade to E-Business Suite 12 in the Cloud: 1) Take advantage of new and improved features: from global sub-ledger accounting to mobile access for supply chain management to built-in extensions for information search and discovery. If you haven't checked out the latest features yet, there are over 1000 EBS 12 enhancements. 2) Plan now to address any ongoing Oracle Support considerations and regulatory compliance requirements. EBS Release 11 support is ending soon. Based upon that information alone, you should have an EBS upgrade strategy and planning well underway. 3) Customizations got you worried? Expedite your next Oracle E-Business Suite upgrade – have Oracle identify all customizations, reduce unneeded customizations (EBS 12 has built in many of your customizations) and during the upgrade keep all necessary customizations to run your business. 4) Migrating EBS to the cloud allows parallel migration and testing. Therefore no extra hardware purchases are needed for testing and the upgrade. Business disruption is minimized. And, by moving to the cloud, this provides for smoother future upgrades that are based on your own timeline. 5) Oracle Experts will upgrade and run your EBS applications for you in the cloud. Free your IT resources to develop new services and work on projects that are critical to business innovation and competitiveness. Your IT resources will not be inundated with upgrade tasks! 6) Reallocate precious IT dollars to other projects, eliminate CapEx costs. 7) Oracle minimizes business risk by having enterprise class cloud services under stringent SLAs designed to run your business applications for you, such as: a. Enterprise grade infrastructure b. World-class security and identity management c.
Best practices in regulatory compliance: from classified federal gov't standards, to healthcare HIPAA standards, to meeting Financial Services requirements (PCI DSS). Next Step: To help you upgrade and get to the cloud in the shortest period of time, Oracle has a program called Oracle Upgrade Factory for Oracle E-Business Suite 12. It offers a unique approach, seamlessly bundling Managed Cloud Services and Oracle Consulting Services together for an entire Oracle E-Business Suite upgrade and migration to a managed private cloud. Read the Oracle Upgrade Factory Solution Brief here.

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I'm going to look at a couple of ways of getting this information and compare the effort required and the results achieved of each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don't already know how it works. When it comes to serious interrogation of your SQL Server activity then it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I'm relatively new to PowerShell so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.
    $Server = 'SERVERNAME'
    $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
    # connection and query stuff
    $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
    $Query = "EXEC sp_who2"
    $Connection = new-object system.Data.SQLClient.SQLConnection
    $Table = new-object "System.Data.DataTable"
    $Connection.connectionstring = $ConnectionStr
    try{
    $Connection.open()
    $Command = $Connection.CreateCommand()
    $Command.commandtext = $Query
    $result = $Command.ExecuteReader()
    $Table.Load($result)
    }
    catch{
    # Show error
    $error[0] | format-list -Force
    }
    $Title = "Data access processes (" + $Table.Rows.Count + ")"
    $Table | Out-GridView -Title $Title
    $Connection.close()
    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. The number of connections will influence how long the query runs. The grid viewer lets me sort and search the results so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me. I have added a few parameters to control the output and give me more specific details, but then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications.
    $Server = 'SERVERNAME'
    $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
    $Processes = $SMOServer.EnumProcesses()
    $Title = "SMO processes (" + $Processes.Rows.Count + ")"
    $Processes | Out-GridView -Title $Title
    Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to see which runs faster – so that I can get my results more quickly and also place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function. 
All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows this. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and while the results vary from execution to execution I have never seen the SMO version slower than the other. The difference has varied and the time for both has ranged from sub-second, as we see above, to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ:
Query execution method (sp_who2) | SMO EnumProcesses | Description
- | Urn | What looks like an XML or JSON representation of the server name and the process ID
SPID | Spid | The process ID
Status | Status | The status of the process
Login | Login | The login name of the user executing the command
HostName | Host | The name of the computer where the process originated
BlkBy | BlockingSpid | The SPID of a process that is blocking this one
DBName | Database | The database that this process is connected to
Command | Command | The type of command that is executing
CPUTime | Cpu | The CPU activity related to this process
DiskIO | - | The Disk IO activity related to this process
LastBatch | - | The time the last batch was executed from this process
ProgramName | Program | The application that is facilitating the process connection to the SQL Server
SPID1 | - | In my experience this is always the same value as SPID
REQUESTID | - | In my experience this is always 0
- | Name | In my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
- | MemUsage | An indication of the memory used by this process, but I don't know what it is measured in (bytes, Kb, Mb…)
- | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
- | ExecutionContextID | In my experience this is always 0, so it could be analogous to REQUESTID from sp_who2
Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns Disk IO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better? 
On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I'm sure I'll use it regularly. I'm OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option and that it really isn't a large enough margin to matter.
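Because SMO is a regular .NET library, the EnumProcesses call used in the second script is also available outside PowerShell. Below is a minimal C# sketch of the same idea; the server name is a placeholder, the columns printed are taken from the comparison above, and it assumes the SMO assemblies (e.g. the Microsoft.SqlServer.SqlManagementObjects NuGet package) are referenced:

using System;
using System.Data;
using Microsoft.SqlServer.Management.Smo;

class SmoProcesses
{
    static void Main()
    {
        // Placeholder server name - replace with your own instance.
        var server = new Server("SERVERNAME");

        // EnumProcesses() returns a DataTable, one row per server process.
        DataTable processes = server.EnumProcesses();

        Console.WriteLine("SMO processes ({0})", processes.Rows.Count);
        foreach (DataRow row in processes.Rows)
        {
            Console.WriteLine("{0}\t{1}\t{2}\t{3}",
                row["Spid"], row["Login"], row["Host"], row["Database"]);
        }
    }
}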

    Read the article

  • krb5-multidev, libk5crypto3, libk5crypto3:i386 package dependency

    - by TDalton
    Using Ubuntu 12.04 Asus U43F I can no longer update, install or remove packages via software center because of package dependency errors. sudo apt-get install -f reads the following: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libunity6 libqapt-runtime libboost-program-options1.46.1 akonadi-backend-mysql libqapt1 shared-desktop-ontologies libntrack0 ntrack-module-libnl-0 libntrack-qt4-1 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: krb5-multidev libk5crypto3:i386 libkrb5-dev Suggested packages: krb5-doc krb5-doc:i386 krb5-user:i386 The following packages will be upgraded: krb5-multidev libk5crypto3:i386 libkrb5-dev 3 upgraded, 0 newly installed, 0 to remove and 325 not upgraded. 11 not fully installed or removed. Need to get 0 B/213 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: error processing libk5crypto3 (--configure): libk5crypto3:amd64 1.10+dfsg~beta1-2ubuntu0.3 cannot be configured because libk5crypto3:i386 is in a different version (1.10+dfsg~beta1-2ubuntu0.1) dpkg: error processing libk5crypto3:i386 (--configure): libk5crypto3:i386 1.10+dfsg~beta1-2ubuntu0.1 cannot be configured because libk5crypto3:amd64 is in a different version (1.10+dfsg~beta1-2ubuntu0.3) dpkg: dependency problems prevent configuration of libkrb5-3: libkrb5-3 depends on libk5crypto3 (>= 1.9+dfsg~beta1); however:No apport report written because MaxReports is reached already Package libk5crypto3 is not configured yet. dpkg: error processing libkrb5-3 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgssapi-krb5-2: libgssapi-krb5-2 depends on libk5crypto3 (>= 1.8+dfsg); however: Package libk5crypto3 is not configured yet. libgssapi-krb5-2 depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.3); however: Package libkrb5-3 is not configured yet. dpkg: error processing libgssapi-krb5-2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgssrpc4: libgssrpc4 depends on libgssapi-krb5-2 (>= 1.10+dfsg~); however: Package libgssapi-krb5-2 is not configured yet. dpkg: error processing libgssrpc4 (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libkadm5srv-mit8: libkadm5srv-mit8 depends on libgssapi-krb5-2 (>= 1.6.dfsg.2); however: Package libgssapi-krb5-2 is not configured yet. libkadm5srv-mit8 depends on libgssrpc4 (>= 1.6.dfsg.2); however: Package libgssrpc4 is not configured yet. libkadm5srv-mit8 depends on libk5crypto3 (>= 1.6.dfsg.2); however: Package libk5crypto3 is not configured yet. libkadm5srv-mit8 depends on libkrb5-3 (>= 1.9+dfsg~beta1); however: Package libkrb5-3 is not configured yet. dpkg: error processing libkadm5srv-mit8 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libkadm5clnt-mit8: libkadm5clnt-mit8 depends on libgssapi-krb5-2 (>= 1.10+dfsg~); however: Package libgssapi-krb5-2 is not configured yet. 
libkadm5clnt-mit8 depends on libgssrpc4 (>= 1.6.dfsg.2); however: Package libgssrpc4 is not configured yet.No apport report written because MaxReports is reached already libkadm5clnt-mit8 depends on libk5crypto3 (>= 1.6.dfsg.2); however: Package libk5crypto3 is not configured yet. libkadm5clnt-mit8 depends on libkrb5-3 (>= 1.8+dfsg); however: Package libkrb5-3 is not configured yet. dpkg: error processing libkadm5clnt-mit8 (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of krb5-multidev: krb5-multidev depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libkrb5-3 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libk5crypto3 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libk5crypto3 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libgssapi-krb5-2 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libgssapi-krb5-2 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libgssrpc4 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libgssrpc4 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libkadm5srv-mit8 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libkadm5srv-mit8 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libkadm5clnt-mit8 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libkadm5clnt-mit8 on system is 1.10+dfsg~beta1-2ubuntu0.3. dpkg: error processing krb5-multidev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libkrb5-dev: libkrb5-dev depends on krb5-multidev (= 1.10+dfsg~beta1-2ubuntu0.2); however: Package krb5-multidev is not configured yet. dpkg: error processing libkrb5-dev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libkrb5-3:i386: libkrb5-3:i386 depends on libk5crypto3 (>= 1.9+dfsg~beta1); however: Package libk5crypto3:i386 is not configured yet. dpkg: error processing libkrb5-3:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgssapi-krb5-2:i386: libgssapi-krb5-2:i386 depends on libk5crypto3 (>= 1.8+dfsg); however: Package libk5crypto3:i386 is not configured yet. libgssapi-krb5-2:i386 depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.3); however: Package libkrb5-3:i386 is not configured yet. 
dpkg: error processing libgssapi-krb5-2:i386 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: libk5crypto3 libk5crypto3:i386 libkrb5-3 libgssapi-krb5-2 libgssrpc4 libkadm5srv-mit8 libkadm5clnt-mit8 krb5-multidev libkrb5-dev libkrb5-3:i386 libgssapi-krb5-2:i386 E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried to fix the broken dependencies via synaptic package manager, but it returns with an error: E: libk5crypto3: libk5crypto3:amd64 1.10+dfsg~beta1-2ubuntu0.3 cannot be configured because libk5crypto3 E: libkrb5-3: dependency problems - leaving unconfigured E: libgssapi-krb5-2: dependency problems - leaving unconfigured E: libgssrpc4: dependency problems - leaving unconfigured E: libkadm5srv-mit8: dependency problems - leaving unconfigured E: libkadm5clnt-mit8: dependency problems - leaving unconfigured E: krb5-multidev: dependency problems - leaving unconfigured E: libkrb5-dev: dependency problems - leaving unconfigured I haven't gotten help from ubuntuforums.org on this issue. Please help, obi-wan

    Read the article

  • Need personal advice on how to get out of a company

    - by SOfan
    Hi, I am an SO user since past 6 months and this is the first time I am turning to SO for personal help. I have asked technical questions before with my real ID. I am stuck inside a service based IT company for the past one year and haven't been able to decide if to leave it, when to leave it and how to leave it. I had taken 2 weeks LWP on medical reason roughly at end of 1 year and then soon after reporting, I applied for 2 months more LWP (on medical/personal ground) with the intention of working on my health,take up a hobby class to ward off depression,pessimism, to have some fun in life, and to look for a job which I really would be excited about - that interests me and which matches with my strength. My leave starts from this Monday. So in any case, I had hard set in mind that I will leave the company after I join them back hopefully with some job offer already in hand (after figuring out what I want do). Neither I can stand the past project,past colleagues,company, HR, pathetically low salary. But if I really listen to my heart, I don't want to have to go back to that office after my sabbatical and again have to see those people. I will have to resign it after my sabbatical ends. Then HR people perhaps wont like it, may even accuse me on face or behind back that primary purpose of my leave must have been to hunt for a better job and I lied about medical and person reasons. Also, if they get nasty and force me to serve 2 months notice period. There is no way I see myself after sabbatical resuming in old project or starting new work. It will be a pain. Since they have already approved 2 months leave and stuff, ideally if they want, they should be just able to relieve me right on the next day after I join back. But, I don't know if they want to get nasty, will they mention about my 2 months sabbatical leave in my experience letter or more scary, the term medical/personal reason. I have hard earned my experience here, have worked against my will, mostly it has been painful and slogged like anything, because I realize the importance of work experience in IT industry. I don't have greed to have those 2 months included extra in my experience letter, but I don't want to mess up with my experience letter in a way which makes my next employer ask question, get suspicious, or be wary if I have any medical reason going on. Being an emotional,moody person or somebody who can't be in an environment, once my mind and heart starts hating it. I think it perhaps is best, if I resign on Monday itself telling them (in polite manner) something that look I took sabbatical for some reason but I don't want to resume working in the company after my sabbatical ends. So please accept my resignation. Now tell me what you want to do about my leave request, my notice period and when you are willing to relieve me. What should I write and how? Some background: I am working in an IT company in India.I am overqualified in the company. It is grossly underpaying me. My education qualifications far exceed anyone's in the whole company being a CS undergrad as well as a CS grad. I joined this company after finishing the grad. I had self-doubts about my skills and interest as a programmer. I like doing research oriented work, though didn't have any particular success during grad. My life here has been very hectic. The project containing many many sub-projects has kept me on my toes and I have never really liked the work. I have been playing against my strength. 
Also the company strict internet usage policy (you can't read gmails, can't browse any non-work related sites not even news). When working for a client, from the machine we can't even check company related emails.For this one has to go to kiosk like 5 machines in a small room etc. Most of the times those machines are not available, so it was not unusual to keep making rounds to these kiosk machines to check company emails, browse company related emails etc.So it was not so easy to keep in touch with company related basic affairs for a not particular careful person. Things like this which are new to me, make me feel restricted. I am an undecisive person with a sense of failure, self-doubt, not meeting up unrealistic expectation. Somewhere at back of mind, I envy my classmates who make a smooth transition from company to company without causing any gap in their resume. I on other hand have gaps in resume. I get tired after working in a place for sometime. problem with colleagues in general. I am not particular great with people, have few friends, not known for a fun nature, rather serious, scholar. I am not a typical conventional female. I think females are usually more disciplined. But I am not so. I reach office late (though after informing manager). I don't want to blame them entirely, because from my past, it is not unusual for me to get undecisive on things. Also I had doubts about my ability as researched and to succeed there. of building a relationship in a group, to have something to talk about, newspaper. I get cut-off from people. peer pressure. I make blunders in coding, lose patience. Consciously or unconsciously I feel contempt for people here, work here, environment here. I have doubts that either I go to a place which does innovation, does research oriented work, product biggies, have great motivated people, have competent people passionate about products they are building. But then I also doubt my ability to survive there. I have identified that an idea job for me would be 4 days a week, a high salary job. When among people in company/team, I can't think much. I need some time at home to read good authentic books written in good style on what work I am doing.So that I am comfortable with my understanding of work. I get into pressure easily under deadline and need 5th day to cool myself off. I took for 2 weeks leave, because each day was hell for me. May be the depression phase of bipolar is on and also partially it could be that being a work centered person, who derives happiness,self-esteem from work, haven't been enjoying work and have been working for the sole person of proving stability, and ability to stick, against all odds, and facing what challenges I see, bonding with people, identifying opportunities to learn in given task etc.have been averaging one day LWP in 1 week or 10 days. or may be because of my nature,ADD,not being able to switch context,out of touch with news, don't have a circle of friends with who I enjoy. less knowledge in general to talk about, just some technical stuff.anyway, so due to emotional reason, some practical reason etc, I wanted to be very sure before leaving. So my leave starts from Monday and I should feel happy about it. I have taken the leave to for a few purposes - to take care of my health by regular yoga/exercise (with project on, I just can't do anything regular), reassess myself to see what I want to try next which work I might like, look for next job, take up a hobby which I like say singing. 
I am not clear on my career,job aspiration. I have tried my hands on research. During this year appraisal yesterday, I even had some conflict with my last manager. In meeting with me one on one, he would say all nice things about me, but in feedback to new manager, he hasn't given any excellent feedback. It is all only good. I am angry at this old Manager. Also new manager also scolded me as I didn't agree to his appraisal and waited to hear myself from old Manager. He kind of scolded me for wasting his time. Am I being unethical somewhere? I am always very conscious of if I am cheating anywhere. What advice I am seeking? How to resign and what to write in resignation letter

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

    CodePlex Daily Summary for Sunday, September 29, 2013Popular ReleasesAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features -------- list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed ----- case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download end...Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi servers connections.Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (can run with no issues from a flash drive).CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK Fixed issue where constant for primary key attribute was being duplicated in all entity classes Added ability to override base class for entity classesC# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. 
- MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart, this is the maint release to update to the new movie api. please send feedback on fix requests.FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. - Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...Rawr: Rawr 5.4.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: Signal R sample is added.CODE Framework: 4.0.30923.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.JayData -The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.2 - Indian Summer Edition For detai...ZXing.Net: ZXing.Net 0.12.0.0: sync with rev. 
2892 of the java version new PDF417 decoder improved Aztec decoder global speed improvements direct Kinect support for ColorImageFrame better Structured Append support many other small bug fixes and improvementsNew ProjectsCACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASEClassic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.Easy Code: A setting framework.EduSoft: This is a school eg.GameStuff: GameStuff is a library of Physics and Geometrics concepts for video game. Nekora Test Project: Nekora test projectPopCorn Console Game: Simple console gameRadioController: This project started from people installing Tablets in Mustangs. You would typically loose most control of the radio. This projects brings that back!Random searcher i pochodne: Wyszukiwarka plików multimedialnych i czego dusza zapragnie.SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org

    Read the article

  • Error while installation of CHMSee

    - by Anshuman Chakraborty
    I have recently migrated from Windows to Ubuntu. My current locale shows below output :- cha@COMPUTER:~$ locale LANG=en_IN LANGUAGE=en_IN:en LC_CTYPE="en_IN" LC_NUMERIC="en_IN" LC_TIME="en_IN" LC_COLLATE="en_IN" LC_MONETARY="en_IN" LC_MESSAGES="en_IN" LC_PAPER="en_IN" LC_NAME="en_IN" LC_ADDRESS="en_IN" LC_TELEPHONE="en_IN" LC_MEASUREMENT="en_IN" LC_IDENTIFICATION="en_IN" LC_ALL= When I am trying to install CHMSee (or any other Application) using UBUNTU Software Center. I am getting below error. installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Selecting previously unselected package libchm1. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 207053 files and directories currently installed.) Unpacking libchm1 (from .../libchm1_2%3a0.40a-1_i386.deb) ... Selecting previously unselected package libjavascriptcoregtk-1.0-0. Unpacking libjavascriptcoregtk-1.0-0 (from .../libjavascriptcoregtk-1.0-0_1.8.0-0ubuntu2_i386.deb) ... Selecting previously unselected package libwebkitgtk-1.0-common. Unpacking libwebkitgtk-1.0-common (from .../libwebkitgtk-1.0-common_1.8.0-0ubuntu2_all.deb) ... 
Selecting previously unselected package libwebkitgtk-1.0-0. Unpacking libwebkitgtk-1.0-0 (from .../libwebkitgtk-1.0-0_1.8.0-0ubuntu2_i386.deb) ... Selecting previously unselected package chmsee. Unpacking chmsee (from .../chmsee_1.3.0-2ubuntu2_i386.deb) ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Setting up qmail (1.06-4) ... The hostname -f command returned: $1 Your system needs to have a fully qualified domain name (fqdn) in order to install the var-qmail packages. Installation aborted. dpkg: error processing qmail (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of qmail-run: qmail-run depends on qmail (>= 1.06-2.1); however: Package qmail is not configured yet. dpkg: error processing qmail-run (--configure): dependency problems - leaving unconfigured Setting up libchm1 (2:0.40a-1) ... No apport report written because the error message indicates its a followup error from a previous failure. Setting up libjavascriptcoregtk-1.0-0 (1.8.0-0ubuntu2) ... Setting up libwebkitgtk-1.0-common (1.8.0-0ubuntu2) ... Setting up libwebkitgtk-1.0-0 (1.8.0-0ubuntu2) ... Setting up chmsee (1.3.0-2ubuntu2) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: qmail qmail-run Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up qmail (1.06-4) ... The hostname -f command returned: $1 Your system needs to have a fully qualified domain name (fqdn) in order to install the var-qmail packages. Installation aborted. dpkg: error processing qmail (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of qmail-run: qmail-run depends on qmail (>= 1.06-2.1); however: Package qmail is not configured yet. dpkg: error processing qmail-run (--configure): dependency problems - leaving unconfigured

    Can someone please help me resolve this issue? Any elaboration would be much appreciated since I am very new to this. Thanks, Anshuman Chakraborty
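
    The warnings above suggest the requested en_IN locale has never been generated, and the separate qmail failure is caused by the machine lacking a fully qualified domain name. A minimal shell sketch of one common way to address both follows; it assumes the en_IN.UTF-8 definition is available to locale-gen, and ".localdomain" is only a placeholder domain, not a value taken from the question.

      # generate the missing locale and make it the system default
      sudo locale-gen en_IN.UTF-8
      sudo update-locale LANG=en_IN.UTF-8 LANGUAGE=en_IN:en
      sudo dpkg-reconfigure locales

      # qmail's post-install script wants `hostname -f` to return a FQDN;
      # map the current hostname to a placeholder domain for now
      echo "127.0.1.1 $(hostname).localdomain $(hostname)" | sudo tee -a /etc/hosts

      # finish configuring the packages that failed earlier
      sudo dpkg --configure -a

    After logging out and back in, the perl locale warnings should no longer appear when installing packages.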

    Read the article

  • Computer Networks UNISA - Chap 10 – In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to:
    - Understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation
    - Explain the differences between public and private TCP/IP networks
    - Describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4
    - Employ multiple TCP/IP utilities for network discovery and troubleshooting

    Designing TCP/IP-Based Networks
    The following sections explain how network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments.

    Subnetting
    Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to accomplish the following:
    - Enhance security
    - Improve performance
    - Simplify troubleshooting

    The challenges of classful addressing in IPv4 (no subnetting)
    The simplest type of IPv4 addressing is known as classful addressing (the Class A, Class B and Class C network addresses). Classful addressing has the following limitations:
    - Restriction in the number of usable IPv4 addresses (a Class C network would be limited to 254 addresses)
    - Difficulty separating traffic from various parts of a network
    Because of the above reasons, subnetting was introduced.

    IPv4 Subnet Masks
    Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address: a 1 in the subnet mask indicates that the corresponding bit in the IPv4 address contains network information, and likewise a 0 indicates host information. Each network class is associated with a default subnet mask:
    - Class A = 255.0.0.0
    - Class B = 255.255.0.0
    - Class C = 255.255.255.0
    An example of calculating the network ID for a particular device with a subnet mask is shown below:
    IP Address = 199.34.89.127
    Subnet Mask = 255.255.255.0
    Resultant Network ID = 199.34.89.0

    IPv4 Subnetting Techniques
    Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation.

    Calculating IPv4 Subnets
    Read pages 491 to 494 for an explanation. Important: subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can then be prioritized and segmented.

    CIDR (Classless Interdomain Routing)
    CIDR is also known as classless routing or supernetting. In CIDR, conventional network class distinctions do not exist; a subnet boundary can move to the left, thereby generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came a new shorthand for denoting the position of subnet boundaries, known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your network's routers must be able to interpret IP addresses that don't adhere to conventional network class parameters. Routers that rely on older routing protocols (e.g. RIP) are not capable of interpreting classless IP addresses.
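
    As a quick illustration of the network ID calculation and the slash notation described above (this sketch is not from the chapter), the following shell snippet ANDs an IPv4 address with its subnet mask and counts the mask's 1 bits to produce CIDR notation:

      #!/bin/bash
      # Compute the network ID and CIDR prefix for the example in the text.
      ip="199.34.89.127"
      mask="255.255.255.0"

      # Split both dotted-quad strings into their four octets.
      IFS=. read -r i1 i2 i3 i4 <<< "$ip"
      IFS=. read -r m1 m2 m3 m4 <<< "$mask"

      # The network ID is the bitwise AND of the address and mask, octet by octet.
      net="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"

      # The CIDR prefix length is the number of 1 bits in the mask.
      prefix=0
      for octet in "$m1" "$m2" "$m3" "$m4"; do
          for bit in 128 64 32 16 8 4 2 1; do
              if (( octet & bit )); then prefix=$((prefix + 1)); fi
          done
      done

      echo "Network ID: $net"              # prints: Network ID: 199.34.89.0
      echo "CIDR notation: $net/$prefix"   # prints: CIDR notation: 199.34.89.0/24

    Changing the mask to 255.255.240.0, for example, moves the boundary to the left of the default Class C mask and reports the supernet 199.34.80.0/20.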
    Internet Gateways
    Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP-based network has a default gateway (a gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets). The internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data's destination. The gateways that make up the internet backbone are called core gateways.

    Address Translation
    An organization's default gateway can also be used to "hide" the organization's internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restriction. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the internet, they must have legitimate IP addresses to exchange data. When a client's transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client's private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation).

    TCP/IP Mail Services
    All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below.

    SMTP (Simple Mail Transfer Protocol)
    - The protocol responsible for moving messages from one mail server to another over TCP/IP-based networks
    - SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol
    - Operates from port 25 on the SMTP server
    - A simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue

    MIME (Multipurpose Internet Mail Extensions)
    - The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email
    - MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message
    - MIME identifies each element of a mail message according to content type
    - MIME does not replace SMTP but works in conjunction with it
    - Most modern email clients and servers support MIME
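
    To see how little SMTP itself does, here is a hypothetical manual session against a mail server on port 25; the host and addresses are placeholders, and most real servers today will additionally require STARTTLS or authentication before accepting mail:

      # open a raw connection to a (placeholder) mail server on port 25
      telnet mail.example.com 25
      # the server sends its 220 greeting, then the client types:
      HELO client.example.com
      MAIL FROM:<alice@example.com>
      RCPT TO:<bob@example.com>
      DATA
      Subject: Test message

      Hello, delivered with nothing more than the SMTP commands above.
      .
      QUIT

    Anything richer than short plain text in that message body (attachments, images, non-ASCII characters) is MIME's job, as described above.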
    POP (Post Office Protocol)
    - POP is an application layer protocol used to retrieve messages from a mail server
    - POP3 relies on TCP and operates over port 110
    - With POP3, mail is delivered to and stored on a mail server until it is downloaded by a user
    - A disadvantage of POP3 is that it typically does not allow users to save their messages on the server; because of this, IMAP is sometimes used instead

    IMAP (Internet Message Access Protocol)
    - IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3
    - The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server, rather than having to continually download them
    - Users can retrieve all or only a portion of any mail message
    - Users can review their messages and delete them while the messages remain on the server
    - Users can create sophisticated methods of organizing messages on the server
    - Users can share a mailbox in a central location
    - Disadvantages of IMAP are typically related to the fact that it requires more storage space on the server

    Additional TCP/IP Utilities
    Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities; research their use on your own (a few example invocations are sketched after the list):
    - Ipconfig (Windows) & Ifconfig (Linux)
    - Netstat
    - Nbtstat
    - Hostname, Host & Nslookup
    - Dig (Linux)
    - Whois (Linux)
    - Traceroute (Tracert)
    - Mtr (my traceroute)
    - Route
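
    As a starting point for that "research their use on your own" list, the invocations below are illustrative only; the host names are placeholders, and dig, whois, traceroute and mtr may need to be installed separately on some systems:

      ifconfig                     # show interface addresses on Linux (ipconfig on Windows)
      netstat -tuln                # list listening TCP/UDP ports
      hostname -f                  # print this machine's fully qualified name
      nslookup www.example.com     # resolve a host name via DNS
      dig www.example.com          # more detailed DNS lookup
      whois example.com            # query domain registration records
      traceroute www.example.com   # list the routers/gateways along the path (tracert on Windows)
      mtr www.example.com          # combined ping and traceroute, updated live
      route -n                     # display the kernel routing table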

    Read the article

  • Sudo apt-get update -f does not work?

    - by BrianO09
    I am a bit of a noob with Linux. Several months ago I updated to Ubuntu 12.04, then stopped using Ubuntu for a while for a variety of reasons. Now I would like to go back to it, but I have a couple of problems. For one thing, the Software Center will simply not load. I click on the icon, the program comes up, but it never loads, and when I close it I get a "window not responding" message. While reading some threads to fix this issue, the common theme was that the main solution was to update by running: sudo apt-get install --reinstall software-center However, when I run that, I get the following (long): bcoleary@ubuntu:~$ sudo apt-get install --reinstall software-center [sudo] password for bcoleary: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kdelibs-bin : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed kdelibs5-plugins : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkntlm4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed kdoctools : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkcmutils4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkde3support4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkpty4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkdeclarative5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkdewebkit5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkdnssd4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkemoticons4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkfile4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkhtml5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkidletime4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkio5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to 
be installed libkjsembed4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkmediaplayer4 : Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libknewstuff3-4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libknotifyconfig4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkparts4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkrosscore4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libktexteditor4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libnepomuk4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libnepomukquery4a : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libnepomukutils4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libplasma3 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libthreadweaver4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So the next thing I tried was:

    sudo apt-get -f install

    The following has been cut down, but you get the idea:

    Errors were encountered while processing: libkdeclarative5 libkcmutils4 libnepomuk4 libkio5 libnepomukquery4a libnepomukutils4 libkparts4 libkdewebkit5 libkdnssd4 libknewstuff3-4 libplasma3 libnepomuksync4 libkemoticons4 libkfile4 libktexteditor4 libkhtml5 libkidletime4 libkmediaplayer4 libknotifyconfig4 libnepomukdatamanagement4 libkde3support4 libkjsembed4 libkrosscore4 kdoctools kdelibs-bin libkatepartinterfaces4 katepart kdelibs5-plugins plasma-scriptengine-javascript kde-runtime amarok libkdcraw20 libkgeomap1 libkipi8 libkvkontakte1 kipi-plugins digikam libkonq-common libkonq5abi1 dolphin kde-baseapps-bin kdebase-runtime libkcddb4 kdemultimedia-kio-plugins kdepimlibs-kio-plugins libkonqsidebarplugin4a konqueror konqueror-nsplugins libakonadi-kde4 libakonadi-calendar4 libkabc4 Processing was halted because there were too many errors. E: Sub-process /usr/bin/dpkg returned an error code (1)

    Basically it said a ton of stuff was missing. Maybe this happened when I upgraded; I am not sure. Is there a way to fix this? And if not, what is the best way to uninstall and reinstall Ubuntu? It is currently dual-booted with Windows 7. If you need any more info, please let me know. Thank you for helping a beginner! :)
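
    The pattern in the output above (packages still built against KDE 4.8.3 while the archive now provides 4.8.5) is usually repaired by refreshing the package lists and letting apt bring everything up to the same version. The sequence below is a sketch of the usual approach, not a guaranteed fix for this particular system, and anything apt proposes to remove should be reviewed before confirming:

      # refresh the package lists, then let apt repair broken dependencies
      sudo apt-get update
      sudo apt-get -f install

      # finish configuring anything dpkg left half-installed
      sudo dpkg --configure -a

      # bring all installed packages (including the KDE 4.8.x libraries)
      # up to the versions currently in the repositories
      sudo apt-get dist-upgrade

      # once apt is healthy again, reinstalling the Software Center should succeed
      sudo apt-get install --reinstall software-center

    Run these from a virtual console or terminal with the Software Center closed, since a running package manager holds the dpkg lock and will block apt.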

    Read the article

  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC?
    a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview

    II. Software Requirements for this tutorial
    a. Visual Studio 2010/2012. You can get your free copy here: Microsoft Visual Studio 2012
    b. MVC Framework
       Option 1 - Install using a standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=30683
       Option 2 - Install using the Web Platform Installer: http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc

    III. Creating your first MVC4 Application
    a. In Visual Studio, click the File > New > Solution link.
    b. Click Other Project Types > Visual Studio Solutions, on the templates window select Blank Solution, and let us name our solution MVCPrimer.
    c. Now click File > New and select Project.
    d. Select Visual C# > Web, select ASP.NET MVC 4 Web Application, and enter MyWebSite as the name.
    e. Select Empty, choose Razor as the view engine, and uncheck "Create a unit test project".
    f. You can now view a basic MVC 4 application structure in your Solution Explorer.
    g. Now we will add our first controller by right clicking the Controllers folder in Solution Explorer and selecting Add > Controller.
    h. Change the name of the controller to HomeController and under the scaffolding options select Empty MVC Controller.
    i. You will now see a basic controller with an Index method that returns an ActionResult.
    j. We will now add a new view folder for our Home controller. Right click the Views folder in Solution Explorer, select Add > New Folder, and name this folder Home.
    k. Add a new view by right clicking the Views > Home folder and selecting Add View.
    l. Name the view Index, select Razor (CSHTML) as the view engine, leave all checkboxes unchecked for now, and click Add.
    m. Note the relationship between our HomeController and the Home views sub folder.
    n. Add new HTML content to our newly created Index view.
    o. Press F5 to run our MVC application.
    p. We will create our new model. Right click the Models folder in Solution Explorer and select Add > Class.
    q. Let us name our class Customer.
    r. Edit the Customer class with the following code (the class defines the Id, CompanyName, ContactNo, ContactPerson, and Description properties used by the controller below).
    s. Open the HomeController by double clicking HomeController in our Controllers folder and edit it as follows:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    namespace MyWebSite.Controllers
    {
        public class HomeController : Controller
        {
            //
            // GET: /Home/

            public ActionResult Index()
            {
                return View();
            }

            public ActionResult ListCustomers()
            {
                List<Models.Customer> customers = new List<Models.Customer>();

                //Add First Customer to Our Collection
                customers.Add(new Models.Customer()
                {
                    Id = 1,
                    CompanyName = "Volvo",
                    ContactNo = "123-0123-0001",
                    ContactPerson = "Gustav Larson",
                    Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"
                });

                //Add Second Customer to Our Collection
                customers.Add(new Models.Customer()
                {
                    Id = 2,
                    CompanyName = "BMW",
                    ContactNo = "999-9876-9898",
                    ContactPerson = "Franz Josef Popp",
                    Description = "Bayerische Motoren Werke AG, (BMW; English: Bavarian Motor Works) is a " +
                                  "German automobile, motorcycle and engine manufacturing company founded in 1917. "
                });

                //Add Third Customer to Our Collection
                customers.Add(new Models.Customer()
                {
                    Id = 3,
                    CompanyName = "Audi",
                    ContactNo = "983-2222-1212",
                    ContactPerson = "Karl Benz",
                    Description = " is a multinational division of the German manufacturer Daimler AG,"
                });

                return View(customers);
            }
        }
    }

    t. Let us now create a view for this class. But before continuing, press Ctrl + Shift + B to rebuild the solution; this will make the previously created model available in the Model class drop-down of the Add View menu. Right click the Views > Home folder and select Add > View.
    u. Let us name our view ListCustomers, select Razor (CSHTML) as the view engine, put a check mark on "Create a strongly-typed view", and select Customer (MyWebSite.Models) as the model class. Select List as the scaffold template and click OK.
    v. Run the MVC application by pressing F5 and in the address bar append Home/ListCustomers. We should now see a web page similar to the one below.
    x. You can edit ListCustomers.cshtml to remove and add HTML code:

    @model IEnumerable<MyWebSite.Models.Customer>

    @{
        Layout = null;
    }

    <!DOCTYPE html>
    <html>
    <head>
        <meta name="viewport" content="width=device-width" />
        <title>ListCustomers</title>
    </head>
    <body>
        <h2>List of Customers</h2>
        <table border="1">
            <tr>
                <th>
                    @Html.DisplayNameFor(model => model.CompanyName)
                </th>
                <th>
                    @Html.DisplayNameFor(model => model.Description)
                </th>
                <th>
                    @Html.DisplayNameFor(model => model.ContactPerson)
                </th>
                <th>
                    @Html.DisplayNameFor(model => model.ContactNo)
                </th>
            </tr>
            @foreach (var item in Model) {
            <tr>
                <td>
                    @Html.DisplayFor(modelItem => item.CompanyName)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.Description)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.ContactPerson)
                </td>
                <td>
                    @Html.DisplayFor(modelItem => item.ContactNo)
                </td>
            </tr>
            }
        </table>
    </body>
    </html>

    y. Press F5 to run the MVC application.
    z. You will notice some @Html.DisplayFor calls. These are called HTML helpers; you can read more about HTML helpers on this site: http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp

    That's all. You now have your first MVC4 Razor Engine web application...

    Read the article

< Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >