Search Results

Search found 12992 results on 520 pages for 'password recovery'.


  • How To: LIC of India Online Policy Payments And Status Enquiries

    - by Kavitha
    Life Insurance Corporation (LIC) of India is the largest state-owned insurance company in India and also the country’s largest investor. Premiums for insurance policies purchased from LIC are traditionally paid by visiting the nearest LIC office or through LIC agents. It’s a time-consuming process, and most of us are fed up with standing in long queues at LIC offices to pay a premium. LIC Online Services Website Those worries are over: there is no need to stand in a long queue or approach an agent to pay your LIC policies. LIC of India has an online payment and renewal facility at http://licindia.in. To pay policies online, you have to register with LIC and log in to the site using the registered username and password. Once you log in, you can enter your profile information and the LIC policies purchased in your name (register only policies purchased in your own name; otherwise you can run into trouble). Once registered, activities like payments, loan eligibility checking, policy maturity, etc. are very easy to manage. For online payment of policies, the Pay Premium Online tab takes you to a page that lists all the policies that are due. Payments can be made using credit/debit cards and online banking systems; almost all Indian banks are covered by the online payment system. Other services available through LIC’s online system are: View ULIP Policies, Premium Calendar, Calculate Loan Eligibility, Revival Quote, Policy Maturity, Address Change Requests, etc. LIC Policy Status Enquiry Through Phone LIC also has a helpline/customer care number ‘1251’. You can call 1251 to ask about your policy status, premium due date, loan eligibility and possible loan amount, time of maturity, and so on.

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to call these web services, how to authenticate to them, and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. A lot has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. I evaluated SharePoint's built-in services against creating my own WCF services to do what I needed. The project I'm working on right now involved several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team, and they would simply call the methods it exposed and voila! It would perform some task or operation in SharePoint. SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks I was to put together. My data layer was much the same: it implemented an interface with some of the tasks I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer implemented entirely with SharePoint's built-in web services and another implemented with my own WCF service. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service. So, for example, for Lists.asmx I created a class called "DocumentRetrieval"; this class would do the grunt work of connecting to the correct URL, performing the basic operation of contacting the service, and so on. For Views.asmx, I implemented a class called "ViewRetrieval" with the same idea, but interacting with the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, to compare them side by side, we can look at both data layers and what is involved in each.
Lets take a look at the "Create Project" task or operation. The integration point is described as , "dll is to provide a way to create a project in SharePoint". Projects , in this case are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough right? Use the list.asmx Web service in SharePoint. So here we go! Lets take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:  class DocumentRetrieval     {         private ListsSoapClient _service;      d   private bool _impersonation;         public DocumentRetrieval(bool impersonation, string endpt)         {             _service = new ListsSoapClient();             this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));             _impersonation = impersonation;             if (_impersonation)             {                 _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];                 _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];                 _service.ClientCredentials.Windows.AllowedImpersonationLevel =                     System.Security.Principal.TokenImpersonationLevel.Impersonation;             }     private void SetEndPoint(string p)          {             _service.Endpoint.Address = new EndpointAddress(p);          }          /// <summary>         /// Creates a document library with specific name and templateID         /// </summary>         /// <param name="listName">New list name</param>         /// <param name="templateID">Template ID</param>         /// <returns></returns>         public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)         {             XmlDocument sample = new XmlDocument();             XmlElement viewCol = sample.CreateElement("Empty");             try             {                 _service.Open();                 viewCol = _service.AddList(listName, "", templateID);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);                             }finally             {                 _service.Close();             }                                      return viewCol;         } } There was a lot more in this class (that I am not including) because i was reusing the grunt work and making other operations with LIst.asmx, For example, updating content types, changing or configuring lists or document libraries. One of the first things I noticed about working with the built in services is that you are really at the mercy of what is available to you. Before creating a document library (Project) I wanted to expose a IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well there is no service call or method available to do that check. 
    So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists. It was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little getting used to. Instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through and find whether the attribute "Title" existed with a value of the list name; if so it would return true, if not false. I didn't particularly like working this way: dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document Library, Image Library, Custom List, Project Tasks, etc… they all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views per project. Each project specified unique configurations (auditing, versioning, content types, etc…). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too. Using the web service calls, you could configure views, versioning, even content types, etc… The only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can work with it, I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying. The service call would return a generic error message that did not point me to a CAML syntax error, or even a CAML error at all. I was not sure whether it was a security, performance, or code-based issue. At times it was also difficult because of the way SharePoint handles metadata: there are "Name", "DisplayName", and "StaticName" fields, and it was quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services (a small illustrative sketch follows below). So I decided on the template.
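
    For a sense of what that CAML looks like, here is a minimal, hypothetical sketch (not from the original project) that adds a single text field to an existing list through the same Lists.asmx proxy. The list title and field definition are placeholders, and the exact parameter types depend on how the service reference was generated:

        // Hypothetical sketch: add one text field to a list via Lists.asmx.
        // "Projects" and the field names are illustrative placeholders.
        XmlDocument doc = new XmlDocument();
        XmlElement newFields = doc.CreateElement("Fields");
        newFields.InnerXml =
            @"<Method ID=""1"">
                <Field Type=""Text"" DisplayName=""Project Code"" StaticName=""ProjectCode"" />
              </Method>";
        // UpdateList takes separate CAML fragments for new, updated and deleted
        // fields; a syntax error in any of them surfaces only as a generic fault.
        var result = _service.UpdateList(
            "Projects",  // list title
            null,        // list properties (unchanged)
            newFields,   // fields to add
            null,        // fields to update
            null,        // fields to delete
            null);       // list version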
    So my plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc… It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that the Template Builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int but only has access to the built-in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way. I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up, and then create lists based on that template. It can all be done by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None. After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template, both those for which an ID was required and those saved as STP files. Now, I don't want to bother with posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.

        public ServiceResult CreateProject(string name, string templateName, string projectId)
        {
            string siteurl = SPContext.Current.Site.Url;
            Guid webguid = SPContext.Current.Web.ID;
            using (SPSite site = new SPSite(siteurl))
            {
                using (SPWeb rootweb = site.RootWeb)
                {
                    SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                    ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
                } // SPWeb
            } // SPSite
            return _globalResult;
        }

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
            }
        }

        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
        {
            using (SPSite sitecollection = new SPSite(siteurl))
            {
                using (SPWeb web = sitecollection.AllWebs[webguid])
                {
                    action(web);
                }
            }
        }

    This code is actually some of the code I implemented for the service. There was a lot more I did on project creation, which I will cover in my next blog post. I used an Action<SPWeb> delegate to process the web; this allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application. We want SharePoint to do the same." This is something I have been doing for a little while now, but I do hope that SharePoint 2010 has more of an answer to this and addresses it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view. I took the opportunity to configure them with the appropriate settings. There were two settings that needed to be set on these web parts; one of them was a Project ID configured in the web part properties.
    The following code enhances and replaces the "Act_CreateProject" method above:

        private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
        {
            var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
            if (temp != null)
            {
                SPLimitedWebPartManager wpmgr = null;
                try
                {
                    Guid listGuid = targetsite.Lists.Add(name, "", temp);
                    SPList newList = targetsite.Lists[listGuid];
                    SPFolder rootFolder = newList.RootFolder;
                    rootFolder.Properties.Add(KEY, projectId);
                    rootFolder.Update();
                    if (rootFolder.ParentWeb != targetsite)
                        rootFolder.ParentWeb.Dispose();
                    if (!templateName.Contains("Natural"))
                    {
                        SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                        SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                        wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                        ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                        alldocfile.Update();
                    }
                    if (newList.ParentWeb != targetsite)
                        newList.ParentWeb.Dispose();
                    _globalResult = new ServiceResult(true, "Success", "Success");
                }
                catch (Exception ex)
                {
                    _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
                }
                finally
                {
                    if (wpmgr != null)
                    {
                        wpmgr.Web.Dispose();
                        wpmgr.Dispose();
                    }
                }
            }
        }

        private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
        {
            var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
            if (wp != null)
            {
                (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
                mgr.SaveChanges(wp);
            }
        }

    This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My DocLibExists method will now need to be changed, because the built-in web services are not set up to look at property bags; I had to write another method on the service to do the equivalent, but with IDs instead of names. The second thing you will notice about the code is the use of the SPLimitedWebPartManager. I have seen several examples online and read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did.
    CONCLUSION This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that the built-in web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.

    Read the article

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
    I just upgraded to the 10.10 RC earlier and had a few problems with graphics drivers (X didn't start), but I have remedied that now. When I run 'sudo apt-get install -f' I get this: will@UbuntuBox:/mnt/slax$ sudo apt-get install -f [sudo] password for will: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core The following NEW packages will be installed: libmono-wcf3.0-cil The following packages will be upgraded: openoffice.org-calc openoffice.org-core 2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 16 not fully installed or removed. Need to get 0B/32.5MB of archives. After this operation, 1,929kB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 201565 files and directories currently installed.) Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ... Unpacking replacement openoffice.org-calc ... xz: (stdin): Compressed data is corrupt dpkg-deb: subprocess <decompress> returned error exit status 1 dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack): short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so' dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core: openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1) openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed. dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack): conflicting packages - not installing openoffice.org-core Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error' dpkg-deb: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack): subprocess dpkg-deb --fsys-tarfile returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Any idea how I can get the broken packages fixed? Cheers, Will

    Read the article

  • Claims-based Identity Terminology

    - by kaleidoscope
    There are several terms commonly used to describe claims-based identity, and it is important to define these terms clearly. · Identity In terms of Access Control, the term identity refers to a set of claims made by a trusted issuer about the user. · Claim You can think of a claim as a bit of identity information, such as name, email address, age, and so on. The more claims your service receives, the more you’ll know about the user who is making the request. · Security Token The user delivers a set of claims to your service piggybacked along with his or her request. In a REST web service, these claims are carried in the Authorization header of the HTTP(S) request. Regardless of how they arrive, claims must somehow be serialized, and this is managed by security tokens. A security token is a serialized set of claims that is signed by the issuing authority. · Issuing Authority & Identity Provider An issuing authority has two main features. The first and most obvious is that it issues security tokens. The second is the logic that determines which claims to issue. This is based on the user’s identity, the resource to which the request applies, and possibly other contextual data such as time of day. This type of logic is often referred to as policy. There are many issuing authorities, including Windows Live ID, ADFS, PingFederate from Ping Identity (a product that exposes user identities from the Java world), Facebook Connect, and more. Their job is to validate some credential from the user and issue a token with an identifier for the user's account and possibly other identity attributes. These types of authorities are called identity providers (sometimes shortened to IdP). It’s ultimately their responsibility to answer the question, “who are you?” and ensure that the user knows his or her password, is in possession of a smart card, knows the PIN code, has a matching retinal scan, and so on. · Security Token Service (STS) A security token service (STS) is the technical term for the web interface in an issuing authority that allows clients to request and receive a security token according to interoperable protocols. This term comes from the WS-Trust standard and is often used in the literature to refer to an issuing authority. From a developer's point of view, the STS is the URL to use to request a token from an issuer. For more details please refer to the link http://www.microsoft.com/windowsazure/developers/dotnetservices/ Geeta, G
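
    To make the terminology concrete, here is a minimal sketch (illustrative only, using the WIF object model of that era; names are placeholders) of a claims-aware service inspecting the claims that an issuing authority placed in the security token:

        using System;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;

        public static class ClaimsInspector
        {
            // Dump the claims attached to the current request's identity.
            public static void DumpClaims()
            {
                var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
                if (identity == null) return; // not a claims-based identity

                foreach (Claim claim in identity.Claims)
                {
                    // e.g. ClaimType = ".../emailaddress", Value = "alice@example.com"
                    Console.WriteLine("{0} = {1} (issuer: {2})",
                        claim.ClaimType, claim.Value, claim.Issuer);
                }
            }
        }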

    Read the article

  • Access Windows Home Server from an Ubuntu Computer on your Network

    - by Mysticgeek
    If you’re a Windows Home Server user, there may be times when you need to access it from an Ubuntu machine on your network. Today we take a look at the process of accessing files on your home server from Ubuntu. Note: In this example we’re using Windows Home Server with PowerPack 3, and Ubuntu 10.04 running on a home network. Access WHS from Ubuntu To access files on your home server from Ubuntu, click on Places then select Network. You should now see your home server listed in the Network folder, as well as other Windows machines…double-click the server to access it. If you don’t see your server listed, you might need to go into Windows Network \ Workgroup and find it there. You’ll be prompted to enter the correct credentials for WHS, just as you would when accessing it from a Windows machine. It’s your choice whether you want the password remembered or not…make your selection and click Connect. Now you will see the available folders on your home server. In this example we signed in with Administrator credentials, so we have access to everything. Double-click on the folder share you want to access content from…here we see MS Office documents on the server. Or, here we take a look at a music folder with various MP3 files, which you can make Ubuntu play. You can access the files directly from the server, provided there is a Linux app that can handle the file type. In this example we opened a Word document in OpenOffice. Here we’re playing an MKV movie file from the server in Totem Movie Player. You can easily search for files on the server as well… If you want to store your Ubuntu files on WHS, it’s just a matter of dragging them to the WHS folder you want them in. If you’re using an Ubuntu computer on your home network and need to access files from Windows Home Server, luckily it’s a straightforward process. You’ll often have to find the correct software to use Windows files, but even that’s getting much easier with version 10.04.
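
    If you prefer the terminal, a rough command-line equivalent (with hypothetical server and share names, and assuming the smbfs package is installed on 10.04) is to mount the WHS share over CIFS:

        # Mount the WHS "Music" share at /mnt/whs (names are placeholders).
        sudo mkdir -p /mnt/whs
        sudo mount -t cifs //HOMESERVER/Music /mnt/whs -o username=Administrator
        # Unmount when you are done:
        sudo umount /mnt/whs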

    Read the article

  • Make Your Coworker’s Day in Ubuntu

    - by Trevor Bekolay
    It can be difficult to express your appreciation for your coworkers in person – what if they take it the wrong way, or think you’re fishing for a compliment of your own? If you use Ubuntu in your office, here’s a quick way to show your appreciation while avoiding the social pitfalls of face-to-face communication. Make sure their computer is locked An unlocked computer is a vulnerable computer. Vulnerable to malware, sure, but much more vulnerable to the local office prankster, who thinks it’s hilarious to take a screenshot of your desktop, change your background to that screenshot, then hide all of your desktop icons. These incidents have taught us that you should lock your computer when taking a break. Hopefully your coworker has learned the same lesson, and pressed Ctrl+Alt+L before stepping out for a coffee. Leave a carefully worded message Now is your opportunity to leave your message of appreciation on your coworker’s computer. Click on the Leave Message button and type away! Click on Save. Wait, possibly in the shadows If you sit near your coworker, then wait for them to return. If you sit farther away, then try to listen for their footsteps. Eventually they will return to their computer and enter their password to unlock it. Observe and smile Once they return to their desktop, they will be greeted with the message you left. Look to see if they appreciated the message, and if so, feel free to take credit. If they look annoyed, or press the Cancel button, continue on with your day like nothing happened. You may also try to slip into a conversation that you saw Jerry tinkering with their computer earlier. Conclusion Leaving your coworkers a nice message is easy and can brighten up their dull afternoon. We’re pretty sure that this method can only be used for good and not evil, but if you have any other suggestions of messages to leave, let us know in the comments!

    Read the article

  • How do I fix dependency problems with the kernel in apt?

    - by Jon
    When trying to install new packages, either manually or with muon, I get these errors: jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get install kupfer [sudo] password for jon: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kupfer : Depends: python-keybinder but it is not going to be installed Recommends: python-wnck but it is not going to be installed linux-headers-generic : Depends: linux-headers-3.2.0-20-generic but it is not installable linux-image-generic : Depends: linux-image-3.2.0-20-generic but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). jon@jon-desktop:~/Apps/mendeleydesktop-1.5-dev4-linux-x86_64/bin$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic linux-headers-generic linux-image-generic The following packages will be upgraded: linux-generic linux-headers-generic linux-image-generic 3 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 3 not fully installed or removed. Need to get 0 B/6,658 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-20-generic; however: Package linux-image-3.2.0-20-generic is not installed. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.20.22); however: Package linux-image-generic is not configured yet. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. dpkg: dependency problems prevent configuration of linux-headers-generic: linux-headers-generic depends on linux-headers-3.2.0-20-generic; however: Package linux-headers-3.2.0-20-generic is not installed. dpkg: error processing linux-headers-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-image-generic linux-generic linux-headers-generic E: Sub-process /usr/bin/dpkg returned an error code (1) As indicated above, I ran sudo apt-get -f install but it still tells me there are dependency issues.

    Read the article

  • How to Make Ubuntu Play MP3 Files

    - by Trevor Bekolay
    Because of licensing issues, Ubuntu is unable to play MP3s out of the box. We’ll show you how to play MP3s and other restricted file formats in about four mouse clicks. The philosophy behind Ubuntu is that software should be free and accessible to all. Whether MP3 and other file formats are free is unclear in many countries, so Ubuntu does not include software to read these file formats by default. Fortunately, it does include a package that installs the most commonly used file formats all at once, including a Flash plugin for Firefox. Note: These instructions are for Ubuntu 10.04. There are small differences for earlier versions of Ubuntu. Play MP3 Files Open the Ubuntu Software Center, found in the Applications menu. Click on View and ensure that All Software is selected. Type “restricted extras” into the search box at the top-right. Find the Ubuntu restricted extras package and click Install. Enter your password when prompted. Once the install is complete, close out of Ubuntu Software Center, and you’ll be able to play MP3 files! To confirm this, we’ll open up Rhythmbox, found in the Sound & Video section of the Applications menu. Our test MP3 plays with no problems! Note: If Rhythmbox tells you that MP3 plugins are not installed, close Rhythmbox and reopen it. You should not have to install anything extra through Rhythmbox. Despite this extra step, playing the most common audio and video file formats – including Flash videos on the internet – is simple. All the software comes installed; you just have to teach it how to read your files.
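
    If you prefer the terminal, the equivalent of the Software Center steps above is a single package install (same package as above, so the result should be identical):

        sudo apt-get update
        sudo apt-get install ubuntu-restricted-extras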

    Read the article

  • Requesting Delegation (ActAs) Tokens using WSTrustChannel (as opposed to Configuration Madness)

    - by Your DisplayName here!
    Delegation using the ActAs approach has some interesting security features: a security token service can make authorization and validation checks before issuing the ActAs token; combined with proof keys you get non-repudiation features; the ultimate receiver sees the original caller as the direct caller and can optionally traverse the delegation chain; and encryption and audience restriction can be tied down. Most samples out there (including the SDK sample) use the CreateChannelActingAs extension method from WIF to request ActAs tokens. This method builds on top of the WCF binding configuration, which may not always be suitable for your situation. You can also use the WSTrustChannel to request ActAs tokens. This allows direct and programmatic control over bindings and configuration and is my preferred approach. The method below requests an ActAs token based on a bootstrap token. The returned token can then be used directly with the CreateChannelWithIssuedToken extension method (a hypothetical usage sketch follows below).

        private SecurityToken GetActAsToken(SecurityToken bootstrapToken)
        {
            var factory = new WSTrustChannelFactory(
                new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
                new EndpointAddress(_stsAddress));
            factory.TrustVersion = TrustVersion.WSTrust13;

            factory.Credentials.UserName.UserName = "middletier";
            factory.Credentials.UserName.Password = "abc!123";

            var rst = new RequestSecurityToken
            {
                AppliesTo = new EndpointAddress(_serviceAddress),
                RequestType = RequestTypes.Issue,
                KeyType = KeyTypes.Symmetric,
                ActAs = new SecurityTokenElement(bootstrapToken)
            };

            var channel = factory.CreateChannel();
            var delegationToken = channel.Issue(rst);

            return delegationToken;
        }

    HTH
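
    For completeness, here is that usage sketch; the service contract, binding, and address are placeholders and must match how the target service is actually configured:

        // Sketch: call the back-end service with the ActAs token.
        var factory = new ChannelFactory<IBackendService>(
            new WS2007FederationHttpBinding(),
            new EndpointAddress(_serviceAddress));

        // WIF extension method that enables issued-token support on the factory.
        factory.ConfigureChannelFactory();

        var actAsToken = GetActAsToken(bootstrapToken);
        var proxy = factory.CreateChannelWithIssuedToken(actAsToken);
        proxy.DoWork(); // placeholder operation on the hypothetical contract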

    Read the article

  • F# and ArcObjects, Part 3

    - by Marko Apfel
    Today I played a little bit with IFeature sequences and piping data. The result was a calculator of the bounding box around all features in a feature class. Maybe a little bit dirty, but for learning it was OK. ;-)

        open System;;
        #I "C:\Program Files\ArcGIS\DotNet";;
        #r "ESRI.ArcGIS.System.dll";;
        #r "ESRI.ArcGIS.DataSourcesGDB.dll";;
        #r "ESRI.ArcGIS.Geodatabase.dll";;
        #r "ESRI.ArcGIS.Geometry.dll";;
        open ESRI.ArcGIS.esriSystem;;
        open ESRI.ArcGIS.DataSourcesGDB;;
        open ESRI.ArcGIS.Geodatabase;;
        open ESRI.ArcGIS.Geometry;

        let aoInitialize = new AoInitializeClass();;
        let status = aoInitialize.Initialize(esriLicenseProductCode.esriLicenseProductCodeArcEditor);;
        let workspacefactory = new SdeWorkspaceFactoryClass();;
        let connection = "SERVER=okul;DATABASE=p;VERSION=sde.default;INSTANCE=sde:sqlserver:okul;USER=s;PASSWORD=g";;
        let workspace = workspacefactory.OpenFromString(connection, 0);;
        let featureWorkspace = (box workspace) :?> IFeatureWorkspace;;
        let featureClass = featureWorkspace.OpenFeatureClass("Praxair.SFG.BP_L_ROHR");;
        let queryFilter = new QueryFilterClass();;
        let featureCursor = featureClass.Search(queryFilter, true);;

        let featureCursorSeq (featureCursor : IFeatureCursor) =
            let actualFeature = ref (featureCursor.NextFeature())
            seq {
                while (!actualFeature) <> null do
                    yield actualFeature
                    do actualFeature := featureCursor.NextFeature()
            };;

        let min x y = if x < y then x else y;;
        let max x y = if x > y then x else y;;

        let info s (x : IEnvelope) =
            printfn "%s xMin:{%f} xMax: {%f} yMin:{%f} yMax: {%f}" s x.XMin x.XMax x.YMin x.YMax;;

        let con (env1 : IEnvelope) (env2 : IEnvelope) =
            let env = (new EnvelopeClass()) :> IEnvelope
            env.XMin <- min env1.XMin env2.XMin
            env.XMax <- max env1.XMax env2.XMax
            env.YMin <- min env1.YMin env2.YMin
            env.YMax <- max env1.YMax env2.YMax
            info "Intermediate" env
            env;;

        let feature = featureClass.GetFeature(100);;
        let ext = feature.Extent;;

        let BoundingBox featureClassName =
            let featureClass = featureWorkspace.OpenFeatureClass(featureClassName)
            let queryFilter = new QueryFilterClass()
            let featureCursor = featureClass.Search(queryFilter, true)
            let featureCursorSeq (featureCursor : IFeatureCursor) =
                let actualFeature = ref (featureCursor.NextFeature())
                seq {
                    while (!actualFeature) <> null do
                        yield actualFeature
                        do actualFeature := featureCursor.NextFeature()
                }
            featureCursorSeq featureCursor
            |> Seq.map (fun feature -> (!feature).Extent)
            |> Seq.fold (fun (acc : IEnvelope) a -> con acc a) ext;;

        let boundingBox = BoundingBox "Praxair.SFG.BP_L_ROHR";;
        info "Ende-Info:" boundingBox;;

    Read the article

  • Fix: SqlDeploy Task Fails with NullReferenceException at ExtractPassword

    Still working on getting a TeamCity build working (see my last post). The latest exception is: C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(120, 5): error MSB4018: The "SqlDeployTask" task failed unexpectedly. System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.Data.Schema.Common.ConnectionStringPersistence.ExtractPassword(String partialConnection, String dbProvider) at Microsoft.Data.Schema.Common.ConnectionStringPersistence.RetrieveFullConnection(String partialConnection, String provider, Boolean presentUI, String password) at Microsoft.Data.Schema.Sql.Build.SqlDeployment.ConfigureConnectionString(String connectionString, String databaseName) at Microsoft.Data.Schema.Sql.Build.SqlDeployment.OnBuildConnectionString(String partialConnectionString, String databaseName) at Microsoft.Data.Schema.Build.Deployment.FinishInitialize(String targetConnectionString) at Microsoft.Data.Schema.Build.Deployment.Initialize(FileInfo sourceDbSchemaFile, ErrorManager errors, String targetConnectionString) at Microsoft.Data.Schema.Build.DeploymentConstructor.ConstructServiceImplementation() at Microsoft.Data.Schema.Extensibility.ServiceConstructor'1.ConstructService() at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute() at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult) This time searching yielded some good stuff, including this thread that talks about how to resolve this via permissions. The short answer is that the account your build server runs under needs to have the necessary permissions in SQL Server. You'll need to create a Login and then ensure at least the minimum rights are configured as described here: Required Permissions in Database Edition. Alternatively, you can just make your build server account an admin on the database (which is probably running on the same machine anyway), and at that point it should be able to do whatever it needs to. If you're certain the account has the necessary permissions, but you're still getting the error, the problem may be that the account has never logged into the build server. In this case, there won't be any entry in the HKCU hive in the registry, which the system checks for permissions (see this thread). The solution in this case is quite simple: log into the machine (once is enough) with the build server account. Then open Visual Studio (thanks Brendan for the answer in this thread). Summary: Make sure the build service account has the necessary database permissions. Make sure the account has logged into the server so it has the necessary registry hive info. Make sure the account has run Visual Studio at least once so its settings are established. In my case I went through all 3 of these steps before I resolved the problem.
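
    As a starting point, a sketch of the SQL for the permissions route might look like this; the account and database names are placeholders, and db_owner is the blunt "just make it an admin" option rather than the minimum rights described in the linked documentation:

        CREATE LOGIN [DOMAIN\tcbuildsvc] FROM WINDOWS;
        GO
        USE [MyTargetDatabase];
        GO
        CREATE USER [DOMAIN\tcbuildsvc] FOR LOGIN [DOMAIN\tcbuildsvc];
        -- db_owner grants everything in this database; prefer the narrower
        -- permissions from "Required Permissions in Database Edition" if possible.
        EXEC sp_addrolemember N'db_owner', N'DOMAIN\tcbuildsvc';
        GO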

    Read the article

  • Using MAC Authentication for simple Web API’s consumption

    - by cibrax
    For simple scenarios of Web API consumption where identity delegation is not required, traditional HTTP authentication schemes such as basic, certificates, or digest are the most used nowadays. All these schemes rely on sending the caller's credentials, or some representation of them, in every request message as part of the Authorization header, so they are prone to phishing attacks if they are not correctly secured at the transport level with HTTPS. In addition, most client applications typically authenticate two different things: the caller application and the user consuming the API on behalf of that application. In most cases, the scheme is simplified by using a single set of username and password for authenticating both, making it necessary to store those credentials temporarily somewhere in memory. The truth is that you can use two different identities: one for the user running the application, which you might authenticate just once during the first call when the application is initialized, and another identity for the application itself, which you use on every call. Some cloud vendors like Windows Azure or Amazon Web Services have adopted a scheme to authenticate the caller application based on a Message Authentication Code (MAC) generated with a symmetric algorithm using a key known to the two parties, the caller and the Web API. The caller must include a MAC as part of the Authorization header, created from different pieces of information in the request message such as the address, the host, and some other headers. The Web API can authenticate the caller by using the key associated with it and validating the attached MAC in the request message. That way, no credentials are sent as part of the request message, so there is no way for an attacker to intercept the message and get access to those credentials. Anyway, this scheme also suffers from some deficiencies that can enable attacks. For example, brute force can still be used to infer the key used for generating the MAC and impersonate the original caller. This can be mitigated by renewing keys within a relatively short period of time. This scheme, like any other, can be complemented with transport security. Eran Hammer, one of the brains behind OAuth, has recently published a specification of a protocol based on MAC for HTTP authentication called Hawk. The initial version of the spec is available here. A curious fact is that the specification per se does not exist; the specification itself is the code that Eran initially wrote using node.js. In that implementation, you can associate a key with a user, so once the MAC has been verified on the Web API, the user can be inferred from that key. A timestamp is also used to avoid replay attacks. As a pet project, I decided to port that code to .NET using ASP.NET Web API, which is also available on GitHub at https://github.com/pcibraro/hawknet. Enjoy!
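
    To illustrate the general idea (and only the idea: the normalized string below is a made-up format, not the actual Hawk or Azure/AWS canonicalization rules), a MAC over a few request parts can be computed like this in .NET:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class RequestMac
        {
            // Compute an HMAC-SHA256 over a simplified normalized request string.
            public static string Compute(string key, string method, string host,
                                         string path, long timestamp)
            {
                string normalized = string.Format("{0}\n{1}\n{2}\n{3}",
                    timestamp, method.ToUpperInvariant(), host.ToLowerInvariant(), path);

                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(normalized));
                    // The Base64 MAC (plus a key id and the timestamp) would go
                    // into the Authorization header; the key itself never does.
                    return Convert.ToBase64String(mac);
                }
            }
        }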

    Read the article

  • How to Assign a Static IP to an Ubuntu 10.04 Desktop Computer

    - by Mysticgeek
    If you have a home network with several computers, assigning them static IP addresses can make troubleshooting easier. Today we take a look at switching from DHCP to a static IP in Ubuntu. Assign a Static IP Using static IPs prevents address conflicts between machines and can allow easier access to them. If you have a small home network and are satisfied with the machines getting their IP addresses automatically via DHCP, there won’t be anything gained by using static addresses. Using static IPs isn’t necessarily for the average user, but if you’re a geek who wants to know the address assigned to each machine, it can allow for faster troubleshooting. To change your Ubuntu machine to a static IP, go to System \ Preferences \ Network Connections. In our example, we’re on a wired system, so click on the Wired tab, then select Auto eth0 and click on Edit. Select the IPv4 Settings tab, change Method to Manual, and click the Add button. Then type in the static IP address, subnet mask, DNS servers, and default gateway, and click Apply when you’re finished. Make sure to hit Enter after typing in the default gateway, otherwise it will revert back to 0.0.0.0. You’ll need to enter your admin password before the changes go into effect. To verify the changes have been made successfully, launch a Terminal session and type ifconfig at the command prompt, or follow these directions. You also might want to ping the address from another machine to make sure everything is communicating. If you want to assign a static IP to your Windows machines, check out our article on how to assign a static IP on Windows systems (make sure to browse the comments as our readers have some good suggestions). Whether you have a small office or a home network set up with a server and several machines, using a static IP on each device can help you manage them easily. Again, it isn’t for everyone, as it really depends on how your network is set up and the way you use it.
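
    For reference, the command-line alternative for those who prefer editing config files looks like this (example addresses only; note that entries in /etc/network/interfaces take the interface out of Network Manager's control):

        sudo nano /etc/network/interfaces
        # ...add a stanza along these lines:
        #   auto eth0
        #   iface eth0 inet static
        #       address 192.168.1.50
        #       netmask 255.255.255.0
        #       gateway 192.168.1.1
        sudo /etc/init.d/networking restart   # 10.04-era restart command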

    Read the article

  • Quickie Guide Getting Java Embedded Running on Raspberry Pi

    - by hinkmond
    Gary C. and I did a Bay Area Java User Group presentation on how to get Java Embedded running on an RPi. See: here. But if you want the quickie guide on how to get Java up and running on the RPi, then follow these steps (which I'm doing right now as we speak, since I got my RPi in the mail on Monday. Woo-hoo!!!). So, follow along at home as I do the same steps here on my board... 1. Download the Win32DiskImager if you are on Windows, or use dd on a Linux PC: https://launchpad.net/win32-image-writer/0.6/0.6/+download/win32diskimager-binary.zip 2. Download the RPi Debian Wheezy image from here: http://files.velocix.com/c1410/images/debian/7/2012-08-08-wheezy-armel/2012-08-08-wheezy-armel.zip 3. Insert a blank 4GB SD card into your Windows or Linux PC. 4. Use either Win32DiskImager or Linux dd to burn the unzipped image from #2 to the SD card. 5. Insert the SD card into your RPi. Connect an Ethernet cable from your RPi to your network. Connect the RPi power adapter. 6. The RPi will boot onto your network. Find its IP address using Windows Wireshark or Linux: sudo tcpdump -vv -ieth0 port 67 and port 68 7. ssh to your RPi: ssh <ip_addr_rpi> -l pi <Password: "raspberry"> 8. Download Java SE Embedded: http://www.oracle.com/technetwork/java/embedded/downloads/javase/index.html NOTE: First click accept, then choose the first bundle in the list: ARMv6/7 Linux - Headless EABI, VFP, SoftFP ABI, Little Endian - ejre-7u6-fcs-b24-linux-arm-vfp-client_headless-10_aug_2012.tar.gz 9. scp the bundle from #8 to your RPi: scp <ejre-bundle> pi@<ip_addr_rpi> 10. mkdir /usr/local, untar the bundle from #9 and rename (move) the ejre1.7.0_06 directory to /usr/local/java (concrete commands are sketched below). That's it! You are ready to roll with Java Embedded on your RPi. Hinkmond
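
    As a sketch, steps 9 and 10 look roughly like this when run on the RPi (the bundle filename is from step 8; adjust the path if you copied it elsewhere, and the extracted directory name may differ for other bundle versions):

        sudo mkdir -p /usr/local
        sudo tar xzf ejre-7u6-fcs-b24-linux-arm-vfp-client_headless-10_aug_2012.tar.gz -C /usr/local
        sudo mv /usr/local/ejre1.7.0_06 /usr/local/java
        # Sanity check:
        /usr/local/java/bin/java -version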

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted. 2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react. So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool.
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
NORECOVERY, STATS=1, REPLACE
GO

RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY
GO

Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, the original post includes screenshots of my 8GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used by the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.
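Since the benefit is magnified the more databases you mount to the same backup file, mounting a second virtual copy is just another RESTORE against the same .bak. A minimal sketch, following the article's file layout; the _Virtual2 names are illustrative, chosen to match the same naming pattern:

RESTORE DATABASE WidgetProduction_Virtual2
FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual2.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual2.vldf',
NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Virtual2 WITH RECOVERY
GO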

Read the article

• SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
[Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject and not everyone knows much about it. The fact of the matter is there is very little information available on this subject online, so when a very common question related to AlwaysOn comes up, people get confused. In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to planned and unplanned Availability Group failovers. Linchpin People are database coaches and wellness experts for a data-driven world. Read about John's experience in his own words.

Whenever a disaster occurs it will be a stressful scenario, regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology, or the first time you are going through a disaster without a proper run book. Today, we're going to help you establish a run book for creating a planned failover with availability groups. To keep today's session simple, we're going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover. We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica Availability Group where each replica is in another location; therefore, we will be covering asynchronous (non-automatic) failover. The following is a breakdown of the availability group utilized today.

Seeing the following screen might be scary the first time you come across an unplanned failover. It looks like the test database used in this Availability Group is not functional, and it currently isn't. The database status is "not synchronizing", which makes sense because the primary replica went down, so it couldn't synchronize. With that said, we can still fail over and make it functional while we troubleshoot why we lost our primary replica. To start, we right-click on the availability group that needs to be restarted and select Failover. This brings up the following wizard, which will walk you through several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages simply because we are in asynchronous commit mode and cannot guarantee 'no data loss' when we fail over. Just in case you missed it, you get another screen warning you about potential data loss because we are in asynchronous mode. Next we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas, so this is trivial. In order to fail over, it's required to connect to the replica that will become primary. The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action will cause data loss, as we're using asynchronous commit mode due to the distance between the instances used for disaster recovery. Finally, once the failover is completed you will see the following screen. If you followed along this far, you might be wondering what T-SQL scripts are generated by clicking through all the sections of the wizard. If you have used Database Mirroring in the past you might be surprised.
It’s not too different, which makes sense because the data is being replicated via SQL Server endpoints, just like the good old database mirroring. Now we're going to take a look at how to do a failover with just T-SQL. First, we're going to need to open a new query window and run our query in SQLCMD mode. Just in case you haven't used SQLCMD mode before, we show you how to enable it below. Now you can run the following statement. Notice, we connect to the replica we want to become primary after failover and specify to force failover, allowing data loss. We can use the same script to fail back when our primary instance comes back online.

-- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:Connect SQL2012PROD1
ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
GO

Are your servers running at optimal speed or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
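A follow-up step worth adding to the run book: after a forced failover, data movement on the secondary databases is suspended until you explicitly resume it. A minimal sketch; the replica name SQL2012PROD2 and the database name testdb are hypothetical placeholders, not names taken from the article:

-- Run in SQLCMD mode once the old primary is back online as a secondary.
:Connect SQL2012PROD2
ALTER DATABASE [testdb] SET HADR RESUME;
GO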

    Read the article

  • Changing the default installation path to a newly installed hard disk

    - by mgj
Hi, I am currently working on a dual-booted PC. I am using Windows XP and Ubuntu 10.04 Lucid Lynx, released in April 2010. The partition allocated to Ubuntu is almost exhausted. Current disk allocations on the PC for the Ubuntu OS look like this:

bodhgaya@pc146724-desktop:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             8.6G  8.0G  113M  99% /
none                  998M  268K  998M   1% /dev
none                 1002M  580K 1002M   1% /dev/shm
none                 1002M  100K 1002M   1% /var/run
none                 1002M     0 1002M   0% /var/lock
none                 1002M     0 1002M   0% /lib/init/rw
/dev/sda1              25G   16G  9.8G  62% /media/C
/dev/sdb1              37G  214M   35G   1% /media/ubuntulinuxstore
bodhgaya@pc146724-desktop:~$ cd /tmp

I am trying to mount a new 40GB hard disk (/dev/sdb1, shown below) alongside my existing Ubuntu system to overcome hard disk space issues. I referred to the following tutorial to mount a new hard disk onto the system: http://www.smorgasbord.net/how-to-in...untu-linux%20/ I was able to successfully mount this hard disk for the Ubuntu OS. I have this new hard disk set up at the /media/ubuntulinuxstore directory. The current partitioning on my system looks like this:

bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$ sudo fdisk -l
[sudo] password for bodhgaya:

Disk /dev/sda: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x446eceb5

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2        3264    26210047+   7  HPFS/NTFS
/dev/sda2            3265        4385     9004432+  83  Linux
/dev/sda3            4386        4863     3839535   82  Linux swap / Solaris

Disk /dev/sdb: 40.0 GB, 40000000000 bytes
255 heads, 63 sectors/track, 4863 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfa8afa8a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        4862    39053983+   7  HPFS/NTFS

Now, I have a concern about the "location" where new software will be installed. Generally software is installed via the terminal, and by default a fixed path is used for the post-installation setup files (I am talking in the context of the drive). This is like the typical case of Windows, where software is installed on the C: drive by default. These days people customize their installations to whichever drive best serves their purpose (generally based on availability of hard disk space). I am trying to figure out how to do the same for Ubuntu. As we all know, most software is installed via commands given from the terminal. My roadblock is how to redirect the default path, used by the terminal, where files get installed, onto this new hard disk (one possible approach is sketched below). This, if done, would help me overcome the space constraints I am currently facing on the partition where Ubuntu was initially installed. I would also save the time of having to format my system and reinstall Ubuntu and other software all over again. Please help me with this; your suggestions are much appreciated.
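For what it's worth, apt has no per-package install drive the way Windows installers do; packages always unpack to fixed filesystem locations under /. The practical workaround is to relocate one large directory (commonly /home or /opt) onto the new disk and mount it there at boot. A sketch of that approach, assuming /dev/sdb1 can be wiped and reformatted as ext4 (the NTFS partition shown above cannot hold Linux file permissions); double-check the device name first, since mkfs destroys everything on it:

sudo mkfs.ext4 /dev/sdb1                  # WARNING: erases everything on the new disk
sudo mkdir /mnt/newhome
sudo mount /dev/sdb1 /mnt/newhome
sudo rsync -avx /home/ /mnt/newhome/      # copy existing home directories across
echo '/dev/sdb1  /home  ext4  defaults  0  2' | sudo tee -a /etc/fstab
sudo reboot                               # /home now lives on the 40GB disk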

    Read the article

  • mysql completely removing

    - by Dmitry Teplyakov
I broke my MySQL and now I want to completely reinstall it. I tried:

$ sudo apt-get install --reinstall mysql-server
$ sudo apt-get remove --purge mysql-client mysql-server

But I always see a popup proposing to change the root password; I change it, and then get an error saying it can't be changed.

$ sudo apt-get remove --purge mysql-client mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package mysql-client is not installed, so not removed
Package mysql-server is not installed, so not removed
The following packages were automatically installed and are no longer required:
  libmygpo-qt1 libqtscript4-network libqtscript4-gui libtag-extras1 libqtscript4-sql libqtscript4-xml
  amarok-utils amarok-common libqtscript4-uitools liblastfm0 libloudmouth1-0 libqtscript4-core
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ...
121114 19:04:03 [Note] Plugin 'FEDERATED' is disabled.
121114 19:04:03 InnoDB: The InnoDB memory heap is disabled
121114 19:04:03 InnoDB: Mutexes and rw_locks use GCC atomic builtins
121114 19:04:03 InnoDB: Compressed tables use zlib 1.2.3.4
121114 19:04:03 InnoDB: Initializing buffer pool, size = 128.0M
121114 19:04:03 InnoDB: Completed initialization of buffer pool
InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
InnoDB: 0 pages (rounded down to MB) than specified in the .cnf file:
InnoDB: initial 640 pages, max 0 (relevant if non-zero) pages!
121114 19:04:03 InnoDB: Could not open or create data files.
121114 19:04:03 InnoDB: If you tried to add new data files, and it failed here,
121114 19:04:03 InnoDB: you should now edit innodb_data_file_path in my.cnf back
121114 19:04:03 InnoDB: to what it was, and remove the new ibdata files InnoDB created
121114 19:04:03 InnoDB: in this failed attempt. InnoDB only wrote those files full of
121114 19:04:03 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
121114 19:04:03 InnoDB: remove old data files which contain your precious data!
121114 19:04:03 [ERROR] Plugin 'InnoDB' init function returned error.
121114 19:04:03 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
121114 19:04:03 [ERROR] Unknown/unsupported storage engine: InnoDB
121114 19:04:03 [ERROR] Aborting
121114 19:04:03 [Note] /usr/sbin/mysqld: Shutdown complete
start: Job failed to start
invoke-rc.d: initscript mysql, action "start" failed.
dpkg: error processing mysql-server-5.5 (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 mysql-server-5.5
E: Sub-process /usr/bin/dpkg returned an error code (1)

It is good for me that I have no important databases, but..
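The log shows the actual blocker: the leftover ./ibdata1 InnoDB data file no longer matches my.cnf, so mysqld cannot start and the package cannot finish configuring. Since there are no databases worth keeping, a full purge that also deletes the stale data files should clear it. A sketch of that commonly recommended sequence; note it permanently deletes all MySQL data and configuration:

sudo apt-get remove --purge mysql-server mysql-server-5.5 mysql-client mysql-common
sudo rm -rf /var/lib/mysql /etc/mysql     # remove the mismatched ibdata1 and old config
sudo apt-get autoremove
sudo apt-get update
sudo apt-get install mysql-server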

    Read the article

  • Oracle SOA Security for OUAF Web Services

    - by Anthony Shorten
With the ability to use Oracle SOA Suite 11g with the Oracle Utilities Application Framework based products, an additional consideration needs to be configured to ensure correct integration. That additional consideration is security. By default, SOA Suite propagates any credentials from the calling application through to the interfacing applications. In most cases, this behavior is not appropriate, as the calling application may use different credential stores and some interfaces are “disconnected” from a calling application (for example, a file based load using the File Adapter). These situations require that the Web Service calls to the Oracle Utilities Application Framework based products have their own valid credentials. To do this, the credentials must be attached at design time or at run time to provide the necessary credentials for the call. There are a number of techniques that can be used to do this:

At design time, when integrating a Web Service from an Oracle Utilities Application Framework based product, you can attach the security policy “oracle/wss_username_token_client_policy” in the composite.xml view. In this view, select the Web Service you want to attach the policy to, right click to display the context menu, select “Configure WS Policies”, and pick the above policy from the list. If you are using SSL then you can use “oracle/wss_username_token_over_ssl_client_policy” instead.

At design time, you can also specify the credential key (csf-key) associated with the above policy by selecting the policy and clicking “Edit Config Override Properties”. You name the key appropriately. Every time the SOA components are deployed, the credential configuration is also sent.

You can also do this after deployment, or what I call at “runtime”, by specifying the policy and credential key in the Fusion Middleware Control. Refer to the Fusion Middleware Control documentation on how to do this.

To complete the configuration you need to add a map, and the key specified earlier, to the credential store in the Oracle WebLogic instance used for Oracle SOA Suite. From Fusion Middleware Control, you do this by selecting the domain the SOA Suite is installed in and selecting “Credentials” from the context menu. You now need to add the credentials by adding the map “oracle.wsm.security” (the name is IMPORTANT) and creating a key with the necessary valid credentials. The example below added a key called “mdm.key”. The name I used is for example only; you can name the key anything you like, as long as it corresponds to the key you specified in the design time component. Note: I used SYSUSER as example credentials, but in real life you would use other credentials, as SYSUSER is not appropriate for production use. This key can be reused for other Oracle Utilities Application Framework Web Service integrations, or you can use other keys for individual Web Service calls. Once the key is created and the SOA Suite components are deployed, the transactions should be able to be called as necessary. If you need to change the password for the credentials, it can be done using the Fusion Middleware Control functionality.
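If you script your environments rather than clicking through Fusion Middleware Control, the same map and key can be created with WLST's createCred command. A minimal sketch; the admin server URL is a placeholder, and as noted above, SYSUSER and mdm.key are example values only:

connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder admin server URL
# The map name must be oracle.wsm.security for OWSM policies to find the key.
createCred(map='oracle.wsm.security', key='mdm.key',
           user='SYSUSER', password='SYSUSER00',
           desc='Credentials for OUAF web service calls')
listCred(map='oracle.wsm.security', key='mdm.key')       # verify the entry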

    Read the article

  • SQL Saturday 43 in Redmond

    - by AjarnMark
I attended my first SQLSaturday a couple of days ago, SQLSaturday #43 in Redmond (at Microsoft). I got there really early, primarily because I forgot how fast I can get there from my home when nobody else is on the road. On a weekday in rush hour traffic, that would have taken two hours. I gave myself 90 minutes, and actually got there in about 45. Crazy! I made the mistake of going to the main Microsoft campus, but that's not where the event was being held. Instead it was in a big Microsoft conference center on the other side of the highway. Fortunately, I had the address with me and quickly realized my mistake. When I got back on track, I noticed that there were bright yellow signs out on the street corner that looked like they said they were for SOL Saturday, which actually was appropriate since it was the sunniest day around here in a long time.

Since I was there so early, the registration was just getting set up, so I found Greg Larsen, who was coordinating things, and offered to help. He put me to work with a group of people organizing the pre-printed raffle tickets and stuffing swag bags.

I had never been to a SQLSaturday before this one, so I wasn't exactly sure what to expect even though I have read about a few on some blogs. It makes sense that each one will be a little bit different since they are almost completely volunteer driven, and the whole concept is still in its early stages. I have been to the PASS Summit for the last several years, and was hoping for a smaller version of that. Now, it's not really fair to compare one free day of training run entirely by volunteers with a multi-day, $1,000+ event put on under the direction of a professional event management company. But there are some parallels.

At this SQLSaturday, there was no opening general session, just coffee and pastries in the common area / expo hallway and straight into the first group of sessions. I don't know if that was because there was no single room large enough to hold everyone, or for other reasons. This worked out okay, but the organization guy in me would have preferred to have even a 15 minute welcome message from the organizers with a little overview of the day. Even something as simple as, "Thanks to persons X, Y, and Z for helping put this together…Sessions will start in 20 minutes and are all in rooms down this hallway…the bathrooms are on the other side of the conference center…lunch today is pizza and we would like to thank sponsor Q for providing it." It doesn't need to be much, certainly not a full-blown Keynote like at the PASS Summit, but something to use as a rallying point to pull everyone together and get the day off to an official start would be nice. Again, there may have been logistical reasons why that was not feasible here. I'm just putting out my thoughts for other SQLSaturday coordinators to consider.

The event overall was great. I believe that there were over 300 in attendance, and everything seemed to run smoothly. At least from an attendee's point of view, where there were plenty of muffins in the morning and pizza in the afternoon, with plenty of pop to drink. And hey, if you've got the food and drink covered, a lot of other stuff could go wrong and people will be very forgiving. But as I said, everything appeared to run pretty smoothly, at least until Buck Woody showed up in his Oracle shirt. Other than that, the volunteers did a great job!

I was a little surprised by how few people I know from my own backyard.
It makes sense if you really think about it, given how many companies must be using SQL Server around here. I guess I just got spoiled coming into the PASS Summit with a few contacts that I already knew would be there. Perhaps I have been spending too much time with too few people at the Summits and I need to step out and meet more folks. Of course, it is also different since the Summit is the big national event and a number of the folks I know are spread out across the country, so the Summit is the only time we're all in the same place at the same time. I did make a few new contacts at SQLSaturday, and bumped into a couple of people that I knew (and a couple others that I only knew from Twitter, and didn't even realize that they were here in the area).

Other than the sheer entertainment value of Buck Woody's session, the one that was probably of greatest value for me was a quick introduction to PowerShell. I have not done anything with it yet, but I think it will be a good tool to use to implement my plans for automated database recovery testing. I saw just enough at the session to take away some of the intimidation factor, and I am getting ready to jump in and see what I can put together in the next few weeks. And that right there made the investment worthwhile. So I encourage you, if you have the opportunity to go to a SQLSaturday event near you, go for it!

    Read the article

  • Manage Your Favorite Social Accounts in Chrome and Iron with Seesmic

    - by Asian Angel
Are you looking for a way to manage your Twitter, Facebook, Google Buzz, LinkedIn, and Foursquare accounts all in one place? Using the Seesmic Web App for Chrome and Iron you can access your favorite accounts and manage them in a single, simple-to-use interface. A feature that we loved from the start was the ability to access Twitter without creating a special Seesmic account. And in these days of multiple accounts who needs another one to complicate things up? All that you need to do is to sign in with your user name/e-mail along with your password. You do have to authorize access for Seesmic to connect with your account but the whole process (login & authorization) is handled in a single window instance. Now on to a quick look at some of the UI features… The sidebar allows you to add additional columns to the main interface, set your favorite location for Trends, and tie in additional social services as desired. You can also access additional options and controls in the upper right corner. When you are ready to start tweeting click in the blank at the top and enter your text, etc. in the convenient drop-down window that appears. Another nice perk is the ability to switch to a black and grey theme if the white is too bright for your needs. The Seesmic web app provides a simple-to-use, highly efficient way to manage your Twitter account and other favorite social services in a single tab interface. Seesmic [Chrome Web Store]

    Read the article

• Unable to update/install any files [closed]

    - by Surya
Possible Duplicate: "Problem with MergeList" error when trying to do an update

I just installed Ubuntu 12.04 on my Lenovo G570 laptop. The first time, I got an error during installation (I don't know its cause); I restarted the system and the next time it went well. After installing, the problems started. There was an error with "Language recognition" and I tried to fix it, but that didn't work. I then tried to install powertop to check the status of power management.

At the terminal: sudo apt-get install powertop

This is the error I got:

surya@surya-Lenovo-G570:~$ sudo apt-get powertop install
[sudo] password for surya:
E: Invalid operation powertop
surya@surya-Lenovo-G570:~$ sudo apt-get install powertop
Reading package lists... Error!
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
E: The package lists or status file could not be parsed or opened.
surya@surya-Lenovo-G570:~$ ^C
surya@surya-Lenovo-G570:~$ ^C
surya@surya-Lenovo-G570:~$ ^C
surya@surya-Lenovo-G570:~$

I downloaded the Google Chrome .deb package and tried to install it, but that's not working either: the Software Center opens and just doesn't load. There was a notification on the status bar which says:

An error occurred, please run the package manager from the right-click menu ...
E: Encountered a section with no Package: header
E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages

"Copy & Paste" from the terminal is not really working either... When I press Ctrl + C, it shows ^C on the terminal but nothing is copied.

The most important error: I am unable to see a "chip" icon on the status bar with which to install the proprietary drivers for my ATI card... The interesting part is, powertop worked well on the live CD and it even detected my ATI card.

Update: When I opened "Software Up to Date", it showed this error:

Could not initialize the package information. An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.'

My laptop details: Lenovo G570; Intel 2nd Gen i5 processor; 4GB DDR3 RAM; Intel on-board graphics + AMD Radeon HD 6370M 1GB graphics.

I need help ASAP.
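As the linked duplicate suggests, the usual fix for this MergeList error is to delete the corrupted package list cache and let apt rebuild it. A sketch of that commonly recommended sequence:

sudo rm -vf /var/lib/apt/lists/*          # remove the corrupted list files
sudo apt-get update                       # rebuild the package lists from the repositories
sudo apt-get install powertop             # then retry the original install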

    Read the article

  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • JRockit Virtual Edition Debug Key

    - by changjae.lee
There are a few keys that can help with debugging of the JRockit VE environment in the console. You can type each key in the JRVE console to see what's happening under the hood.

key '1' : System information
key '5' : Enable shutdown
key '7' : Start JRockit Management Server (port 7091)
key '8' : Statistics Counters
key '9' : Full Thread Dump
key '0' : Status of Debug-keys

Below is the sample output from each key.

Debug-key '1' pressed
============ JRockitVE System Information ============
JRockitVE version  : 11.1.1.3.0-67-131044
Kernel version     : 6.1.0.0-97-131024
JVM version        : R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32
Hypervisor version : Xen 3.4.0
Boot state         : 0x007effff
Uptime             : 0 days 02:04:31
CPU                : uniprocessor @2327 Mhz
CPU usage          : 0% ctx/s: 285 preempt/s: 0 migrations/s: 0
Physical pages     : 82379/261121 (321/1020 MB)
Network info       : 10.179.97.64 (10.179.97.64/255.255.254.0)
GateWay            : 10.179.96.1
MAC address        : 00:16:3e:7e:dc:78
Boot options       :
  vfsCwd        : /application/user_projects/domains/wlsve_domain
  mainArgs      : java -javaagent:/jrockitve/services/sshd/sshd.jar -cp /jrockitve/jrockit/lib/tools.jar:/jrockitve/lib/common.jar:/application/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/application/wlserver_10.3/server/lib/weblogic.jar -Dweblogic.Name=WlsveAdmin -Dweblogic.Domain=wlsve_domain -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 -Dweblogic.management.GenerateDefaultConfig=true weblogic.Server
  consLog       : /jrockitve/log/jrockitve.log
  mounts        : ext2 / dev0;
  posixLocale   : en_US
  posixTimezone : Asia/Seoul
  posixEncoding : ISO-8859-1
Local disk         : Size: 1024M, Used: 728M, Free: 295M
========================================================

Debug-key '5' pressed
Shutdown enabled.

Debug-key '7' pressed
[JRockit] Management server already started. Ignoring request.
Debug-key '8' pressed
Starting stat recording

Debug-key '8' pressed
========= Statistics Counters for the last second =========
dev.eth0_rx.cnt                : 22 packets
dev.eth0_rx_bytes.cnt          : 2704 bytes
dev.net_interrupts.cnt         : 22 interrupts
evt.timer_ticks.cnt            : 123 ticks
hyper.priv_entries.cnt         : 144 entries
schedule.context_switches.cnt  : 271 switches
schedule.idle_cpu_time.cnt     : 997318849 nanoseconds
schedule.idle_cpu_time_0.cnt   : 997318849 nanoseconds
schedule.total_cpu_time.cnt    : 1000031757 nanoseconds
time.system_time.cnt           : 1000 ns
time.timer_updates.cnt         : 123 updates
time.wallclock_time.cnt        : 1000 ns
=======================================

Debug-key '9' pressed
===== FULL THREAD DUMP ===============
Fri Jun  4 08:22:12 2010
BEA JRockit(R) R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32

"Main Thread" id=1 idx=0x4 tid=1 prio=5 alive, in native, waiting
-- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
    at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
    at java/lang/Object.wait(J)V(Native Method)
    at java/lang/Object.wait(Object.java:485)
    at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:919)
    ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
    at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:479)
    at weblogic/Server.main(Server.java:67)
    at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
-- end of trace

"(Signal Handler)" id=2 idx=0x8 tid=2 prio=5 alive, in native, daemon

Open lock chains
================
Chain 1:
"ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=23 idx=0x50 tid=20 waiting for java/lang/String@0x630c588 held by:
"ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=24 idx=0x54 tid=21 (active)

===== END OF THREAD DUMP ===============

Debug-key '0' pressed
Debug-keys enabled

Happy Cloud Walking :)

    Read the article

< Previous Page | 428 429 430 431 432 433 434 435 436 437 438 439  | Next Page >