Search Results

Search found 16218 results on 649 pages for 'compiler errors'.


  • nconf nagios config no services defined

    - by user1508056
    I've set up Nagios Core on OS X 10.7 Server via MacPorts fine. It seems to load fine, the sample config files all copied over to /opt/local/etc/nagios/objects/ fine, and they are specified correctly in the nagios.cfg file. I then installed nconf manually and got it running without much of a fight. Then I clicked on "Generate Nagios config" in nconf and get 1 warning and 4 errors. When I expand the error box, here is what I see:

        Nagios Core 3.5.0
        Copyright (c) 2009-2011 Nagios Core Development Team and Community Contributors
        Copyright (c) 1999-2009 Ethan Galstad
        Last Modified: 03-15-2013
        License: GPL
        Website: http://www.nagios.org
        Reading configuration data...
        Read main config file okay...
        Read object config files okay...
        Running pre-flight check on configuration data...
        Checking services... Error: There are no services defined! Checked 0 services.
        Checking hosts... Error: There are no hosts defined! Checked 0 hosts.
        Checking host groups... Checked 0 host groups.
        Checking service groups... Checked 0 service groups.
        Checking contacts... Error: There are no contacts defined! Checked 0 contacts.
        Checking contact groups... Checked 0 contact groups.
        Checking service escalations... Checked 0 service escalations.
        Checking service dependencies... Checked 0 service dependencies.
        Checking host escalations... Checked 0 host escalations.
        Checking host dependencies... Checked 0 host dependencies.
        Checking commands... Checked 0 commands.
        Checking time periods... Checked 0 time periods.
        Checking for circular paths between hosts...
        Checking for circular host and service dependencies...
        Checking global event handlers...
        Checking obsessive compulsive processor commands...
        Checking misc settings...
        Warning: Nothing specified for illegal_macro_output_chars variable!
        Total Warnings: 1
        Total Errors: 3

    I've tried several different things (played with cache settings, changed file permissions/ownership, edited some config files manually, etc.) but nothing gets me past this step. The thing is, when I run 'sudo nagios -v /opt/local/etc/nagios/nagios.cfg' the output shows it is reading a number of services, a localhost, and a contact in the .cfg files... so I'm pretty confident those are OK and the problem is that nconf isn't reading the correct .cfg files or something like that. Any ideas what to double-check? I did lots of googling and found nothing on this specific issue -- so either I'm special (I'm not) or I am overlooking something really simple. The path to the nagios binary is listed as /opt/local/bin/nagios, if that matters. Also, all the nagios files are owned by nagios:nagios, whereas the nconf files are owned by my user, with only the directories/files specified in the nconf docs belonging to the _www user and/or group (things like output, temp, config, etc.). Thanks.


  • Installing Lubuntu 14.04.1 with forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with forcepae option; no errors have been found. Now, I am wondering how to find the error or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm


  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that are using WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to: WinInet Error 2: The system cannot find the file specified Now this error can pop up in many legitimate scenarios with WinInet such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or more specifically an Internet connection can’t be established. In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library) and all Adobe Air applications (which apparently uses WinInet for its basic HTTP stack) along with a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters all worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet. WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options and find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off also can often fix ‘real’ configuration errors. This time however this wasn’t a problem – nothing in the LAN configuration was set (all default). I also played with the Automatic detection of settings which also had no effect. I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy. And the Culprit is: Internet Explorer’s Work Offline Option The culprit in this situation was Internet Explorer which at some point, unknown to me switched into Offline Mode and was then shut down: When this Offline mode is checked when IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that it’s running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients). What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. 
    I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. Problem is, if you're not using IE all the time (as I do – rarely and just for testing, so usually a few commonly used URLs) and you left it in offline mode when you exit, offline mode stays set, which results in the above head scratcher. Ack. This isn't new behavior in IE 9 BTW – this behavior has always been there, but I think what's different is that IE now automatically switches between online and offline modes without notifying you at all, so it's hard to tell when you are offline.

    Fixing the Issue in your Code

    If you have an application that is using WinInet, there's a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it's been broken for some older releases (I can't confirm how far back though) – lots of posts seem to suggest the flag doesn't work. However, in IE 9 at least it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the Http Session handle. In FoxPro code I use:

        DECLARE INTEGER InternetSetOption ;
           IN WININET.DLL ;
           INTEGER HINTERNET,;
           INTEGER dwFlags,;
           INTEGER @dwValue,;
           INTEGER cbSize

        lnOptionValue = 1   && BOOL TRUE pass by reference

        *** Set needed SSL flags
        lnResult=InternetSetOption(this.hHttpSession,;
           INTERNET_OPTION_IGNORE_OFFLINE ,;  && 77
           @lnOptionValue ,4)

        DECLARE INTEGER HttpOpenRequest ;
           IN WININET.DLL ;
           INTEGER hHTTPHandle,;
           STRING lpzReqMethod,;
           STRING lpzPage,;
           STRING lpzVersion,;
           STRING lpzReferer,;
           STRING lpzAcceptTypes,;
           INTEGER dwFlags,;
           INTEGER dwContextw

        hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;
           lcVerb,;
           tcPage,;
           NULL,NULL,NULL,;
           INTERNET_FLAG_RELOAD + ;
           IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;
           this.nHTTPServiceFlags,0)

    And this fixes the issue, at least for IE 9. In my FoxPro wwHttp class I now call this by default to never get bitten by this again. This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it's just too unpredictable in handling errors, and the last thing you want is getting unpredictably stale data. Problem solved, but this behavior is – well, ugly. But then that's to be expected from an API that's based on Internet Explorer, eh?

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTTP, Windows
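    For .NET applications that use WinInet through P/Invoke, the same option can be applied before the request is opened. The following is a minimal C# sketch of that idea, assuming only the option value 77 shown in the FoxPro code above; it is illustrative rather than a drop-in implementation.

        using System;
        using System.Runtime.InteropServices;

        static class WinInetOfflineFix
        {
            // Option value taken from the FoxPro sample above (INTERNET_OPTION_IGNORE_OFFLINE = 77).
            const int INTERNET_OPTION_IGNORE_OFFLINE = 77;
            const int INTERNET_OPEN_TYPE_PRECONFIG = 0;

            [DllImport("wininet.dll", CharSet = CharSet.Auto, SetLastError = true)]
            static extern IntPtr InternetOpen(string agent, int accessType, string proxy, string proxyBypass, int flags);

            [DllImport("wininet.dll", SetLastError = true)]
            static extern bool InternetSetOption(IntPtr hInternet, int option, ref int buffer, int bufferLength);

            // Opens a WinInet session and tells it to ignore IE's Work Offline state.
            public static IntPtr OpenSessionIgnoringOfflineMode(string userAgent)
            {
                IntPtr hSession = InternetOpen(userAgent, INTERNET_OPEN_TYPE_PRECONFIG, null, null, 0);
                if (hSession != IntPtr.Zero)
                {
                    int ignoreOffline = 1; // BOOL TRUE
                    // Must be applied to the session handle before HttpOpenRequest is called.
                    InternetSetOption(hSession, INTERNET_OPTION_IGNORE_OFFLINE, ref ignoreOffline, sizeof(int));
                }
                return hSession;
            }
        }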


  • Converting a Visual Studio 2003 Web Project to a Visual Studio 2008 Web Application Project

    - by navaneeth
    This walkthrough describes how to convert a Visual Studio .NET 2002 or Visual Studio .NET 2003 Web project to a Visual Studio 2008 Web application project. The Visual Studio 2008 Web application project model is like the Visual Studio 2005 Web application project model, so the conversion processes are similar. For more information about Web application projects, see ASP.NET Web Application Projects. You can also convert from a Visual Studio .NET Web project to a Visual Studio 2008 Web site project. However, conversion to a Web application project is the supported approach, and it gives you the convenience of tools to help with the conversion. For example, when you convert to a Visual Studio 2008 Web application project, you can use the Visual Studio Conversion Wizard to automate part of the process. For information about how to convert a Visual Studio .NET Web project to a Visual Studio 2008 Web site, see Common Web Project Conversion Issues and Solutions.

    There are two parts involved in converting a Visual Studio .NET 2002 or 2003 Web project to a Visual Studio 2008 Web application project:

    - Converting the project. You can use the Visual Studio Conversion Wizard for the initial conversion of the project and Web.config files. You can later use the Convert To Web Application command to update the project's files and structure.
    - Upgrading the .NET Framework version of the project. You must upgrade the project's .NET Framework version to either .NET Framework 2.0 SP1 or to .NET Framework 3.5. This upgrade is required because Visual Studio 2008 cannot target earlier versions of the .NET Framework. You can perform this upgrade during the project conversion, by using the Conversion Wizard. Alternatively, you can upgrade the .NET Framework version after you convert the project.

    Note: You can change a project's .NET Framework version manually. To do so, in Visual Studio open the property pages for the project, click the Application tab, and then select a new version from the Target Framework list.

    This walkthrough illustrates the following tasks:

    - Opening the Visual Studio .NET project in Visual Studio 2008 and creating a backup of the project files.
    - Upgrading the .NET Framework version that the project targets.
    - Converting the project file and the Web.config file.
    - Converting ASP.NET code files.
    - Testing the converted project.

    Prerequisites

    To complete this walkthrough, you will need:

    - Visual Studio 2008.
    - A Web site project that was created in Visual Studio .NET version 2002 or 2003 and that compiles and runs without errors.

    Converting the Project and Upgrading the .NET Framework Version

    To begin, you open the project in Visual Studio 2008, which starts the conversion. It offers you an opportunity to back up the project before converting it.

    Note: It is strongly recommended that you back up the project. The conversion works on the original project files, which cannot be recovered if the conversion is not successful.

    To convert the project and back up the files, in Visual Studio 2008, on the File menu, click Open and then click Project. The Open Project dialog box is displayed. Browse to the folder that contains the project or solution file for the Visual Studio .NET project, select the file, and then click Open.

    Note: Make sure that you open the project by using the Open Project command.
    If you use the Open Web Site command, the project will be converted to the Web site project format.

    The Conversion Wizard opens and prompts you to create a backup before converting the project. To create the backup, click Yes. Click Browse, select the folder in which the backup should be created, and then click Next. Click Finish. The backup starts.

    Note: There might be significant delays as the Conversion Wizard copies files, with no updates or progress indicated. Wait until the process finishes before you continue.

    When the conversion finishes, the wizard prompts you to upgrade the targeted version of the .NET Framework for the project. To upgrade to the .NET Framework 3.5, click Yes. To upgrade the project to target the .NET Framework 2.0 SP1, click No. It is recommended that you leave the check box selected that asks whether you want to upgrade all Webs in the solution. If you upgrade to .NET Framework 3.5, the project's Web.config file is modified at the same time as the project file. When the upgrade and conversion have finished, a message is displayed that indicates that you have completed the first step in converting your project. Click OK. The wizard displays status information about the conversion. Click Close.

    Testing the Converted Project

    After the conversion has finished, you can test the project to make sure that it runs. This will also help you identify code in the project that must be updated.

    To verify that the project runs, first make any changes that you know are required for the code to run with the new version of the .NET Framework. In the Build menu, click Build. Any missing references or other compilation issues in the project are displayed in the Error List window. The most likely issues are missing assembly references or issues with dynamically generated types. In Solution Explorer, right-click the Web page that will be used to launch the application, and then click Set as Start Page. On the Debug menu, click Start Debugging. If debugging is not enabled, the Debugging Not Enabled dialog box is displayed; select the option to add a Web.config file that has debugging enabled, and then click OK. Verify that the converted project runs as expected. Do not continue with the conversion process until all build and run-time errors are resolved.

    Converting ASP.NET Code Files

    ASP.NET Web page files and user-control files in Visual Studio 2008 that use the code-behind model have an associated designer file. The files that you just converted will have an associated code-behind file, but no designer file. Therefore, the next step is to generate designer files.

    Note: Only ASP.NET Web pages and user controls that have their code in a separate code file require a separate designer file. For pages that have inline code and no associated code file, no designer file will be generated.

    To convert the ASP.NET code files, in Solution Explorer right-click the project node, and then click Convert To Web Application. The files are converted. Verify that the converted code files have a code file and a designer file. Build and run the project to verify the results of the conversion.
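    To give a rough idea of what the Convert To Web Application command produces, here is a hedged sketch of a designer file for a hypothetical page named Default.aspx containing a single Label control; the real file is auto-generated from your own markup and should not be edited by hand.

        // Default.aspx.designer.cs (illustrative only)
        namespace ConvertedWebApp   // hypothetical project namespace
        {
            public partial class _Default
            {
                /// <summary>
                /// Label1 control, declared so the code-behind class can reference it.
                /// The field is generated from the server controls found in Default.aspx.
                /// </summary>
                protected global::System.Web.UI.WebControls.Label Label1;
            }
        }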


  • Securing an ASP.NET MVC 2 Application

    - by rajbk
    This post attempts to look at some of the methods that can be used to secure an ASP.NET MVC 2 Application called Northwind Traders Human Resources.  The sample code for the project is attached at the bottom of this post. We are going to use a slightly modified Northwind database. The screen capture from SQL server management studio shows the change. I added a new column called Salary, inserted some random salaries for the employees and then turned off AllowNulls.   The reporting relationship for Northwind Employees is shown below.   The requirements for our application are as follows: Employees can see their LastName, FirstName, Title, Address and Salary Employees are allowed to edit only their Address information Employees can see the LastName, FirstName, Title, Address and Salary of their immediate reports Employees cannot see records of non immediate reports.  Employees are allowed to edit only the Salary and Title information of their immediate reports. Employees are not allowed to edit the Address of an immediate report Employees should be authenticated into the system. Employees by default get the “Employee” role. If a user has direct reports, they will also get assigned a “Manager” role. We use a very basic empId/pwd scheme of EmployeeID (1-9) and password test$1. You should never do this in an actual application. The application should protect from Cross Site Request Forgery (CSRF). For example, Michael could trick Steven, who is already logged on to the HR website, to load a page which contains a malicious request. where without Steven’s knowledge, a form on the site posts information back to the Northwind HR website using Steven’s credentials. Michael could use this technique to give himself a raise :-) UI Notes The layout of our app looks like so: When Nancy (EmpID 1) signs on, she sees the default page with her details and is allowed to edit her address. If Nancy attempts to view the record of employee Andrew who has an employeeID of 2 (Employees/Edit/2), she will get a “Not Authorized” error page. When Andrew (EmpID 2) signs on, he can edit the address field of his record and change the title and salary of employees that directly report to him. Implementation Notes All controllers inherit from a BaseController. The BaseController currently only has error handling code. When a user signs on, we check to see if they are in a Manager role. We then create a FormsAuthenticationTicket, encrypt it (including the roles that the employee belongs to) and add it to a cookie. private void SetAuthenticationCookie(int employeeID, List<string> roles) { HttpCookiesSection cookieSection = (HttpCookiesSection) ConfigurationManager.GetSection("system.web/httpCookies"); AuthenticationSection authenticationSection = (AuthenticationSection) ConfigurationManager.GetSection("system.web/authentication"); FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket( 1, employeeID.ToString(), DateTime.Now, DateTime.Now.AddMinutes(authenticationSection.Forms.Timeout.TotalMinutes), false, string.Join("|", roles.ToArray())); String encryptedTicket = FormsAuthentication.Encrypt(authTicket); HttpCookie authCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket); if (cookieSection.RequireSSL || authenticationSection.Forms.RequireSSL) { authCookie.Secure = true; } HttpContext.Current.Response.Cookies.Add(authCookie); } We read this cookie back in Global.asax and set the Context.User to be a new GenericPrincipal with the roles we assigned earlier. 
protected void Application_AuthenticateRequest(Object sender, EventArgs e){ if (Context.User != null) { string cookieName = FormsAuthentication.FormsCookieName; HttpCookie authCookie = Context.Request.Cookies[cookieName]; if (authCookie == null) return; FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); string[] roles = authTicket.UserData.Split(new char[] { '|' }); FormsIdentity fi = (FormsIdentity)(Context.User.Identity); Context.User = new System.Security.Principal.GenericPrincipal(fi, roles); }} We ensure that a user has permissions to view a record by creating a custom attribute AuthorizeToViewID that inherits from ActionFilterAttribute. public class AuthorizeToViewIDAttribute : ActionFilterAttribute{ IEmployeeRepository employeeRepository = new EmployeeRepository(); public override void OnActionExecuting(ActionExecutingContext filterContext) { if (filterContext.ActionParameters.ContainsKey("id") && filterContext.ActionParameters["id"] != null) { if (employeeRepository.IsAuthorizedToView((int)filterContext.ActionParameters["id"])) { return; } } throw new UnauthorizedAccessException("The record does not exist or you do not have permission to access it"); }} We add the AuthorizeToView attribute to any Action method that requires authorization. [HttpPost][Authorize(Order = 1)]//To prevent CSRF[ValidateAntiForgeryToken(Salt = Globals.EditSalt, Order = 2)]//See AuthorizeToViewIDAttribute class[AuthorizeToViewID(Order = 3)] [ActionName("Edit")]public ActionResult Update(int id){ var employeeToEdit = employeeRepository.GetEmployee(id); if (employeeToEdit != null) { //Employees can edit only their address //A manager can edit the title and salary of their subordinate string[] whiteList = (employeeToEdit.IsSubordinate) ? new string[] { "Title", "Salary" } : new string[] { "Address" }; if (TryUpdateModel(employeeToEdit, whiteList)) { employeeRepository.Save(employeeToEdit); return RedirectToAction("Details", new { id = id }); } else { ModelState.AddModelError("", "Please correct the following errors."); } } return View(employeeToEdit);} The Authorize attribute is added to ensure that only authorized users can execute that Action. We use the TryUpdateModel with a white list to ensure that (a) an employee is able to edit only their Address and (b) that a manager is able to edit only the Title and Salary of a subordinate. This works in conjunction with the AuthorizeToViewIDAttribute. The ValidateAntiForgeryToken attribute is added (with a salt) to avoid CSRF. The Order on the attributes specify the order in which the attributes are executed. The Edit View uses the AntiForgeryToken helper to render the hidden token: ......<% using (Html.BeginForm()) {%><%=Html.AntiForgeryToken(NorthwindHR.Models.Globals.EditSalt)%><%= Html.ValidationSummary(true, "Please correct the errors and try again.") %><div class="editor-label"> <%= Html.LabelFor(model => model.LastName) %></div><div class="editor-field">...... The application uses View specific models for ease of model binding. 
public class EmployeeViewModel{ public int EmployeeID; [Required] [DisplayName("Last Name")] public string LastName { get; set; } [Required] [DisplayName("First Name")] public string FirstName { get; set; } [Required] [DisplayName("Title")] public string Title { get; set; } [Required] [DisplayName("Address")] public string Address { get; set; } [Required] [DisplayName("Salary")] [Range(500, double.MaxValue)] public decimal Salary { get; set; } public bool IsSubordinate { get; set; }} To help with displaying readonly/editable fields, we use a helper method. //Simple extension method to display a TextboxFor or DisplayFor based on the isEditable variablepublic static MvcHtmlString TextBoxOrLabelFor<TModel, TProperty>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TProperty>> expression, bool isEditable){ if (isEditable) { return htmlHelper.TextBoxFor(expression); } else { return htmlHelper.DisplayFor(expression); }} The helper method is used in the view like so: <%=Html.TextBoxOrLabelFor(model => model.Title, Model.IsSubordinate)%> As mentioned in this post, there is a much easier way to update properties on an object. Download Demo Project VS 2008, ASP.NET MVC 2 RTM Remember to change the connectionString to point to your Northwind DB NorthwindHR.zip Feedback and bugs are always welcome :-)
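    Going back to the Update action above: the GET side of Edit is not shown in the post excerpt. A hedged sketch, reusing the same repository and attributes from the post, might look like this (the NotFound view name is an assumption).

        // See AuthorizeToViewIDAttribute class -- same guard as the POST action
        [Authorize(Order = 1)]
        [AuthorizeToViewID(Order = 2)]
        public ActionResult Edit(int id)
        {
            // employeeRepository.GetEmployee is the repository call used in the post's Update action.
            var employeeToEdit = employeeRepository.GetEmployee(id);
            if (employeeToEdit == null)
            {
                return View("NotFound"); // hypothetical view name
            }
            return View(employeeToEdit);
        }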


  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally, it shows you how to add a variable and a connection too. It covers the five most common configurations:

    - Configuration File
    - Indirect Configuration File
    - SQL Server
    - Indirect SQL Server
    - Environment Variable

    For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function ApplyConfig to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set:

    Configuration File -- The full path and file name of an XML configuration file. The file can contain one or more configurations and includes the target path and new value to set.

    Indirect Configuration File -- An environment variable the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above.

    SQL Server -- A three-part configuration string, with each part being quote delimited and separated by a semi-colon. The first part is the connection manager name; the connection tells you which server and database to look in for the configuration table. The second part is the name of the configuration table; the table is of a standard format (use the Package Configuration Wizard to help create an example, or see the sample script files below) and contains one or more rows, or configuration items, each with a target path and new value. The third and final part is the optional filter name; a configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.

    Indirect SQL Server -- An environment variable the value of which is the three-part configuration string as per the SQL Server type described above.

    Environment Variable -- An environment variable the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], the equivalent of which can be seen in the screenshot below, with the object being our variable called Variable, and the property to set being the Value property of that variable object.

    The configurations as seen when opening the generated package in BIDS:

    The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
namespace Konesans.Dts.Samples { using System; using Microsoft.SqlServer.Dts.Runtime; public class PackageConfigurations { public void CreatePackage() { // Create a new package Package package = new Package(); package.Name = "ConfigurationSample"; // Add a variable, the target for our configurations package.Variables.Add("Variable", false, "User", 0); // Add a connection, for SQL configurations // Add the SQL OLE-DB connection ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB"); connectionManagerOleDb.Name = "SQLConnection"; connectionManagerOleDb.ConnectionString = "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;"; // Add our example configurations, first must enable package setting package.EnableConfigurations = true; // Direct configuration file, see sample file this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile, "C:\\Temp\\XmlConfig.dtsConfig", string.Empty); // Indirect configuration file, the emvironment variable XmlConfigFileEnvironmentVariable // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile, "XmlConfigFileEnvironmentVariable", string.Empty); // Direct SQL Server configuration, uses the SQLConnection package connection to read // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter" this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer, "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty); // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable" // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter"; this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer, "SQLServerEnvironmentVariable", string.Empty); // Direct environment variable, the value of the EnvironmentVariable environment variable is // applied to the target property, the value of the "User::Variable" package variable this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable, "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]"); #if DEBUG // Save package to disk, DEBUG only new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null); Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name); #endif // Execute package package.Execute(); // Basic check for warnings foreach (DtsWarning warning in package.Warnings) { Console.WriteLine("WarningCode : {0}", warning.WarningCode); Console.WriteLine(" SubComponent : {0}", warning.SubComponent); Console.WriteLine(" Description : {0}", warning.Description); Console.WriteLine(); } // Basic check for errors foreach (DtsError error in package.Errors) { Console.WriteLine("ErrorCode : {0}", error.ErrorCode); Console.WriteLine(" SubComponent : {0}", error.SubComponent); Console.WriteLine(" Description : {0}", error.Description); Console.WriteLine(); } package.Dispose(); } /// <summary> /// Add or update an package configuration. 
/// </summary> /// <param name="package">The package.</param> /// <param name="name">The configuration name.</param> /// <param name="type">The type of configuration</param> /// <param name="setting">The configuration setting.</param> /// <param name="target">The target of the configuration, leave blank if not required.</param> internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target) { Configurations configurations = package.Configurations; Configuration configuration; if (configurations.Contains(name)) { configuration = configurations[name]; } else { configuration = configurations.Add(); } configuration.Name = name; configuration.ConfigurationType = type; configuration.ConfigurationString = setting; configuration.PackagePath = target; } } } The following table lists the environment variables required for the full example to work along with some sample values. Variable Sample value EnvironmentVariable 1 SQLServerEnvironmentVariable "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter"; XmlConfigFileEnvironmentVariable C:\Temp\XmlConfig.dtsConfig Sample code, package and configuration file. ConfigurationApplication.cs ConfigurationSample.dtsx XmlConfig.dtsConfig
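    As a quick sanity check, the package saved by the sample can be reloaded from disk and its configurations listed. The following is a small hedged sketch that assumes the same C:\Temp path used in the sample above.

        namespace Konesans.Dts.Samples
        {
            using System;
            using Microsoft.SqlServer.Dts.Runtime;

            public class ConfigurationInspector
            {
                public void ListConfigurations()
                {
                    // Reload the package written by the sample (path is an assumption).
                    Package package = new Application().LoadPackage(@"C:\Temp\ConfigurationSample.dtsx", null);

                    foreach (Configuration configuration in package.Configurations)
                    {
                        Console.WriteLine("Name   : {0}", configuration.Name);
                        Console.WriteLine("Type   : {0}", configuration.ConfigurationType);
                        Console.WriteLine("String : {0}", configuration.ConfigurationString);
                        Console.WriteLine("Target : {0}", configuration.PackagePath);
                        Console.WriteLine();
                    }

                    package.Dispose();
                }
            }
        }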


  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
    Are you worried that your computer’s hard drive could die without any warning?  Here’s how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers.  As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work.  Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble.  It monitors many indicators, including heat, read/write errors, total lifespan, and more. It then notifies you via a taskbar popup or email that problems have been detected.  This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it’s too late. Getting Started Head over to the Acronis site to download Drive Monitor (link below).  You’ll need to enter your name and email, and then you can download this free tool. Also, note that the download page may ask if you want to include a trial of their for-pay backup program.  If you wish to simply install the Drive Monitor utility, click Continue without adding. Run the installer when the download is finished.  Follow the prompts and install as normal. Once it’s installed, you can quickly get an overview of your hard drives’ health.  Note that it shows 3 categories: Disk problems, Acronis backup, and Critical Events.  On our computer, we had Seagate DiskWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it. Drive Monitor stays running in your tray even when the application window is closed.  It will keep monitoring your hard drives, and will alert you if there’s a problem. Find Detailed Information About Your Hard Drives Acronis’ simple interface lets you quickly see an overview of how the drives on your computer are performing.  If you’d like more information, click the link under the description.  Here we see that one of our drives have overheated, so click Show disks to get more information. Now you can select each of your drives and see more information about them.  From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that it’s health is good.  However, it is running at 113F, which is over the recommended max of 107F.   The S.M.A.R.T. parameters tab gives us more detailed information about our drive.  Most users wouldn’t know what an accepted value would be, so it also shows the status.  If the value is within the accepted parameters, it will report OK; otherwise, it will show that has a problem in this area. One very interesting piece of information we can see is the total number of Power-On Hours, Start/Stop Count, and Power Cycle Count.  These could be useful indicators to check if you’re considering purchasing a second hand computer.  Simply load this program, and you’ll get a better view of how long it’s been in use. Finally, the Events tab shows each time the program gave a warning.  We can see that our drive, which had been acting flaky already, is routinely overheating even when our other hard drive was running in normal temperature ranges. Monitor Acronis Backups And Critical Errors In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.  
    You can access these from the front page, or via the links on the left-hand sidebar. If you have any edition of any Acronis Backup product installed, it will show that it was detected. Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image. If no Acronis backup software was installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software. If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you're monitoring yourself. Finally, you can view any detected critical events from the Critical events tab on the left.

    Get Emailed When There's a Problem

    One of Drive Monitor's best features is the ability to send you an email whenever there's a problem. Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives' health even when you're not nearby. To set this up, click Options in the top left corner. Select Alerts on the left, and then click the Change settings link to set up your email account. Enter the email address at which you wish to receive alerts, and a name for the program. Then, enter the outgoing mail server settings for your email. If you have a Gmail account, enter the following information:

    - Outgoing mail server (SMTP): smtp.gmail.com
    - Port: 587
    - Username and Password: your Gmail address and password

    Check the Use encryption box, and then select TLS from the encryption options. It will now send a test message to your email account, so check and make sure it sent OK. Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also to have it send regular disk status reports.

    Conclusion

    Whether you've got a brand new hard drive or one that's seen better days, knowing the real health of your drive is one of the best ways to be prepared before disaster strikes. It's no substitute for regular backups, but it can help you avert problems. Acronis Drive Monitor is a nice tool for this, and although we wish it wasn't so centered around their backup offerings, we still found it useful.

    Link: Download Acronis Drive Monitor (registration required)
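    As an aside on the Gmail settings above, the same server, port, and TLS combination can be verified outside of Drive Monitor with a few lines of .NET. This is a hedged sketch using System.Net.Mail with placeholder credentials.

        using System.Net;
        using System.Net.Mail;

        class SmtpSettingsTest
        {
            static void Main()
            {
                // Same server, port, and TLS settings described above;
                // the address and password are placeholders for your own.
                var client = new SmtpClient("smtp.gmail.com", 587)
                {
                    EnableSsl = true, // TLS
                    Credentials = new NetworkCredential("you@gmail.com", "your-password")
                };

                client.Send("you@gmail.com", "you@gmail.com",
                    "Drive Monitor SMTP test", "If you can read this, the settings work.");
            }
        }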


  • Tricks and Optimizations for your Sitecore website

    - by amaniar
    When working with Sitecore there are some optimizations/configurations I usually repeat in order to make my app production ready. Following is a small list I have compiled from experience, Sitecore documentation, communicating with Sitecore Engineers etc. This is not supposed to be technically complete and might not be fit for all environments.   Simple configurations that can make a difference: 1) Configure Sitecore Caches. This is the most straight forward and sure way of increasing the performance of your website. Data and item cache sizes (/databases/database/ [id=web] ) should be configured as needed. You may start with a smaller number and tune them as needed. <cacheSizes hint="setting"> <data>300MB</data> <items>300MB</items> <paths>5MB</paths> <standardValues>5MB</standardValues> </cacheSizes> Tune the html, registry etc cache sizes for your website.   <cacheSizes> <sites> <website> <html>300MB</html> <registry>1MB</registry> <viewState>10MB</viewState> <xsl>5MB</xsl> </website> </sites> </cacheSizes> Tune the prefetch cache settings under the App_Config/Prefetch/ folder. Sample /App_Config/Prefetch/Web.Config: <configuration> <cacheSize>300MB</cacheSize> <!--preload items that use this template--> <template desc="mytemplate">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</template> <!--preload this item--> <item desc="myitem">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX }</item> <!--preload children of this item--> <children desc="childitems">{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}</children> </configuration> Break your page into sublayouts so you may cache most of them. Read the caching configuration reference: http://sdn.sitecore.net/upload/sitecore6/sc62keywords/cache_configuration_reference_a4.pdf   2) Disable Analytics for the Shell Site <site name="shell" virtualFolder="/sitecore/shell" physicalFolder="/sitecore/shell" rootPath="/sitecore/content" startItem="/home" language="en" database="core" domain="sitecore" loginPage="/sitecore/login" content="master" contentStartItem="/Home" enableWorkflow="true" enableAnalytics="false" xmlControlPage="/sitecore/shell/default.aspx" browserTitle="Sitecore" htmlCacheSize="2MB" registryCacheSize="3MB" viewStateCacheSize="200KB" xslCacheSize="5MB" />   3) Increase the Check Interval for the MemoryMonitorHook so it doesn’t run every 5 seconds (default). <hook type="Sitecore.Diagnostics.MemoryMonitorHook, Sitecore.Kernel"> <param desc="Threshold">800MB</param> <param desc="Check interval">00:05:00</param> <param desc="Minimum time between log entries">00:01:00</param> <ClearCaches>false</ClearCaches> <GarbageCollect>false</GarbageCollect> <AdjustLoadFactor>false</AdjustLoadFactor> </hook>   4) Set Analytics.PeformLookup (Sitecore.Analytics.config) to false if your environment doesn’t have access to the internet or you don’t intend to use reverse DNS lookup. 
<setting name="Analytics.PerformLookup" value="false" />   5) Set the value of the “Media.MediaLinkPrefix” setting to “-/media”: <setting name="Media.MediaLinkPrefix" value="-/media" /> Add the following line to the customHandlers section: <customHandlers> <handler trigger="-/media/" handler="sitecore_media.ashx" /> <handler trigger="~/media/" handler="sitecore_media.ashx" /> <handler trigger="~/api/" handler="sitecore_api.ashx" /> <handler trigger="~/xaml/" handler="sitecore_xaml.ashx" /> <handler trigger="~/icon/" handler="sitecore_icon.ashx" /> <handler trigger="~/feed/" handler="sitecore_feed.ashx" /> </customHandlers> Link: http://squad.jpkeisala.com/2011/10/sitecore-media-library-performance-optimization-checklist/   6) Performance counters should be disabled in production if not being monitored <setting name="Counters.Enabled" value="false" />   7) Disable Item/Memory/Timing threshold warnings. Due to the nature of this component, it brings no value in production. <!--<processor type="Sitecore.Pipelines.HttpRequest.StartMeasurements, Sitecore.Kernel" />--> <!--<processor type="Sitecore.Pipelines.HttpRequest.StopMeasurements, Sitecore.Kernel"> <TimingThreshold desc="Milliseconds">1000</TimingThreshold> <ItemThreshold desc="Item count">1000</ItemThreshold> <MemoryThreshold desc="KB">10000</MemoryThreshold> </processor>—>   8) The ContentEditor.RenderCollapsedSections setting is a hidden setting in the web.config file, which by default is true. Setting it to false will improve client performance for authoring environments. <setting name="ContentEditor.RenderCollapsedSections" value="false" />   9) Add a machineKey section to your Web.Config file when using a web farm. Link: http://msdn.microsoft.com/en-us/library/ff649308.aspx   10) If you get errors in the log files similar to: WARN Could not create an instance of the counter 'XXX.XXX' (category: 'Sitecore.System') Exception: System.UnauthorizedAccessException Message: Access to the registry key 'Global' is denied. Make sure the ApplicationPool user is a member of the system “Performance Monitor Users” group on the server.   11) Disable WebDAV configurations on the CD Server if not being used. More: http://sitecoreblog.alexshyba.com/2011/04/disable-webdav-in-sitecore.html   12) Change Log4Net settings to only log Errors on content delivery environments to avoid unnecessary logging. <root> <priority value="ERROR" /> <appender-ref ref="LogFileAppender" /> </root>   13) Disable Analytics for any content item that doesn’t add value. For example a page that redirects to another page.   14) When using Web User Controls avoid registering them on the page the asp.net way: <%@ Register Src="~/layouts/UserControls/MyControl.ascx" TagName="MyControl" TagPrefix="uc2" %> Use Sublayout web control instead – This way Sitecore caching could be leveraged <sc:Sublayout ID="ID" Path="/layouts/UserControls/MyControl.ascx" Cacheable="true" runat="server" />   15) Avoid querying for all children recursively when all items are direct children. Sitecore.Context.Database.SelectItems("/sitecore/content/Home//*"); //Use: Sitecore.Context.Database.GetItem("/sitecore/content/Home");   16) On IIS — you enable static & dynamic content compression on CM and CD More: http://technet.microsoft.com/en-us/library/cc754668%28WS.10%29.aspx   17) Enable HTTP Keep-alive and content expiration in IIS.   18) Use GUID’s when accessing items and fields instead of names or paths. Its faster and wont break your code when things get moved or renamed. 
Context.Database.GetItem("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}") Context.Item.Fields["{89D38A8F-394E-45B0-826B-1A826CF4046D}"]; //is better than Context.Database.GetItem("/Home/MyItem") Context.Item.Fields["FieldName"]   Hope this helps.
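    A small follow-on to point 18: rather than scattering GUID literals through the code, it can help to collect the well-known IDs in one place. This is a hedged sketch only; the class and member names are made up, and the GUIDs are the ones already used in the examples above.

        using Sitecore.Data;

        // Hypothetical central registry of well-known item and field IDs.
        public static class WellKnownIds
        {
            public static readonly ID MyItem = new ID("{324DFD16-BD4F-4853-8FF1-D663F6422DFF}");
            public static readonly ID MyField = new ID("{89D38A8F-394E-45B0-826B-1A826CF4046D}");
        }

        // Usage:
        // var item = Sitecore.Context.Database.GetItem(WellKnownIds.MyItem);
        // var value = item[WellKnownIds.MyField];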


  • Bluetooth not working on Ubuntu 13.10

    - by iacopo
    I upgrated ubuntu from 13.4 to 13.10 and my bluetooth stopped working. When I open bluetooth I'm able to put it ON but the visibility doesn't show anything and didn't detect any device. when I: dmesg | grep Blue [ 2.046249] usb 3-1: Product: Bluetooth V2.0 Dongle [ 2.046252] usb 3-1: Manufacturer: Bluetooth v2.0 [ 15.255710] Bluetooth: Core ver 2.16 [ 15.255748] Bluetooth: HCI device and connection manager initialized [ 15.255759] Bluetooth: HCI socket layer initialized [ 15.255765] Bluetooth: L2CAP socket layer initialized [ 15.255776] Bluetooth: SCO socket layer initialized [ 20.110379] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.110386] Bluetooth: BNEP filters: protocol multicast [ 20.110400] Bluetooth: BNEP socket layer initialized [ 20.120635] Bluetooth: RFCOMM TTY layer initialized [ 20.120656] Bluetooth: RFCOMM socket layer initialized [ 20.120660] Bluetooth: RFCOMM ver 1.11 when I digit: lsusb Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 0bc2:2300 Seagate RSS LLC Expansion Portable Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 006 Device 002: ID 0e6a:6001 Megawin Technology Co., Ltd GEMBIRD Flexible keyboard KB-109F-B-DE Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 002: ID 13ee:0001 MosArt Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 002: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub when I: hciconfig -a hci0: Type: BR/EDR Bus: USB BD Address: 00:1B:10:00:2A:EC ACL MTU: 1017:8 SCO MTU: 64:0 DOWN RX bytes:457 acl:0 sco:0 events:16 errors:0 TX bytes:68 acl:0 sco:0 commands:16 errors:0 Features: 0xff 0xff 0x8d 0xfe 0x9b 0xf9 0x00 0x80 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: Link mode: SLAVE ACCEPT when I digit: rfkill list 0: phy0: Wireless LAN Soft blocked: yes Hard blocked: no 1: hci0: Bluetooth Soft blocked: no Hard blocked: no when I digit: sudo gedit /etc/bluetooth/main.conf [General] # List of plugins that should not be loaded on bluetoothd startup #DisablePlugins = network,input # Default adaper name # %h - substituted for hostname # %d - substituted for adapter id Name = %h-%d # Default device class. Only the major and minor device class bits are # considered. Class = 0x000100 # How long to stay in discoverable mode before going back to non-discoverable # The value is in seconds. Default is 180, i.e. 3 minutes. # 0 = disable timer, i.e. stay discoverable forever DiscoverableTimeout = 0 # How long to stay in pairable mode before going back to non-discoverable # The value is in seconds. Default is 0. # 0 = disable timer, i.e. stay pairable forever PairableTimeout = 0 # Use some other page timeout than the controller default one # which is 16384 (10 seconds). PageTimeout = 8192 # Automatic connection for bonded devices driven by platform/user events. # If a platform plugin uses this mechanism, automatic connections will be # enabled during the interval defined below. Initially, this feature # intends to be used to establish connections to ATT channels. AutoConnectTimeout = 60 # What value should be assumed for the adapter Powered property when # SetProperty(Powered, ...) hasn't been called yet. 
Defaults to true InitiallyPowered = true # Remember the previously stored Powered state when initializing adapters RememberPowered = false # Use vendor id source (assigner), vendor, product and version information for # DID profile support. The values are separated by ":" and assigner, VID, PID # and version. # Possible vendor id source values: bluetooth, usb (defaults to usb) #DeviceID = bluetooth:1234:5678:abcd # Do reverse service discovery for previously unknown devices that connect to # us. This option is really only needed for qualification since the BITE tester # doesn't like us doing reverse SDP for some test cases (though there could in # theory be other useful purposes for this too). Defaults to true. ReverseServiceDiscovery = true # Enable name resolving after inquiry. Set it to 'false' if you don't need # remote devices name and want shorter discovery cycle. Defaults to 'true'. NameResolving = true # Enable runtime persistency of debug link keys. Default is false which # makes debug link keys valid only for the duration of the connection # that they were created for. DebugKeys = false # Enable the GATT functionality. Default is false EnableGatt = false when I digit: dmesg | grep Bluetooth [ 2.013041] usb 3-1: Product: Bluetooth V2.0 Dongle [ 2.013049] usb 3-1: Manufacturer: Bluetooth v2.0 [ 13.798293] Bluetooth: Core ver 2.16 [ 13.798338] Bluetooth: HCI device and connection manager initialized [ 13.798352] Bluetooth: HCI socket layer initialized [ 13.798357] Bluetooth: L2CAP socket layer initialized [ 13.798368] Bluetooth: SCO socket layer initialized [ 20.184162] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.184173] Bluetooth: BNEP filters: protocol multicast [ 20.184197] Bluetooth: BNEP socket layer initialized [ 20.238947] Bluetooth: RFCOMM TTY layer initialized [ 20.238983] Bluetooth: RFCOMM socket layer initialized [ 20.239018] Bluetooth: RFCOMM ver 1.11 When I digit: uname -a Linux casa-desktop 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux When I digit: lsmod Module Size Used by parport_pc 32701 0 rfcomm 69070 4 bnep 19564 2 ppdev 17671 0 ip6t_REJECT 12910 1 xt_hl 12521 6 ip6t_rt 13507 3 nf_conntrack_ipv6 18938 9 nf_defrag_ipv6 34616 1 nf_conntrack_ipv6 ipt_REJECT 12541 1 xt_LOG 17718 8 xt_limit 12711 11 xt_tcpudp 12884 32 xt_addrtype 12635 4 nf_conntrack_ipv4 15012 9 nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 xt_conntrack 12760 18 ip6table_filter 12815 1 ip6_tables 27025 1 ip6table_filter nf_conntrack_netbios_ns 12665 0 nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns nf_nat_ftp 12741 0 nf_nat 26653 1 nf_nat_ftp kvm_amd 59958 0 nf_conntrack_ftp 18608 1 nf_nat_ftp kvm 431315 1 kvm_amd nf_conntrack 91736 8 nf_nat_ftp,nf_conntrack_netbios_ns,nf_nat,xt_conntrack,nf_conntrack_broadcast,nf_conntrack_ftp,nf_conntrack_ipv4,nf_conntrack_ipv6 iptable_filter 12810 1 crct10dif_pclmul 14289 0 crc32_pclmul 13113 0 ip_tables 27239 1 iptable_filter snd_hda_codec_realtek 55704 1 ghash_clmulni_intel 13259 0 aesni_intel 55624 0 aes_x86_64 17131 1 aesni_intel snd_hda_codec_hdmi 41117 1 x_tables 34059 13 ip6table_filter,xt_hl,ip_tables,xt_tcpudp,xt_limit,xt_conntrack,xt_LOG,iptable_filter,ip6t_rt,ipt_REJECT,ip6_tables,xt_addrtype,ip6t_REJECT lrw 13257 1 aesni_intel snd_hda_intel 48171 5 gf128mul 14951 1 lrw glue_helper 13990 1 aesni_intel ablk_helper 13597 1 aesni_intel joydev 17377 0 cryptd 20329 3 ghash_clmulni_intel,aesni_intel,ablk_helper snd_hda_codec 188738 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel arc4 12608 2 
snd_hwdep 13602 1 snd_hda_codec rt2800pci 18690 0 snd_pcm 102033 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel radeon 1402449 3 rt2800lib 79963 1 rt2800pci btusb 28267 0 rt2x00pci 13287 1 rt2800pci rt2x00mmio 13603 1 rt2800pci snd_page_alloc 18710 2 snd_pcm,snd_hda_intel rt2x00lib 55238 4 rt2x00pci,rt2800lib,rt2800pci,rt2x00mmio snd_seq_midi 13324 0 mac80211 596969 3 rt2x00lib,rt2x00pci,rt2800lib snd_seq_midi_event 14899 1 snd_seq_midi ttm 83995 1 radeon snd_rawmidi 30095 1 snd_seq_midi cfg80211 479757 2 mac80211,rt2x00lib drm_kms_helper 52651 1 radeon snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi bluetooth 371880 12 bnep,btusb,rfcomm microcode 23518 0 eeprom_93cx6 13344 1 rt2800pci snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi crc_ccitt 12707 1 rt2800lib snd_timer 29433 2 snd_pcm,snd_seq snd 69141 21 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi psmouse 97626 0 drm 296739 5 ttm,drm_kms_helper,radeon k10temp 13126 0 soundcore 12680 1 snd serio_raw 13413 0 i2c_algo_bit 13413 1 radeon i2c_piix4 22106 0 video 19318 0 mac_hid 13205 0 lp 17759 0 parport 42299 3 lp,ppdev,parport_pc hid_generic 12548 0 usbhid 53014 0 hid 105818 2 hid_generic,usbhid pata_acpi 13038 0 usb_storage 62062 1 r8169 67341 0 sdhci_pci 18985 0 sdhci 42630 1 sdhci_pci mii 13934 1 r8169 pata_atiixp 13242 0 ohci_pci 13561 0 ahci 25819 2 libahci 31898 1 ahci Someone can help me?
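    One detail worth noting in the output above: hciconfig shows hci0 stuck in the DOWN state even though rfkill does not block Bluetooth, so a first thing to try (a hedged sketch, not a confirmed fix for this particular dongle) is bringing the adapter up by hand and restarting the Bluetooth service:

        sudo rfkill unblock bluetooth        # make sure nothing is soft-blocking the adapter
        sudo hciconfig hci0 up               # bring hci0 out of the DOWN state
        sudo service bluetooth restart       # restart bluetoothd so it re-reads the adapter
        hciconfig -a                         # the adapter should now report UP RUNNING

    If the adapter immediately drops back to DOWN, the problem is more likely in the kernel driver or the dongle itself.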

    Read the article

  • Wine PPA dependency problems after upgrading to 12.04

    - by Pablo
    I recently upgraded my Ubuntu up to 12.04. Untill that, i can't use synaptic at all. After trying to repair the issue through synaptics, it returns me: installArchives() failed: dpkg: dependency problems prevent configuration of wine1.4-i386:i386: wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however: Version of wine1.4-common on system is 1.4-0ubuntu4. dpkg: error processing wine1.4-i386:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4: wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however: Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3. wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however:No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already Package wine1.4-i386 is not installed. Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3. dpkg: error processing wine1.4 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-common: wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however: Package wine1.4 is not configured yet. dpkg: error processing wine1.4-common (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-amd64: wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however: Version of wine1.4-common on system is 1.4-0ubuntu4. dpkg: error processing wine1.4-amd64 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine: wine depends on wine1.4; however: Package wine1.4 is not configured yet. dpkg: error processing wine (--configure): dependency problems - leaving unconfigured dpkg: depeNo apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already ndency problems prevent configuration of playonlinux: playonlinux depends on wine | wine-unstable; however: Package wine is not configured yet. Package wine1.4 which provides wine is not configured yet. Package wine-unstable is not installed. dpkg: error processing playonlinux (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: wine1.4-i386:i386 wine1.4 wine1.4-common wine1.4-amd64 wine playonlinux Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) dpkg: dependency problems prevent configuration of wine1.4: wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however: Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3. wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however: Package wine1.4-i386 is not installed. Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3. dpkg: error processing wine1.4 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine: wine depends on wine1.4; however: Package wine1.4 is not configured yet. dpkg: error processing wine (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-common: wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however: Package wine1.4 is not configured yet. 
dpkg: error processing wine1.4-common (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-amd64: wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however: Version of wine1.4-common on system is 1.4-0ubuntu4. dpkg: error processing wine1.4-amd64 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-i386:i386: wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however: Version of wine1.4-common on system is 1.4-0ubuntu4. dpkg: error processing wine1.4-i386:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of playonlinux: playonlinux depends on wine | wine-unstable; however: Package wine is not configured yet. Package wine1.4 which provides wine is not configured yet. Package wine-unstable is not installed. dpkg: error processing playonlinux (--configure): dependency problems - leaving unconfigured I tried the following: sudo apt-get clean sudo apt-get autoclean sudo apt-get install -f And returnsme: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: wine1.4 wine1.4-common Suggested packages: dosbox The following packages will be upgraded: wine1.4 wine1.4-common 2 upgraded, 0 newly installed, 0 to remove and 110 not upgraded. 6 not fully installed or removed. Need to get 0 B/1,126 kB of archives. After this operation, 9,216 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of wine1.4-i386:i386: wine1.4-i386:i386 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however: Version of wine1.4-common on system is 1.4-0ubuntu4. dpkg: error processing wine1.4-i386:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4: wine1.4 depends on wine1.4-amd64 (= 1.4-0ubuntu4); however: Version of wine1.4-amd64 on system is 1.4.1-0ubuntu1~precise1~ppa3. wine1.4 depends on wine1.4-i386 (= 1.4-0ubuntu4); however: Package wine1.4-i386 is not installed. Version of wine1.4-i386:i386 on system is 1.4.1-0ubuntu1~precise1~ppa3. No apport report written because the error message indicates its a followup error from a previous failure. dpkg: error processing wine1.4 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-common: wine1.4-common depends on wine1.4 (= 1.4-0ubuntu4); however: Package wine1.4 is not configured yet. dpkg: error processing wine1.4-common (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine1.4-amd64: wine1.4-amd64 depends on wine1.4-common (= 1.4.1-0ubuntu1~precise1~ppa3); however: Version of wine1.4-common on system is 1.4-0ubuntu4. dpkg: error processing wine1.4-amd64 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of wine: wine depends on wine1.4; however: Package wine1.4 is not configured yet. dpkg: error processing wine (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of playonlinux: playonlinux depends on wine | wine-unstable; however: PackagNo apport report written because the error message indicates its a followup error from a previous failure. 
No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already e wine is not configured yet. Package wine1.4 which provides wine is not configured yet. Package wine-unstable is not installed. dpkg: error processing playonlinux (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: wine1.4-i386:i386 wine1.4 wine1.4-common wine1.4-amd64 wine playonlinux E: Sub-process /usr/bin/dpkg returned an error code (1) Can anyone help me? I've been searching for a solution for days and still can't solve it. Thanks! Pablo
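    The mixed version strings in the errors above (1.4-0ubuntu4 from the Ubuntu archive versus 1.4.1-0ubuntu1~precise1~ppa3 from a PPA) are the telltale sign of a half-upgraded PPA. One common way out, sketched here rather than taken from the original post, is to let ppa-purge downgrade every wine package back to the archive versions; the PPA name below is only an assumption, so substitute whichever wine PPA actually appears in /etc/apt/sources.list.d/:

        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:ubuntu-wine/ppa   # assumed PPA name; adjust to the one actually enabled
        sudo apt-get -f install              # let apt finish configuring the downgraded packages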

    Read the article

  • What to Do When Windows Won’t Boot

    - by Chris Hoffman
    You turn on your computer one day and Windows refuses to boot — what do you do? “Windows won’t boot” is a common symptom with a variety of causes, so you’ll need to perform some troubleshooting. Modern versions of Windows are better at recovering from this sort of thing. Where Windows XP might have stopped in its tracks when faced with this problem, modern versions of Windows will try to automatically run Startup Repair. First Things First Be sure to think about changes you’ve made recently — did you recently install a new hardware driver, connect a new hardware component to your computer, or open your computer’s case and do something? It’s possible the hardware driver is buggy, the new hardware is incompatible, or that you accidentally unplugged something while working inside your computer. The Computer Won’t Power On At All If your computer won’t power on at all, ensure it’s plugged into a power outlet and that the power connector isn’t loose. If it’s a desktop PC, ensure the power switch on the back of its case — on the power supply — is set to the On position. If it still won’t power on at all, it’s possible you disconnected a power cable inside its case. If you haven’t been messing around inside the case, it’s possible the power supply is dead. In this case, you’ll have to get your computer’s hardware fixed or get a new computer. Be sure to check your computer monitor — if your computer seems to power on but your screen stays black, ensure your monitor is powered on and that the cable connecting it to your computer’s case is plugged in securely at both ends. The Computer Powers On And Says No Bootable Device If your computer is powering on but you get a black screen that says something like “no bootable device” or another sort of “disk error” message, your computer can’t seem to boot from the hard drive that Windows was installed on. Enter your computer’s BIOS or UEFI firmware setup screen and check its boot order setting, ensuring that it’s set to boot from its hard drive. If the hard drive doesn’t appear in the list at all, it’s possible your hard drive has failed and can no longer be booted from. In this case, you may want to insert Windows installation or recovery media and run the Startup Repair operation. This will attempt to make Windows bootable again. For example, if something overwrote your Windows drive’s boot sector, this will repair the boot sector. If the recovery environment won’t load or doesn’t see your hard drive, you likely have a hardware problem. Be sure to check your BIOS or UEFI’s boot order first if the recovery environment won’t load. You can also attempt to manually fix Windows boot loader problems using the fixmbr and fixboot commands. Modern versions of Windows should be able to fix this problem for you with the Startup Repair wizard, so you shouldn’t actually have to run these commands yourself. Windows Freezes or Crashes During Boot If Windows seems to start booting but fails partway through, you may be facing either a software or hardware problem. If it’s a software problem, you may be able to fix it by performing a Startup Repair operation. If you can’t do this from the boot menu, insert a Windows installation disc or recovery disk and use the startup repair tool from there. If this doesn’t help at all, you may want to reinstall Windows or perform a Refresh or Reset on Windows 8. 
If the computer encounters errors while attempting to perform startup repair or reinstall Windows, or the reinstall process works properly and you encounter the same errors afterwards, you likely have a hardware problem. Windows Starts and Blue Screens or Freezes If Windows crashes or blue-screens on you every time it boots, you may be facing a hardware or software problem. For example, malware or a buggy driver may be loading at boot and causing the crash, or your computer’s hardware may be malfunctioning. To test this, boot your Windows computer in safe mode. In safe mode, Windows won’t load typical hardware drivers or any software that starts automatically at startup. If the computer is stable in safe mode, try uninstalling any recently installed hardware drivers, performing a system restore, and scanning for malware. If you’re lucky, one of these steps may fix your software problem and allow you to boot Windows normally. If your problem isn’t fixed, try reinstalling Windows or performing a Refresh or Reset on Windows 8. This will reset your computer back to its clean, factory-default state. If you’re still experiencing crashes, your computer likely has a hardware problem. Recover Files When Windows Won’t Boot If you have important files that will be lost and want to back them up before reinstalling Windows, you can use a Windows installer disc or Linux live media to recover the files. These run entirely from a CD, DVD, or USB drive and allow you to copy your files to another external media, such as another USB stick or an external hard drive. If you’re incapable of booting a Windows installer disc or Linux live CD, you may need to go into your BIOS or UEFI and change the boot order setting. If even this doesn’t work — or if you can boot from the devices and your computer freezes or you can’t access your hard drive — you likely have a hardware problem. You can try pulling the computer’s hard drive, inserting it into another computer, and recovering your files that way. Following these steps should fix the vast majority of Windows boot issues — at least the ones that are actually fixable. The dark cloud that always hangs over such issues is the possibility that the hard drive or another component in the computer may be failing. Image Credit: Karl-Ludwig G. Poggemann on Flickr, Tzuhsun Hsu on Flickr     
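    For reference, the fixmbr and fixboot repairs mentioned above are run from the recovery environment's Command Prompt; on Windows Vista and later they are exposed through bootrec.exe. A hedged sketch, assuming you can reach that prompt from installation or recovery media:

        bootrec /fixmbr       (writes a fresh master boot record to the system disk)
        bootrec /fixboot      (writes a new boot sector to the system partition)
        bootrec /rebuildbcd   (scans for Windows installations and rebuilds the BCD store)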

    Read the article

  • Stuck removing ffmpeg from system due to package problems

    - by Michiel
    I'd be very grateful to get a bit of support on the following situation: Using Ubuntu 12.04, and I used the PPA of Jon Severinson for ffmpeg. I had a problem playing movies with mplayer, so I decided to remove the PPA and try to use libav only after I read this article about libav vs ffmpeg. I first removed the PPA from the list, then apt-get update, etc. But I couldn't remove ffmpeg due to dependency-errors and got stuck in what seemed some kind of loop. Then I found a suggestion here. I followed the steps, and ended up force-removing libavcodec-extra-53. Because, apparently that was "what got stuff moving again". At the moment I can't. Now Ubuntu is reporting a broken package (BrokenCount 0). Errors: De afhankelijkheden van de volgende pakketten konden niet geïnstalleerd worden: audacious-plugins: Depends: audacious-plugins-data (= 3.2.1-4ubuntu1) maar 3.2.1-4ubuntu1 is geïnstalleerd Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libcurl3-gnutls (= 7.16.2-1) maar 7.22.0-3ubuntu4 is geïnstalleerd Depends: libgcc1 (= 1:4.1.1) maar 1:4.6.3-1ubuntu5 is geïnstalleerd Depends: libpulse0 (= 1:0.99.1) maar 1:1.1-0ubuntu15.1 is geïnstalleerd Depends: libstdc++6 (= 4.6) maar 4.6.3-1ubuntu5 is geïnstalleerd Depends: libwavpack1 (= 4.40.0) maar 4.60.1-2 is geïnstalleerd Depends: libxcomposite1 (= 1:0.3-1) maar 1:0.4.3-2build1 is geïnstalleerd Depends: zlib1g (= 1:1.1.4) maar 1:1.2.3.4.dfsg-3ubuntu4 is geïnstalleerd libavfilter-extra-2: Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libswresample-extra-0 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libswscale-extra-2 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd libavformat-extra-53: Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd Depends: libavutil-extra-51 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd libk3b6-extracodecs: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libc6 (= 2.14) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libkdecore5 (= 4:4.4.4) maar 4:4.8.4a-0ubuntu0.2 is geïnstalleerd Depends: libkio5 (= 4:4.4.4) maar 4:4.8.4a-0ubuntu0.2 is geïnstalleerd Depends: libqtcore4 (= 4:4.7.0~beta1) maar 4:4.8.1-0ubuntu4.2 is geïnstalleerd Depends: libstdc++6 (= 4.1.1) maar 4.6.3-1ubuntu5 is geïnstalleerd libquicktime2: Depends: libavcodec-extra-53 (= 4:0.8~beta2-2) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8~beta2-2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd libxine1-ffmpeg: Depends: libavcodec-extra-53 (= 4:0.7.3-1) maar het is niet geïnstalleerd Depends: libavutil-extra-51 (= 4:0.7.3-1) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd 
Depends: libc6 (= 2.4) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.7.3-1) maar het is niet geïnstalleerd Depends: libxine1-bin (= 1.1.20-2build1) maar 1.1.20-2build1 is geïnstalleerd mencoder: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd mplayer: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd vlc: Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) maar 2.0.3-0ubuntu0.12.04.1 is geïnstalleerd Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libc6 (= 2.15) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libfreetype6 (= 2.2.1) maar 2.4.8-1ubuntu2 is geïnstalleerd Depends: libgcc1 (= 1:4.1.1) maar 1:4.6.3-1ubuntu5 is geïnstalleerd Depends: libqtcore4 (= 4:4.8.0) maar 4:4.8.1-0ubuntu4.2 is geïnstalleerd Depends: libqtgui4 (= 4:4.7.0~beta1) maar 4:4.8.1-0ubuntu4.2 is geïnstalleerd Depends: libstdc++6 (= 4.6) maar 4.6.3-1ubuntu5 is geïnstalleerd Depends: zlib1g (= 1:1.2.3.3.dfsg) maar 1:1.2.3.4.dfsg-3ubuntu4 is geïnstalleerd vlc-nox: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libc6 (= 2.15) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libfontconfig1 (= 2.8.0) maar 2.8.0-3ubuntu9.1 is geïnstalleerd Depends: libfreetype6 (= 2.2.1) maar 2.4.8-1ubuntu2 is geïnstalleerd Depends: libgcc1 (= 1:4.1.1) maar 1:4.6.3-1ubuntu5 is geïnstalleerd Depends: libgnutls26 (= 2.12.6.1-0) maar 2.12.14-5ubuntu3.1 is geïnstalleerd Depends: libmpcdec6 (= 1:0.1~r435) maar 2:0.1~r459-1ubuntu1 is geïnstalleerd Depends: libncursesw5 (= 5.6+20070908) maar 5.9-4 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libsmbclient (= 3.0.24) maar 2:3.6.3-2ubuntu2.3 is geïnstalleerd Depends: libstdc++6 (= 4.6) maar 4.6.3-1ubuntu5 is geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libudev0 (= 147) maar 175-0ubuntu9.1 is geïnstalleerd Depends: libxml2 (= 2.7.4) maar 2.7.8.dfsg-5.1ubuntu4.1 is geïnstalleerd Depends: zlib1g (= 1:1.2.0.2) maar 1:1.2.3.4.dfsg-3ubuntu4 is geïnstalleerd xbmc-bin: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavfilter-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 
is geïnstalleerd And apt-get is reporting: root@LAPTOP:~# apt-get -f install Pakketlijsten worden ingelezen... Klaar Boom van vereisten wordt opgebouwd De status informatie wordt gelezen... Klaar Vereisten worden gecorrigeerd... mislukt. De volgende pakketten hebben niet-voldane vereisten: audacious-plugins : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd libavfilter-extra-2 : Vereisten: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Vereisten: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd libavformat-extra-53 : Vereisten: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Vereisten: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd libk3b6-extracodecs : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd libquicktime2 : Vereisten: libavcodec53 (= 4:0.8~beta2-2) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8~beta2-2) maar het is niet geïnstalleerd libxine1-ffmpeg : Vereisten: libavcodec53 (= 4:0.7.3-1) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.7.3-1) maar het is niet geïnstalleerd mencoder : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd mplayer : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd vlc : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd vlc-nox : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd xbmc-bin : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd E: Fout, pkgProblemResolver::Resolve maakte scheidingen aan, dit kan veroorzaakt worden door vastgehouden pakketten. E: Kan vereisten niet corrigeren How to proceed?? Thanks in advance, Michiel
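    Since the PPA has already been removed from the sources here, one hedged way forward (a sketch, not taken from the thread) is to downgrade the remaining PPA-versioned libav packages back to whatever versions the official precise archive still offers, letting apt pull libavcodec-extra-53 back in as a dependency:

        apt-cache policy libavcodec-extra-53 libavformat-extra-53 libavutil-extra-51
        sudo apt-get install libavcodec-extra-53=<archive-version> libavformat-extra-53=<archive-version> libavutil-extra-51=<archive-version>
        sudo apt-get -f install

    The <archive-version> placeholders stand for whatever precise versions apt-cache policy reports; alternatively, temporarily re-adding the PPA and running ppa-purge on it achieves the same downgrade in one step.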

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.
    2007
    Set Server Level FILLFACTOR Using T-SQL Script Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.
    Limitation of Online Index Rebuild Operation An online operation means that while it runs the database stays in its normal operational condition, and the processes participating in the online operation do not require exclusive access to the database.
    Get Permissions of My Username / Userlogin on Server / Database A few days ago, I was invited to one of the largest database companies. I was asked to review their database schema and propose changes to it. A special username or user login was created for me, so I could review their database. I was very much interested to know what kind of permissions I was assigned at the server level and the database level. I did not feel like asking the Sr. DBA about permissions.
    Simple Example of WHILE Loop With CONTINUE and BREAK Keywords This question is one of those questions which is very simple and most users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword will stop the while loop and control is moved to the next statement after the while loop. The CONTINUE keyword skips all the statements after its execution and control is sent to the first statement of the while loop.
    Forced Parameterization and Simple Parameterization – T-SQL and SSMS When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is SET to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries.
    2008
    Transaction and Local Variables – Swap Variables – Update All At Once Concept Summary: transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied at one time, together, when the statement is committed. First of all I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect.
    Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis Just a day ago, while I was working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue further let me make very clear that INNER JOIN should be used wherever it can be used, and simulating INNER JOIN using any other JOIN will degrade the performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.
    2009
    Introduction to Business Intelligence – Important Terms & Definitions Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making.
    Difference Between Candidate Keys and Primary Key Candidate Key – A Candidate Key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table. Each Candidate Key can qualify as the Primary Key. Primary Key – A Primary Key is a column or a combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key.
    2010
    Taking Multiple Backup of Database in Single Command – Mirrored Database Backup I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by our client that they were going to take a backup of the database and also make a copy of it at the same time. I expressed that it was surely possible if they were going to use a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. Now this was something not clear to me; I said it was not possible and so I asked them to show me the script.
    Corrupted Backup File and Unsuccessful Restore The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER. There were no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling and I was pretty sure that no one in there wanted that atmosphere any more than I did.
    2011
    TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset.
    DATEDIFF – Accuracy of Various Dateparts If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it into minutes.
    Dedicated Access Control for SQL Server Express Edition http://www.youtube.com/watch?v=1k00z82u4OI
    Book Signing at SQLPASS
    2012
    Who I Am And How I Got Here – True Story as Blog Post If there was a shortcut to success – I want to know. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything which we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – more the merrier.
    Vacation, Travel and Study – A New Concept Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
    Order By Numeric Values Formatted as String We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the later part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example.
    Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion, unless you are a DBA who is responsible for such an instance, the errors faced during installation are pretty rare as well. http://www.youtube.com/watch?v=1k00z82u4OI
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
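    To make the WHILE loop note from the 2007 section above concrete, here is a minimal T-SQL sketch (not from the original articles) showing how CONTINUE jumps back to the loop test while BREAK leaves the loop entirely:

        DECLARE @i INT = 0;
        WHILE @i < 10
        BEGIN
            SET @i = @i + 1;
            IF @i = 3 CONTINUE;   -- skip the PRINT for 3 and go straight back to the WHILE test
            IF @i = 7 BREAK;      -- leave the loop once we reach 7
            PRINT @i;             -- prints 1, 2, 4, 5, 6
        END;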

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • SSIS: Deploying OLAP cubes using C# script tasks and AMO

    - by DrJohn
    As part of the continuing series on Building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Services Management Objects (AMO). OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database and the connection string to the data mart. Although it would be reasonably straight forward to change the content of the XML files programmatically, this adds another set of complication and level of obscurity to the overall process. When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard! SSIS "forgets" assembly references If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2! Sample Package So let's take a look at the sample SSIS package I have provided which can be downloaded from here: DeployOlapCubeExample.zip  Below is a screenshot after a successful run. Connection Managers The package has three connection managers: AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of you OLAP database project once you have clicked the "Build" button in Visual Studio TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name. SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime. When you run the sample, you should see that the script logs its activity to the output screen (see screenshot above). If you configure logging for the package, then these messages will also appear in your SSIS logging. 
Sample Code Walkthrough Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the name of both the target OLAP server and also the name of the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties. We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy. svr.Connect(olapServerName); if (svr.Connected) { // Drop the OLAP database if it already exists Database db = svr.Databases.FindByName(olapDatabaseName); if (db != null) { db.Drop(); } // rest of script } Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chuck of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process. // Start generating the main part of the XMLA command XmlDocument xmlaCommand = new XmlDocument(); xmlaCommand.LoadXml(string.Format("<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'><Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object><DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>", olapDatabaseName));  Next we need to merge two XML files which we can do by simply using setting the InnerXml property of the ObjectDefinition node as follows: // load OLAP Database definition from .asdatabase file identified by connection manager XmlDocument olapCubeDef = new XmlDocument(); olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString); // merge the two XML files by obtain a reference to the ObjectDefinition node oaRootNode.InnerXml = olapCubeDef.InnerXml;   One hurdle I had to overcome was removing detritus from the .asdabase file left by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail. The XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. In comparing the XMLA generated with by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below: foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager)) { node.ParentNode.RemoveChild(node); } Now we need to change the database name in both the ID and Name nodes using code such as: XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager); if (databaseID != null) databaseID.InnerText = olapDatabaseName; Finally we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replace the content of the node with the new name or connection string. XmlNode connectionStringNode = xmlaCommand.SelectSingleNode("//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager); if (connectionStringNode != null) { connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString; } Finally we need to perform the deployment using the Execute XMLA command and check the returned XmlaResultCollection for errors before setting the Dts.TaskResult. 
XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);  // check for errors during deployment foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults) { foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages) { if ((oMessage.GetType().Name == "XmlaError")) { FireError(oMessage.Description); HadError = true; } } } If you are not familiar with XML programming, all this may seem a bit daunting, but persevere as the sample code is pretty short. If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and even to synchronize the database to a front-end server. Personally, I like to keep the deployment and processing separate as the code can become overly complex for support staff. If you want to know more, come see my session at the forthcoming SQLBits conference.
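    For completeness, the optional processing step the article alludes to is essentially a one-liner in AMO. A hedged sketch, reusing the variable names from the sample above:

        // after the XMLA deployment has succeeded, optionally process the new database
        Database deployedDb = svr.Databases.FindByName(olapDatabaseName);
        if (deployedDb != null)
        {
            deployedDb.Process(Microsoft.AnalysisServices.ProcessType.ProcessFull);   // full process of all dimensions and cubes
        }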

    Read the article

  • How can I remove old kernels/install new ones when /boot is full?

    - by Marcel
    I know this question is asked many times before, however with me it is just a bit different I guess. # df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 224G 5.2G 208G 3% / udev 1.9G 4.0K 1.9G 1% /dev tmpfs 777M 260K 777M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/sda2 90M 88M 0 100% /boot /dev/sda6 1.9G 514M 1.3G 29% /tmp My boot partition is full. Current Kernel: # uname -r 3.2.0-35-generic All Kernels: # dpkg --list | grep linux-image ii linux-image-3.2.0-32-generic 3.2.0-32.51 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-34-generic 3.2.0-34.53 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-35-generic 3.2.0-35.55 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-37-generic 3.2.0-37.58 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-38-generic 3.2.0-38.60 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iU linux-image-generic 3.2.0.37.45 Generic Linux kernel image So I thought of removing the 3.2.0.32-generic kernel with: # sudo apt-get purge linux-image-3.2.0-32-generic Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.37.45) but 3.2.0.38.46 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). No success. When I try apt-get -f install it also fails: # apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: linux-headers-3.2.0-34 linux-headers-3.2.0-35 linux-headers-3.2.0-34-generic linux-headers-3.2.0-35-generic Use 'apt-get autoremove' to remove them. The following extra packages will be installed: linux-generic linux-image-generic The following packages will be upgraded: linux-generic linux-image-generic 2 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 5 not fully installed or removed. Need to get 0 B/4,334 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initramfs-tools (0.99ubuntu13.1) ... update-initramfs: deferring update (trigger activated) Setting up linux-image-3.2.0-37-generic (3.2.0-37.58) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-38-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic /boot/vmlinuz-3.2.0-37-generic update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-38-generic (3.2.0-38.60) ... Running depmod. 
update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-37-generic Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-38-generic /boot/vmlinuz-3.2.0-38-generic update-initramfs: Generating /boot/initrd.img-3.2.0-38-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-38-generic with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-38-generic.postinst line 1010. dpkg: error processing linux-image-3.2.0-38-generic (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.2.0-37-generic; however: Package linux-image-3.2.0-37-generic is not configured yet. dpkg: error processing linux-image-generic (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Package linux-image-generic is not configured yet. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured Processing triggers for initramfs-tools ... No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already update-initramfs: Generating /boot/initrd.img-3.2.0-35-generic gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-35-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-37-generic linux-image-3.2.0-38-generic linux-image-generic linux-generic initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Any help would really be appreciated. Update: I did: sudo rm /boot/*-3.2.0-32-generic /boot/*-3.2.0-34-generic After that the following problem with apt-get -f install: root@localhost:/# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: linux-generic The following packages will be upgraded: linux-generic 1 upgraded, 0 newly installed, 0 to remove and 9 not upgraded. 1 not fully installed or removed. Need to get 0 B/1,722 B of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of linux-generic: linux-generic depends on linux-image-generic (= 3.2.0.37.45); however: Version of linux-image-generic on system is 3.2.0.38.46. linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however: Version of linux-headers-generic on system is 3.2.0.38.46. 
dpkg: error processing linux-generic (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: linux-generic E: Sub-process /usr/bin/dpkg returned an error code (1)
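A minimal recovery sketch for this situation, assuming the only remaining problems are the lack of space in /boot for mkinitramfs and the version skew between linux-generic (3.2.0.37.45) and the already-installed 3.2.0.38 headers. Never delete files for the kernel that uname -r reports as running; the NN below is a placeholder to be checked against the dpkg --list output above:

    sudo rm /boot/initrd.img-3.2.0-NN-generic /boot/vmlinuz-3.2.0-NN-generic   # only kernels you no longer boot
    sudo dpkg --configure -a     # lets the half-installed 3.2.0-37/-38 kernels finish now that there is room
    sudo apt-get -f install      # brings linux-generic and linux-image-generic to matching versions
    sudo apt-get purge linux-image-3.2.0-32-generic linux-image-3.2.0-34-generic
    sudo apt-get autoremove      # removes the now-unneeded old header packages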

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today after trying to install and remove BURG and few themes I faced an issue. Now I can't install or remove anything. Here is the output (unfortunately partly in Finnish, I couldn't change language since it also seems to depend on package listings: lassi@lassi-ubuntu:~$ sudo apt-get autoremove Luetaan pakettiluetteloita... Valmis Muodostetaan riippuvuussuhteiden puu Luetaan tilatietoja... Valmis Seuraavat paketit POISTETAAN: burg-theme-fortune burg-theme-gnome burg-theme-picchio 0 päivitetty, 0 uutta asennusta, 3 poistettavaa ja 0 päivittämätöntä. 3 ei asennettu kokonaan tai poistettiin. Toiminnon jälkeen vapautuu 7 180 k t levytilaa. Haluatko jatkaa [K/e]? k (Luetaan tietokantaa... 166462 files and directories currently installed.) Poistetaan pakettia burg-theme-fortune... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-fortune (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-gnome... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-gnome (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-picchio... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-picchio (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Käsittelyssä tapahtui liian monta virhettä: burg-theme-fortune burg-theme-gnome burg-theme-picchio E: Sub-process /usr/bin/dpkg returned an error code (1) Basically what seems to happen is this: It creates the package lists, then tries to remove packet burg-theme-fortune. This fails as update-burg command was not found. Then dpkg reports an error while processing the packet. Same goes with all 3 packages. In the end it claims that there were too many errors, and packages stay installed. I also tried installing burg as it tries to run command update-burg, but appears that it tries to delete these packages always when I try to install or remove or do anything with apt. Any ideas how I could solve this issue? Edit: Here is the output of apt-get install burg (tried installing again to get English output) lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg [sudo] password for lassi: Reading package lists... Done Building dependency tree Reading state information... Done burg is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/6169 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 167497 files and directories currently installed.) Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-fortune ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. 
No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-gnome 0.5.0-1 (using .../burg-theme-gnome_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-gnome ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-picchio 0.5.0-1 (using .../burg-theme-picchio_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-picchio ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) lassi@lassi-ubuntu:~$
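Both failures above come from the themes' post-removal scripts (first update-burg was missing, then burg-probe chokes on the absent /boot/burg/locale), so dpkg can never finish removing the packages. A sketch of the usual last-resort workaround, assuming the goal is simply to get rid of them and you accept skipping whatever cleanup those scripts would have done:

    # overwrite the failing post-removal scripts with no-ops, then purge
    for p in burg-theme-fortune burg-theme-gnome burg-theme-picchio; do
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/$p.postrm
        sudo chmod +x /var/lib/dpkg/info/$p.postrm
    done
    sudo apt-get purge burg-theme-fortune burg-theme-gnome burg-theme-picchio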

    Read the article

  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
    Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I’m on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews! I’m going to begin from the beginning, this is my first foray into running something on Azure, so it’s been a bit of a learning curve. But luckily the Neo4J guys have got us started, so let’s download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip OK, the other thing we’ll need is the VS2012 Azure SDK, so let’s get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install). Now, unzip the VS2010 solution and let’s open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln One-way-upgrade? Yer! Ignore the migration report – we don’t care! Let’s build that sucker… Ahhh 14 errors… WindowsAzure does not exist in the namespace ‘Microsoft’ Not a problem right? We’ve installed the SDK, just need to update the references: We can ignore the Test projects, they don’t use Azure, we’re interested in the other projects, so what we’ll do is remove the broken references, and add the correct ones, so expand the references bit of each project: hunt out those yellow exclamation marks, and delete them! You’ll need to add the right ones back in (listed below), when you go to the ‘Add Reference’ dialog make sure you have ‘Assemblies’ and ‘Framework’ selected before you seach (and search for ‘microsoft.win’ to narrow it down) So the references you need for each project are: CollectDiagnosticsData Microsoft.WindowsAzure.Diagnostics Microsoft.WindowsAzure.StorageClient Diversify.WindowsAzure.ServiceRuntime Microsoft.WindowsAzure.CloudDrive Microsoft.WindowsAzure.ServiceRuntime Microsoft.WindowsAzure.StorageClient Right, so let’s build again… Sweet! No errors.   Now we need to setup our Blobs, I’m assuming you are using the most up-to-date Java you happened to have downloaded :) in my case that’s JRE7, and that is located in: C:\Program Files (x86)\Java\jre7 So, zip up that folder into whatever you want to call it, I went with jre7.zip, and stuck it in a temp folder for now. In that same temp folder I also copied the neo4j zip I was using: neo4j-community-1.7.2-windows.zip OK, now, we need to get these into our Blob storage, this is where a lot of stuff becomes unstuck - I didn’t find any applications that helped me use the blob storage, one would crash (because my internet speed is so slow) and the other just didn’t work – sure it looked like it had worked, but when push came to shove it didn’t. So this is how I got my files into Blob (local first): 1. Run the ‘Storage Emulator’ (just search for that in the start menu) 2. That takes a little while to start up so fire up another instance of Visual Studio in the mean time, and create a new Console Application. 3. 
Manage Nuget Packages for that solution and add ‘Windows Azure Storage’ Now you’re set up to add the code: public static void Main() { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount; CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient(); client.Timeout = TimeSpan.FromMinutes(30); CloudBlobContainer container = client.GetContainerReference("neo4j"); //This will create it as well   UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip"); UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip"); }   private static void UploadBlob(CloudBlobContainer container, string blobName, string filename) { CloudBlob blob = container.GetBlobReference(blobName);   using (FileStream fileStream = File.OpenRead(filename)) blob.UploadFromStream(fileStream); } This will upload the files to your local storage account (to switch to an Azure one, you’ll need to create a storage account, and use those credentials when you make your CloudStorageAccount above) To test you’ve got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded. Now that those files are there, we are ready for some final configuration… Right click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project: Click on the ‘Settings’ tab and we’ll need to do some changes – by default, the 1.7.2 edition of neo4J unzips to: neo4j-community-1.7.2 So, we need to update all the ‘neo4j-1.3.M02’ directories to be ‘neo4j-community-1.7.2’, we also need to update the Java runtime location, so we start with this: and end with this: Now, I also changed the Endpoints settings, to be HTTP (from TCP) and to have a port of 7410 (mainly because that’s straight down on the numpad) The last ‘gotcha’ is some hard coded consts, which had me looking for ages, they are in the ‘ConfigSettings’ class of the ‘Neo4jServerHost’ project, and the ones we’re interested in are: Neo4jFileName JavaZipFileName Change those both to what that should be. OK Nearly there (I promise)! Run the ‘Compute Emulator’ (same deal with the Start menu), in your system tray you should have an Azure icon, when the compute emulator is up and running, right click on the icon and select ‘Show Compute Emulator UI’ The last steps! Make sure the ‘Neo4j.Azure.Server’ cloud project is set up as the start project and let’s hit F5 tension mounts, the build takes place (you need to accept the UAC warning) and VS does it’s stuff. If you look at the Compute Emulator UI you’ll see some log stuff (which you’ll need if this goes awry – but it won’t don’t worry!) In a bit, the console and a Java window will pop up: Then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we’ve been assigned (in my case 7411): (If you can’t see it, don’t worry.. press CTRL+A on the emulator, then CTRL+C, copy all the text and paste it into something like Notepad, then just do a Find for ‘port’ you’ll soon see it) Go to your favourite browser, and head to: http://localhost:YOURPORT/ and you should see the WebAdmin! See you on the cloud side hopefully! Chris PS Other gotchas! 
OK, I’ve been caught out a couple of times: I had an instance of Neo4J running as a service on my machine, and the Azure instance wanted to run the https version of the server on the same port the service was already using, so Java would complain that the port was already in use. Also, the first time I converted the project, it didn’t update the version of the Azure library to load in the App.Config of the Neo4jServerHost project, and VS would throw an exception saying it couldn’t find the Azure dll version 1.0.0.0.
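If checking the upload through the browser URL is awkward, a small sketch using the same (old) StorageClient types as the upload code above will list what actually landed in the container; it assumes the storage emulator is running and the Microsoft.WindowsAzure / Microsoft.WindowsAzure.StorageClient namespaces are imported:

    // list everything in the "neo4j" container of the development storage account
    CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
    CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("neo4j");
    foreach (IListBlobItem item in container.ListBlobs())
        Console.WriteLine(item.Uri);   // expect jre7.zip and neo4j-community-1.7.2-windows.zip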

    Read the article

  • How to Identify Which Hardware Component is Failing in Your Computer

    - by Chris Hoffman
    Concluding that your computer has a hardware problem is just the first step. If you’re dealing with a hardware issue and not a software issue, the next step is determining what hardware problem you’re actually dealing with. If you purchased a laptop or pre-built desktop PC and it’s still under warranty, you don’t need to care about this. Have the manufacturer fix the PC for you — figuring it out is their problem. If you’ve built your own PC or you want to fix a computer that’s out of warranty, this is something you’ll need to do on your own. Blue Screen 101: Search for the Error Message This may seem like obvious advice, but searching for information about a blue screen’s error message can help immensely. Most blue screens of death you’ll encounter on modern versions of Windows will likely be caused by hardware failures. The blue screen of death often displays information about the driver that crashed or the type of error it encountered. For example, let’s say you encounter a blue screen that identified “NV4_disp.dll” as the driver that caused the blue screen. A quick Google search will reveal that this is the driver for NVIDIA graphics cards, so you now have somewhere to start. It’s possible that your graphics card is failing if you encounter such an error message. Check Hard Drive SMART Status Hard drives have a built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) feature. The idea is that the hard drive monitors itself and will notice if it starts to fail, providing you with some advance notice before the drive fails completely. This isn’t perfect, so your hard drive may fail even if SMART says everything is okay. If you see any sort of “SMART error” message, your hard drive is failing. You can use SMART analysis tools to view the SMART health status information your hard drives are reporting. Test Your RAM RAM failure can result in a variety of problems. If the computer writes data to RAM and the RAM returns different data because it’s malfunctioning, you may see application crashes, blue screens, and file system corruption. To test your memory and see if it’s working properly, use Windows’ built-in Memory Diagnostic tool. The Memory Diagnostic tool will write data to every sector of your RAM and read it back afterwards, ensuring that all your RAM is working properly. Check Heat Levels How hot is it inside your computer? Overheating can result in blue screens, crashes, and abrupt shutdowns. Your computer may be overheating because you’re in a very hot location, it’s ventilated poorly, a fan has stopped inside your computer, or it’s full of dust. Your computer monitors its own internal temperatures and you can access this information. It’s generally available in your computer’s BIOS, but you can also view it with system information utilities such as SpeedFan or Speccy. Check your computer’s recommended temperature level and ensure it’s within the appropriate range. If your computer is overheating, you may see problems only when you’re doing something demanding, such as playing a game that stresses your CPU and graphics card. Be sure to keep an eye on how hot your computer gets when it performs these demanding tasks, not only when it’s idle. Stress Test Your CPU You can use a utility like Prime95 to stress test your CPU. Such a utility will force your computer’s CPU to perform calculations without allowing it to rest, working it hard and generating heat. If your CPU is becoming too hot, you’ll start to see errors or system crashes.
Overclockers use Prime95 to stress test their overclock settings — if Prime95 experiences errors, they throttle back on their overclocks to ensure the CPU runs cooler and more stable. It’s a good way to check if your CPU is stable under load. Stress Test Your Graphics Card Your graphics card can also be stress tested. For example, if your graphics driver crashes while playing games, the games themselves crash, or you see odd graphical corruption, you can run a graphics benchmark utility like 3DMark. The benchmark will stress your graphics card and, if it’s overheating or failing under load, you’ll see graphical problems, crashes, or blue screens while running the benchmark. If the benchmark seems to work fine but you have issues playing a certain game, it may just be a problem with that game. Swap it Out Not every hardware problem is easy to diagnose. If you have a bad motherboard or power supply, their problems may only manifest through occasional odd issues with other components. It’s hard to tell if these components are causing problems unless you replace them completely. Ultimately, the best way to determine whether a component is faulty is to swap it out. For example, if you think your graphics card may be causing your computer to blue screen, pull the graphics card out of your computer and swap in a new graphics card. If everything is working well, it’s likely that your previous graphics card was bad. This isn’t easy for people who don’t have boxes of components sitting around, but it’s the ideal way to troubleshoot. Troubleshooting is all about trial and error, and swapping components out allows you to pin down which component is actually causing the problem through a process of elimination. This isn’t a complete guide to everything that could likely go wrong and how to identify it — someone could write a full textbook on identifying failing components and still not cover everything. But the tips above should give you some places to start dealing with the more common problems. Image Credit: Justin Marty on Flickr     
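One small, Windows-native check worth adding to the SMART section: WMI exposes each drive’s self-reported health, so from an elevated Command Prompt you can ask for it directly before installing any third-party tool (wmic ships with Windows 7/8; the status values come from the Win32_DiskDrive class):

    wmic diskdrive get model,status
    REM "OK" means the drive is not reporting a SMART failure; anything else,
    REM especially "Pred Fail", means back up now and run the vendor's diagnostics.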

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties, this ain't that.It is normal for me to do the first os install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took like a fish to water.  I used a low end INTEL motherboard dp55w I gathered on the cheap, an 1157 i5 from the used bin a pair of 6 gig ddr3 sticks, a rosewell 550 watt power supply a cheap used twenty buck sub 200g wd sata drive, a half working dvd burner and an asus fanless nvidia vid card, not a great one but Sub 50.00 on newey eggey...I did have to hunt the ms forums for a key and of course to activate the thing, if dos would of needed this outmoded ritual, we would still be on cpm and osborne would be a household name, of course little do people know that this ritual was common as far back as the seventies on att unix installs....not, but it was possible, I used to joke about when I ran a bbs, what hell would of been wrought had dos 3.2 machines been required to dial into my bbs to send fido mail to ms and wait for an acknowledgement.  All in all the thing was pushing a seven on the ms richter scale, not including the vid card, sadly it came in at just a tad over three....I wanted to evaluate it for a possible replacement on critical machines that in the past went down due to a vid card fan failure....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid, I will at least keep it on the maybe list for those, MTBF is a very important factor, some big box stores should put percentage of failure rate within 24 month estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550w can be iffy.A few neat eye candy improvements over the earlier windows is nice, the metro screen is nice, anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though.  Lucky me, I have been using windows since day one, I still have a copy of win 2.0 (and every other version) for no good reason.  Still the old ix collection of disks is much larger, recompiling any kernal is another silly ritual, same machine, different day, same recompile...argh. Rh is my all time fav, mandrake was always missing something, like it rewrote the init file or something, novell is ok as long as you stay on the beaten path and of course ubuntu normally recompiles with the same errors consistantly....makes life easy that way....no errors on windows eight, just a screen that did not match the installed hardware, natuarally I alt tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin and I was browsing the harddrive, nothing stunning there, I like that, means I can find stuff. Only I can't find what I want, the start button....the start menu is that first screen for touch tablets. No biggie for useruser, that is where they will want to be, I can see that. Admins won't want to be there, it is easy enough to get the control panel a bazzilion other ways though, just not the start button. (see a pattern here?). 
Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love seven and I'll love eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions said "oh I get it, so every piece you put in there is basically a hundred bucks, right?)...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past october till I get a real touch screen but I did pick up a pair of cheap tatungs so I can try the NEW main start screen, I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on mars.  Ok.  fine, they are way smallish, and I don't expect multitouch to work but we are talking a few percent of a new 21 inch viewsonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line with all the pads and droids in the world, it is more of a catch up move at first glance.  Not something ms is used to.  An app store?  I can see ms's motivation, the others have it.  I gather there will not be gadgets there, go ahead and see what ms did  to the once populated gadget page...go ahead, google gadgets and take a gander, used to hundreds of gadgets, they are already gone.  They replaced gadgets?  sort of, I'll drop that, it's a bit of a sore point for me.  More of interest was what happened when I downloaded stuff off codeplex and some other normal programs that I like, like orbitron, top o' my list!!...cardware it is...anyways, click on the exe, get a screen, normal for windows, this one indicated that I was not running a normal windows program and had a button for  exit the install, naw, I hit details, a hidden run program anyways came into view....great, my path to the normal windows has detected a program tha.....yea ok, acl is on, fine, moving along I got orbitron installed in record time and was tracking the iss on the newest Microsoft OS, beta of course, felt like the first time I setup bsd all those year ago...FUN!!...I suppose I gotta start to think about budgeting for the real os when it comes out in october, by then I should have a rasberry pi and be done with fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being hearded to an app store, don't like that on anything, we are americans and want real choices not marketed hype, lest you are younger with opm (other peoples money).   This os would be neat on a zune, but I suspect the zune is a gonner, I am rooting for microsoft, after all their default password is not admin anymore, nor alpine,  it's blank. Others force a password, my first fawn password was so long I could not even log into it with the password in front of me, who the heck uses %$# anyways, and if I was writing a brute force attack what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non qualified number, not a real clock speed)....AI is where it will be before too long, MS is on that path, perhaps soon someone will sit down and write an app for the kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! sure. Blink,dammit,blink,dammit...... OPM no doubt.I like windows eight, we are moving forwards, better keep a close eye on ubuntu.  
The real clinch comes when open source becomes paid source......don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already.  more to come.......

    Read the article

  • krb5-multidev, libk5crypto3, libk5crypto3:i386 package dependency

    - by TDalton
    Using Ubuntu 12.04 Asus U43F I can no longer update, install or remove packages via software center because of package dependency errors. sudo apt-get install -f reads the following: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libunity6 libqapt-runtime libboost-program-options1.46.1 akonadi-backend-mysql libqapt1 shared-desktop-ontologies libntrack0 ntrack-module-libnl-0 libntrack-qt4-1 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: krb5-multidev libk5crypto3:i386 libkrb5-dev Suggested packages: krb5-doc krb5-doc:i386 krb5-user:i386 The following packages will be upgraded: krb5-multidev libk5crypto3:i386 libkrb5-dev 3 upgraded, 0 newly installed, 0 to remove and 325 not upgraded. 11 not fully installed or removed. Need to get 0 B/213 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: error processing libk5crypto3 (--configure): libk5crypto3:amd64 1.10+dfsg~beta1-2ubuntu0.3 cannot be configured because libk5crypto3:i386 is in a different version (1.10+dfsg~beta1-2ubuntu0.1) dpkg: error processing libk5crypto3:i386 (--configure): libk5crypto3:i386 1.10+dfsg~beta1-2ubuntu0.1 cannot be configured because libk5crypto3:amd64 is in a different version (1.10+dfsg~beta1-2ubuntu0.3) dpkg: dependency problems prevent configuration of libkrb5-3: libkrb5-3 depends on libk5crypto3 (>= 1.9+dfsg~beta1); however:No apport report written because MaxReports is reached already Package libk5crypto3 is not configured yet. dpkg: error processing libkrb5-3 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgssapi-krb5-2: libgssapi-krb5-2 depends on libk5crypto3 (>= 1.8+dfsg); however: Package libk5crypto3 is not configured yet. libgssapi-krb5-2 depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.3); however: Package libkrb5-3 is not configured yet. dpkg: error processing libgssapi-krb5-2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgssrpc4: libgssrpc4 depends on libgssapi-krb5-2 (>= 1.10+dfsg~); however: Package libgssapi-krb5-2 is not configured yet. dpkg: error processing libgssrpc4 (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libkadm5srv-mit8: libkadm5srv-mit8 depends on libgssapi-krb5-2 (>= 1.6.dfsg.2); however: Package libgssapi-krb5-2 is not configured yet. libkadm5srv-mit8 depends on libgssrpc4 (>= 1.6.dfsg.2); however: Package libgssrpc4 is not configured yet. libkadm5srv-mit8 depends on libk5crypto3 (>= 1.6.dfsg.2); however: Package libk5crypto3 is not configured yet. libkadm5srv-mit8 depends on libkrb5-3 (>= 1.9+dfsg~beta1); however: Package libkrb5-3 is not configured yet. dpkg: error processing libkadm5srv-mit8 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libkadm5clnt-mit8: libkadm5clnt-mit8 depends on libgssapi-krb5-2 (>= 1.10+dfsg~); however: Package libgssapi-krb5-2 is not configured yet. 
libkadm5clnt-mit8 depends on libgssrpc4 (>= 1.6.dfsg.2); however: Package libgssrpc4 is not configured yet.No apport report written because MaxReports is reached already libkadm5clnt-mit8 depends on libk5crypto3 (>= 1.6.dfsg.2); however: Package libk5crypto3 is not configured yet. libkadm5clnt-mit8 depends on libkrb5-3 (>= 1.8+dfsg); however: Package libkrb5-3 is not configured yet. dpkg: error processing libkadm5clnt-mit8 (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of krb5-multidev: krb5-multidev depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libkrb5-3 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libk5crypto3 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libk5crypto3 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libgssapi-krb5-2 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libgssapi-krb5-2 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libgssrpc4 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libgssrpc4 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libkadm5srv-mit8 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libkadm5srv-mit8 on system is 1.10+dfsg~beta1-2ubuntu0.3. krb5-multidev depends on libkadm5clnt-mit8 (= 1.10+dfsg~beta1-2ubuntu0.2); however: Version of libkadm5clnt-mit8 on system is 1.10+dfsg~beta1-2ubuntu0.3. dpkg: error processing krb5-multidev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libkrb5-dev: libkrb5-dev depends on krb5-multidev (= 1.10+dfsg~beta1-2ubuntu0.2); however: Package krb5-multidev is not configured yet. dpkg: error processing libkrb5-dev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of libkrb5-3:i386: libkrb5-3:i386 depends on libk5crypto3 (>= 1.9+dfsg~beta1); however: Package libk5crypto3:i386 is not configured yet. dpkg: error processing libkrb5-3:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libgssapi-krb5-2:i386: libgssapi-krb5-2:i386 depends on libk5crypto3 (>= 1.8+dfsg); however: Package libk5crypto3:i386 is not configured yet. libgssapi-krb5-2:i386 depends on libkrb5-3 (= 1.10+dfsg~beta1-2ubuntu0.3); however: Package libkrb5-3:i386 is not configured yet. 
dpkg: error processing libgssapi-krb5-2:i386 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: libk5crypto3 libk5crypto3:i386 libkrb5-3 libgssapi-krb5-2 libgssrpc4 libkadm5srv-mit8 libkadm5clnt-mit8 krb5-multidev libkrb5-dev libkrb5-3:i386 libgssapi-krb5-2:i386 E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried to fix the broken dependencies via synaptic package manager, but it returns with an error: E: libk5crypto3: libk5crypto3:amd64 1.10+dfsg~beta1-2ubuntu0.3 cannot be configured because libk5crypto3 E: libkrb5-3: dependency problems - leaving unconfigured E: libgssapi-krb5-2: dependency problems - leaving unconfigured E: libgssrpc4: dependency problems - leaving unconfigured E: libkadm5srv-mit8: dependency problems - leaving unconfigured E: libkadm5clnt-mit8: dependency problems - leaving unconfigured E: krb5-multidev: dependency problems - leaving unconfigured E: libkrb5-dev: dependency problems - leaving unconfigured I haven't gotten help from ubuntuforums.org on this issue. Please help, obi-wan
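The root of the whole chain is the multiarch rule that libk5crypto3 must be installed at exactly the same version for amd64 and i386, and here they have drifted apart (2ubuntu0.3 vs 2ubuntu0.1). A sketch of the usual fix, assuming the 2ubuntu0.3 build is still on the mirrors for i386; if not, the matching .deb can be fetched from Launchpad and installed with dpkg -i:

    sudo apt-get update
    sudo apt-get install libk5crypto3:i386=1.10+dfsg~beta1-2ubuntu0.3
    sudo apt-get -f install      # lets krb5-multidev and friends catch up to the 0.3 versions
    sudo dpkg --configure -a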

    Read the article

  • laptop crashed: why?

    - by sds
    my linux (ubuntu 12.04) laptop crashed, and I am trying to figure out why. # last sds pts/4 :0 Tue Sep 4 10:01 still logged in sds pts/3 :0 Tue Sep 4 10:00 still logged in reboot system boot 3.2.0-29-generic Tue Sep 4 09:43 - 11:23 (01:40) sds pts/8 :0 Mon Sep 3 14:23 - crash (19:19) this seems to indicate a crash at 09:42 (= 14:23+19:19). as per another question, I looked at /var/log: auth.log: Sep 4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root Sep 4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0) no messages file syslog: Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. kern.log: Sep 4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal Sep 4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal Sep 4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started. Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpuset Sep 4 09:43:16 t520sds kernel: [ 0.000000] Initializing cgroup subsys cpu I had a computation running until 9:24, but the system crashed 18 minutes later! kern.log has many pages of these: Sep 4 09:43:16 t520sds kernel: [ 0.000000] total RAM covered: 8086M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 38M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] gran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: 0G Sep 4 09:43:16 t520sds kernel: [ 0.000000] *BAD*gran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1G does this mean that my RAM is bad?! 
it also says Sep 4 09:43:16 t520sds kernel: [ 2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem Sep 4 09:43:16 t520sds kernel: [ 2.944126] EXT4-fs (sda1): write access will be enabled during recovery Sep 4 09:43:16 t520sds kernel: [ 3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400 Sep 4 09:43:16 t520sds kernel: [ 8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs Sep 4 09:43:16 t520sds kernel: [ 8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984 ... Sep 4 09:43:16 t520sds kernel: [ 9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343 Sep 4 09:43:16 t520sds kernel: [ 9.343270] EXT4-fs (sda1): 56 orphan inodes deleted Sep 4 09:43:16 t520sds kernel: [ 9.343271] EXT4-fs (sda1): recovery complete Sep 4 09:43:16 t520sds kernel: [ 9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors: smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Momentus 7200.4 Device Model: ST9500420AS Serial Number: 5VJE81YK LU WWN Device Id: 5 000c50 0440defe3 Firmware Version: 0003LVM1 User Capacity: 500,107,862,016 bytes [500 GB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Sep 10 16:40:04 2012 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 0) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 109) minutes. Conveyance self-test routine recommended polling time: ( 2) minutes. SCT capabilities: (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 117 099 034 Pre-fail Always - 162843537 3 Spin_Up_Time 0x0003 100 100 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 571 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 069 060 030 Pre-fail Always - 17210154023 9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 174362787320258 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 571 184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 1 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 061 043 045 Old_age Always In_the_past 39 (0 11 44 26) 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 84 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 20 193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 2434 194 Temperature_Celsius 0x0022 039 057 000 Old_age Always - 39 (0 15 0 0) 195 Hardware_ECC_Recovered 0x001a 041 041 000 Old_age Always - 162843537 196 Reallocated_Event_Count 0x000f 095 095 030 Pre-fail Always - 4540 (61955, 0) 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0 254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 4545 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Googling for the messages proved inconclusive, I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?
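Two of the scary-looking messages are probably red herrings: the gran_size/chunk_size "*BAD*" table is printed by the kernel's MTRR cleanup code on every boot and says nothing about the RAM, and the orphan-inode cleanup only means the filesystem was not unmounted cleanly, which is expected after a crash. A sketch of the usual next checks, assuming smartmontools and lm-sensors are installed and memtest86+ is still on the GRUB menu:

    sudo smartctl -t long /dev/sda     # full offline self-test; the log above already shows one clean pass
    sensors                            # watch CPU temperatures while repeating the heavy computation
    # then reboot, hold Shift to get the GRUB menu and run memtest86+ for at least one full pass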

    Read the article

  • Templated << friend not working when in interrelationship with other templated union types

    - by Dwight
    While working on my basic vector library, I've been trying to use a nice syntax for swizzle-based printing. The problem occurs when attempting to print a swizzle of a different dimension than the vector in question. In GCC 4.0, I originally had the friend << overloaded functions (with a body, even though it duplicated code) for every dimension in each vector, which caused the code to work, even if the non-native dimension code never actually was called. This failed in GCC 4.2. I recently realized (silly me) that only the function declaration was needed, not the body of the code, so I did that. Now I get the same warning on both GCC 4.0 and 4.2: LINE 50 warning: friend declaration 'std::ostream& operator<<(std::ostream&, const VECTOR3<TYPE>&)' declares a non-template function Plus the five identical warnings more for the other function declarations. The below example code shows off exactly what's going on and has all code necessary to reproduce the problem. #include <iostream> // cout, endl #include <sstream> // ostream, ostringstream, string using std::cout; using std::endl; using std::string; using std::ostream; // Predefines template <typename TYPE> union VECTOR2; template <typename TYPE> union VECTOR3; template <typename TYPE> union VECTOR4; typedef VECTOR2<float> vec2; typedef VECTOR3<float> vec3; typedef VECTOR4<float> vec4; template <typename TYPE> union VECTOR2 { private: struct { TYPE x, y; } v; struct s1 { protected: TYPE x, y; }; struct s2 { protected: TYPE x, y; }; struct s3 { protected: TYPE x, y; }; struct s4 { protected: TYPE x, y; }; struct X : s1 { operator TYPE() const { return s1::x; } }; struct XX : s2 { operator VECTOR2<TYPE>() const { return VECTOR2<TYPE>(s2::x, s2::x); } }; struct XXX : s3 { operator VECTOR3<TYPE>() const { return VECTOR3<TYPE>(s3::x, s3::x, s3::x); } }; struct XXXX : s4 { operator VECTOR4<TYPE>() const { return VECTOR4<TYPE>(s4::x, s4::x, s4::x, s4::x); } }; public: VECTOR2() {} VECTOR2(const TYPE& x, const TYPE& y) { v.x = x; v.y = y; } X x; XX xx; XXX xxx; XXXX xxxx; // Overload for cout friend ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString) { os << "(" << toString.v.x << ", " << toString.v.y << ")"; return os; } friend ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString); friend ostream& operator<<(ostream& os, const VECTOR4<TYPE>& toString); }; template <typename TYPE> union VECTOR3 { private: struct { TYPE x, y, z; } v; struct s1 { protected: TYPE x, y, z; }; struct s2 { protected: TYPE x, y, z; }; struct s3 { protected: TYPE x, y, z; }; struct s4 { protected: TYPE x, y, z; }; struct X : s1 { operator TYPE() const { return s1::x; } }; struct XX : s2 { operator VECTOR2<TYPE>() const { return VECTOR2<TYPE>(s2::x, s2::x); } }; struct XXX : s3 { operator VECTOR3<TYPE>() const { return VECTOR3<TYPE>(s3::x, s3::x, s3::x); } }; struct XXXX : s4 { operator VECTOR4<TYPE>() const { return VECTOR4<TYPE>(s4::x, s4::x, s4::x, s4::x); } }; public: VECTOR3() {} VECTOR3(const TYPE& x, const TYPE& y, const TYPE& z) { v.x = x; v.y = y; v.z = z; } X x; XX xx; XXX xxx; XXXX xxxx; // Overload for cout friend ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString) { os << "(" << toString.v.x << ", " << toString.v.y << ", " << toString.v.z << ")"; return os; } friend ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString); friend ostream& operator<<(ostream& os, const VECTOR4<TYPE>& toString); }; template <typename TYPE> union VECTOR4 { private: struct { TYPE x, y, z, w; } v; struct s1 { protected: TYPE x, y, z, w; }; 
struct s2 { protected: TYPE x, y, z, w; }; struct s3 { protected: TYPE x, y, z, w; }; struct s4 { protected: TYPE x, y, z, w; }; struct X : s1 { operator TYPE() const { return s1::x; } }; struct XX : s2 { operator VECTOR2<TYPE>() const { return VECTOR2<TYPE>(s2::x, s2::x); } }; struct XXX : s3 { operator VECTOR3<TYPE>() const { return VECTOR3<TYPE>(s3::x, s3::x, s3::x); } }; struct XXXX : s4 { operator VECTOR4<TYPE>() const { return VECTOR4<TYPE>(s4::x, s4::x, s4::x, s4::x); } }; public: VECTOR4() {} VECTOR4(const TYPE& x, const TYPE& y, const TYPE& z, const TYPE& w) { v.x = x; v.y = y; v.z = z; v.w = w; } X x; XX xx; XXX xxx; XXXX xxxx; // Overload for cout friend ostream& operator<<(ostream& os, const VECTOR4& toString) { os << "(" << toString.v.x << ", " << toString.v.y << ", " << toString.v.z << ", " << toString.v.w << ")"; return os; } friend ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString); friend ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString); }; // Test code int main (int argc, char * const argv[]) { vec2 my2dVector(1, 2); cout << my2dVector.x << endl; cout << my2dVector.xx << endl; cout << my2dVector.xxx << endl; cout << my2dVector.xxxx << endl; vec3 my3dVector(3, 4, 5); cout << my3dVector.x << endl; cout << my3dVector.xx << endl; cout << my3dVector.xxx << endl; cout << my3dVector.xxxx << endl; vec4 my4dVector(6, 7, 8, 9); cout << my4dVector.x << endl; cout << my4dVector.xx << endl; cout << my4dVector.xxx << endl; cout << my4dVector.xxxx << endl; return 0; } The code WORKS and produces the correct output, but I prefer warning free code whenever possible. I followed the advice the compiler gave me (summarized here and described by forums and StackOverflow as the answer to this warning) and added the two things that supposedly tells the compiler what's going on. That is, I added the function definitions as non-friends after the predefinitions of the templated unions: template <typename TYPE> ostream& operator<<(ostream& os, const VECTOR2<TYPE>& toString); template <typename TYPE> ostream& operator<<(ostream& os, const VECTOR3<TYPE>& toString); template <typename TYPE> ostream& operator<<(ostream& os, const VECTOR4<TYPE>& toString); And, to each friend function that causes the issue, I added the <> after the function name, such as for VECTOR2's case: friend ostream& operator<< <> (ostream& os, const VECTOR3<TYPE>& toString); friend ostream& operator<< <> (ostream& os, const VECTOR4<TYPE>& toString); However, doing so leads to errors, such as: LINE 139: error: no match for 'operator<<' in 'std::cout << my2dVector.VECTOR2<float>::xxx' What's going on? Is it something related to how these templated union class-like structures are interrelated, or is it due to the unions themselves? Update After rethinking the issues involved and listening to the various suggestions of Potatoswatter, I found the final solution. Unlike just about every single cout overload example on the internet, I don't need access to the private member information, but can use the public interface to do what I wish. So, I make a non-friend overload functions that are inline for the swizzle parts that call the real friend overload functions. This bypasses the issues the compiler has with templated friend functions. I've added to the latest version of my project. It now works on both versions of GCC I tried with no warnings. 
The code in question looks like this: template <typename SWIZZLE> inline typename EnableIf< Is2D< typename SWIZZLE::PARENT >, ostream >::type& operator<<(ostream& os, const SWIZZLE& printVector) { os << (typename SWIZZLE::PARENT(printVector)); return os; } template <typename SWIZZLE> inline typename EnableIf< Is3D< typename SWIZZLE::PARENT >, ostream >::type& operator<<(ostream& os, const SWIZZLE& printVector) { os << (typename SWIZZLE::PARENT(printVector)); return os; } template <typename SWIZZLE> inline typename EnableIf< Is4D< typename SWIZZLE::PARENT >, ostream >::type& operator<<(ostream& os, const SWIZZLE& printVector) { os << (typename SWIZZLE::PARENT(printVector)); return os; }

    Read the article

  • Sudo apt-get update -f does not work?

    - by BrianO09
    I am a bit of a noob with Linux. Several months ago I updated to Ubuntu 12.04, then stopped using Ubuntu for a while for a variety of reasons. Now I would like to go back to it, but I have a couple of problems. For one thing, the Software Center will simply not load. I click on the icon, the program comes up, but it never loads, and when I close it I get a "window not responding" message. While reading some threads to fix this issue, the common theme was that the main solution was to update by running: sudo apt-get install --reinstall software-center However, when I run that, I get the following (long): bcoleary@ubuntu:~$ sudo apt-get install --reinstall software-center [sudo] password for bcoleary: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: kdelibs-bin : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed kdelibs5-plugins : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkntlm4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed kdoctools : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkcmutils4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkde3support4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkpty4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkdeclarative5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkdewebkit5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkdnssd4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkemoticons4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkfile4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkhtml5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkidletime4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkio5 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to 
be installed libkjsembed4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkjsapi4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkmediaplayer4 : Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libknewstuff3-4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libknotifyconfig4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkparts4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libkrosscore4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libktexteditor4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libnepomuk4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libnepomukquery4a : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libnepomukutils4 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed libplasma3 : Depends: libkdecore5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libkdeui5 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed Depends: libthreadweaver4 (= 4:4.8.3-0ubuntu0.1) but 4:4.8.5-0ubuntu0.1 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So the next thing I tried was: sudo apt-get -f install The following has been cut down, but you get the idea: Errors were encountered while processing: libkdeclarative5 libkcmutils4 libnepomuk4 libkio5 libnepomukquery4a libnepomukutils4 libkparts4 libkdewebkit5 libkdnssd4 libknewstuff3-4 libplasma3 libnepomuksync4 libkemoticons4 libkfile4 libktexteditor4 libkhtml5 libkidletime4 libkmediaplayer4 libknotifyconfig4 libnepomukdatamanagement4 libkde3support4 libkjsembed4 libkrosscore4 kdoctools kdelibs-bin libkatepartinterfaces4 katepart kdelibs5-plugins plasma-scriptengine-javascript kde-runtime amarok libkdcraw20 libkgeomap1 libkipi8 libkvkontakte1 kipi-plugins digikam libkonq-common libkonq5abi1 dolphin kde-baseapps-bin kdebase-runtime libkcddb4 kdemultimedia-kio-plugins kdepimlibs-kio-plugins libkonqsidebarplugin4a konqueror konqueror-nsplugins libakonadi-kde4 libakonadi-calendar4 libkabc4 Processing was halted because there were too many errors. E: Sub-process /usr/bin/dpkg returned an error code (1) Basically it said a ton of stuff was missing. Maybe this happened when I upgraded, I am not sure. Is there a way to fix this? And if not what is the best way to un-install and re-install Ubuntu? It is currently dual-booted with Windows 7. If you need anymore info, please let me know. Thank you for helping a beginner! :)
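The Software Center symptom is almost certainly downstream of the broken KDE library set, which is stuck halfway between 4:4.8.3 and 4:4.8.5 (most likely from the interrupted upgrade). A sketch of the usual order of operations, with package names taken from the error output above; expect to iterate if new complaints appear:

    sudo apt-get update
    sudo apt-get install libkdecore5 libkdeui5 libkjsapi4 libkntlm4 libkpty4 libthreadweaver4
    sudo dpkg --configure -a
    sudo apt-get -f install
    sudo apt-get dist-upgrade
    sudo apt-get install --reinstall software-center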

    Read the article

  • Adding multiple data importers support to web applications

    - by DigiMortal
    I’m building a web application for a customer, and one of the requirements is that users must be able to import data in different formats. Today we support XLSX and ODF as import formats, and some other formats are waiting. I wanted to be able to add new importers on the fly so I don’t have to deploy the web application again when I add a new importer or change an existing one. In this posting I will show you how to build generic importer support into your web application.

    Importer interface

    All importers we use must have something in common so we can easily detect them. To keep things simple I will use an interface here.

    public interface IMyImporter
    {
        string[] SupportedFileExtensions { get; }
        ImportResult Import(Stream fileStream, string fileExtension);
    }

    Our interface has the following members:

    SupportedFileExtensions – string array of file extensions that the importer supports. This property helps us find out what import formats are available and which importer to use for a given format.
    Import – method that does the actual importing work. Besides the file, which we pass in as a stream, we also give the file extension so the importer can decide how to handle the file.

    This is enough to get started. When building real importers I am sure you will switch over to an abstract base class.

    Importer class

    Here is a sample importer that imports data from Excel and Word documents. The importer class, with no implementation details, looks like this:

    public class MyOpenXmlImporter : IMyImporter
    {
        public string[] SupportedFileExtensions
        {
            get { return new[] { "xlsx", "docx" }; }
        }

        public ImportResult Import(Stream fileStream, string extension)
        {
            // ...
        }
    }

    Finding supported import formats in web application

    Now we have importers created and it’s time to add them to the web application. Usually we have one page or ASP.NET MVC controller where we need importers. To this page or controller we add the following method, which uses reflection to find all classes that implement our IMyImporter interface.

    private static string[] GetImporterFileExtensions()
    {
        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        var extensions = new Collection<string>();

        foreach (var type in types)
        {
            var instance = (IMyImporter)type.InvokeMember(null,
                           BindingFlags.CreateInstance, null, null, null);

            foreach (var extension in instance.SupportedFileExtensions)
            {
                if (extensions.Contains(extension))
                    continue;

                extensions.Add(extension);
            }
        }

        return extensions.ToArray();
    }

    This code doesn’t look nice and is far from optimal, but it works for us for now. It is possible to improve the performance of the web application if we cache the extensions and their corresponding types in some static dictionary. We only have to fill it once, because our application is restarted whenever something changes in the bin folder.
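    The posting does not show what such a cache could look like. Below is one possible sketch; the Lazy field, the helper method names and the use of Activator.CreateInstance are illustrative assumptions rather than part of the original code, and the field is assumed to live in the same page or controller class as the methods above.

    // Hypothetical cache: maps a file extension to the importer type that handles it.
    // Filled lazily once per application start; new importers only show up after the
    // bin folder changes, which restarts the application anyway.
    // Requires: using System; using System.Collections.Generic; using System.Linq;
    private static readonly Lazy<Dictionary<string, Type>> ImporterTypesByExtension =
        new Lazy<Dictionary<string, Type>>(() =>
        {
            var map = new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

            var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                        from t in a.GetTypes()
                        where !t.IsAbstract && t.GetInterfaces().Contains(typeof(IMyImporter))
                        select t;

            foreach (var type in types)
            {
                var instance = (IMyImporter)Activator.CreateInstance(type);

                foreach (var extension in instance.SupportedFileExtensions)
                {
                    if (!map.ContainsKey(extension))
                        map.Add(extension, type);
                }
            }

            return map;
        });

    // Cached counterpart of GetImporterFileExtensions().
    private static string[] GetCachedImporterFileExtensions()
    {
        return ImporterTypesByExtension.Value.Keys.ToArray();
    }

    // Cached counterpart of GetImporterForExtension(): creates a new importer
    // instance from the cached type, or returns null for unsupported extensions.
    private static IMyImporter GetCachedImporterForExtension(string extension)
    {
        Type importerType;
        return ImporterTypesByExtension.Value.TryGetValue(extension, out importerType)
            ? (IMyImporter)Activator.CreateInstance(importerType)
            : null;
    }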
    Finding importer by extension

    When the user uploads a file we need to detect its extension and find the importer that supports that extension. We add another method to our page or controller that uses reflection to return an importer instance, or null if the extension is not supported.

    private static IMyImporter GetImporterForExtension(string extensionToFind)
    {
        var types = from a in AppDomain.CurrentDomain.GetAssemblies()
                    from t in a.GetTypes()
                    where t.GetInterfaces().Contains(typeof(IMyImporter))
                    select t;

        foreach (var type in types)
        {
            var instance = (IMyImporter)type.InvokeMember(null,
                           BindingFlags.CreateInstance, null, null, null);

            if (instance.SupportedFileExtensions.Contains(extensionToFind))
            {
                return instance;
            }
        }

        return null;
    }

    Here is an example ASP.NET MVC controller action that accepts the uploaded file, finds an importer that can handle it, and imports the data. Again, this is sample code I kept minimal to better illustrate how things work.

    public ActionResult Import(MyImporterModel model)
    {
        var file = Request.Files[0];
        var extension = Path.GetExtension(file.FileName).ToLower();

        // Path.GetExtension returns ".xlsx"; the importers expect "xlsx".
        var importer = GetImporterForExtension(extension.Substring(1));
        var result = importer.Import(file.InputStream, extension);

        if (result.Errors.Count > 0)
        {
            foreach (var error in result.Errors)
                ModelState.AddModelError("file", error);

            return Import(); // assumes a parameterless Import() GET action exists (not shown in this excerpt)
        }

        return RedirectToAction("Index");
    }

    Conclusion

    That’s it. Using a couple of ugly methods and one simple interface we were able to add importer support to our web application. The example code here is not perfect, but it works. It is possible to cache the mappings between file extensions and importer types in some static variable, because a change in these mappings means that something changed in the bin folder of the web application, and the web application is restarted in that case anyway.
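    One type the posting never defines is ImportResult, although both the interface and the controller action depend on it. Its real shape is not given, so the following is only a minimal guess that would satisfy the code above; the Errors and Rows members are assumptions.

    // Hypothetical minimal ImportResult; the original posting never shows this type,
    // so the members below are assumptions based on how the controller action uses it.
    // Requires: using System.Collections.Generic;
    public class ImportResult
    {
        public ImportResult()
        {
            Errors = new List<string>();
            Rows = new List<IDictionary<string, object>>();
        }

        // Error messages collected during the import; the controller action above
        // copies these into ModelState.
        public IList<string> Errors { get; private set; }

        // Imported data in a format-neutral shape: one dictionary per row, keyed by
        // column name. Purely illustrative; a real importer might use typed models.
        public IList<IDictionary<string, object>> Rows { get; private set; }
    }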

    Read the article
