Search Results

Search found 1682 results on 68 pages for 'tron legacy'.

  • Kubuntu Muon package manager stops working

    - by aseed
    I have Kubuntu. Today, after an update, the Muon package manager got stuck at 64%, so I closed it. Since then it hangs whenever I try to update, install, or reinstall software. How can I reinstall Muon from the terminal? I tried sudo apt-get install muon and I get this message:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      muon is already the newest version.
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       libopencv-dev : Depends: libopencv-core-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-ml-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-imgproc-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-video-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-objdetect-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-gpu-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-highgui-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-calib3d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-flann-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-features2d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-legacy-dev (= 2.3.1-4ppa1) but it is not going to be installed
                       Depends: libopencv-contrib-dev (= 2.3.1-4ppa1) but it is not going to be installed
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So what should I do? I need to reinstall it because it is not working. Running sudo dpkg --configure -a gives:

      dpkg: dependency problems prevent configuration of libopencv-dev:
       libopencv-dev depends on libopencv-core-dev (= 2.3.1-4ppa1); however:
        Package libopencv-core-dev is not installed.
       libopencv-dev depends on libopencv-ml-dev (= 2.3.1-4ppa1); however:
        Package libopencv-ml-dev is not installed.
       libopencv-dev depends on libopencv-imgproc-dev (= 2.3.1-4ppa1); however:
        Package libopencv-imgproc-dev is not installed.
       libopencv-dev depends on libopencv-video-dev (= 2.3.1-4ppa1); however:
        Package libopencv-video-dev is not installed.
       libopencv-dev depends on libopencv-objdetect-dev (= 2.3.1-4ppa1); however:
        Package libopencv-objdetect-dev is not installed.
       libopencv-dev depends on libopencv-gpu-dev (= 2.3.1-4ppa1); however:
        Package libopencv-gpu-dev is not installed.
       libopencv-dev depends on libopencv-highgui-dev (= 2.3.1-4ppa1); however:
        Package libopencv-highgui-dev is not installed.
       libopencv-dev depends on libopencv-calib3d-dev (= 2.3.1-4ppa1); however:
        Package libopencv-calib3d-dev is not installed.
       libopencv-dev depends on libopencv-flann-dev (= 2.3.1-4ppa1); however:
        Package libopencv-flann-dev is not installed.
       libopencv-dev depends on libopencv-features2d-dev (= 2.3.1-4ppa1); however:
        Package libopencv-features2d-dev is not installed.
       libopencv-dev depends on libopencv-legacy-dev (= 2.3.1-4ppa1); however:
        Package libopencv-legacy-dev is not installed.
       libopencv-dev depends on libopencv-contrib-dev (= 2.3.1-4ppa1); however:
        Package libopencv-contrib-dev is not installed.
      dpkg: error processing libopencv-dev (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       libopencv-dev

    sudo apt-get install -f and sudo dpkg --configure -a still leave me with the same problem, and I think I got into this state because of today's Kubuntu update.
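    A minimal recovery sketch, on the assumption that the broken libopencv-* packages come from a third-party PPA (the "4ppa1" version suffix suggests so). The PPA address below is hypothetical; substitute whatever is actually listed under /etc/apt/sources.list.d/:

      sudo apt-get -f install                  # let apt try to repair the dependency chain first
      sudo apt-get remove libopencv-dev        # if that fails, drop the half-configured package
      sudo apt-get install ppa-purge
      sudo ppa-purge ppa:someone/opencv        # hypothetical PPA name: reverts its packages to the Ubuntu archive versions
      sudo apt-get update
      sudo apt-get install --reinstall muon    # reinstall Muon once apt/dpkg are consistent again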

    Read the article

  • Smooth Sailing or Rough Waters: Navigating Policy Administration Modernization

    - by helen.pitts(at)oracle.com
    Life insurance and annuity carriers continue to recognize the need to modernize their aging policy administration systems, but may be hesitant to move forward because of the inherent risk involved. To help carriers better prepare for what lies ahead, LOMA's Resource Magazine asked Karen Furtado, partner at Strategy Meets Action, to help them chart a course in Navigating Policy Administration Selection, the cover story of this month's issue. The industry analyst and research firm recently asked insurance carriers to name the business drivers for replacing legacy policy administration systems. The top five cited, according to Furtado, centered on:
    - Supporting growth in current lines
    - Improving competitive position
    - Containing and reducing costs
    - Supporting growth in new lines
    - Supporting agent demands and interaction
    It's no surprise that fueling growth, both now and in the future, continues to be a key driver for modernization. Why? Inflexible, hard-coded legacy systems require customization by IT every time a change is required. This in turn impedes a carrier's agility, constraining their ability to quickly adapt to changing regulatory requirements and evolving market demands. It also stymies their ability to quickly bring new products to market or rapidly configure changes to existing ones, and can inhibit how carriers service customers and distribution channels. In the article, Furtado advised carriers to ensure that the policy administration system they are considering is current and modern, with an adaptable user interface and flexible service-oriented architecture. She said carriers should ask themselves, "How much do you need flexibility and agility now and in the future? Does it support the business processes and rules that are needed for you to be able to create that adaptable environment?" Furtado went on to advise that carriers "Connect your strategy to your business and technical capabilities before you make investment choices... You want to enable your organization to transform for the future, not just automate the past." Unlocking High Performance with Policy Administration Transformation was also the topic of a recent LOMA webcast moderated by Ron Clark, editor of LOMA's Resource Magazine.
    The webcast, which featured speakers from Oracle Insurance and Capgemini, focused on how insurers can competitively drive high performance by:
    - Replacing a legacy policy administration system with a modern, flexible platform
    - Optimizing IT and operations costs, creating consistent processes and eliminating resource redundancies
    - Selecting the right partner with the best blend of technology, operational, and consulting capabilities to achieve market leadership
    - Understanding the value of outsourcing closed block operations
    Learn more by clicking here to access this free, one-hour recorded webcast. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So: Ubuntu 12.04, latest Likewise from the BeyondTrust website. It joins the domain fine, gets proper information from lw-get-status, can use lw-find-user-by-name to retrieve/locate users, and can use lw-enum-users to get all users. Attempting to log in with an AD user via SSH generates the following errors in auth.log:

      Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so
      Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname
      Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth]

    Attempting to log in via LightDM itself generates similar errors in auth.log:

      Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so
      Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name"
      Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
      Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so

    Attempting to log in via a console on the system itself generates slightly different errors:

      Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so
      Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
      Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info
      Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info

    I am baffled. The errors are obviously correct: the file /lib/security/pam_winbind.so does not exist. If it's a dependency/requirement, surely it should be part of the package? I've installed and reinstalled; I've used the downloaded package from the BeyondTrust website and I've used the repository; nothing seems to work. Every method of installing this application generates the same errors for me.

    UPDATE: Hmm, I thought Likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (Likewise), and installing winbind fails if pbis-open is installed first. Uninstalled winbind, reinstalled pbis-open; same issue as above. The file pam_winbind.so does not exist in that location.

      Setting up pbis-open-legacy (7.0.1.918) ...
      Installing Packages was successful
      This computer is joined to DOMAIN.LOCAL
      New libraries and configurations have been installed for PAM and NSS.

    Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue from the previous attempt to configure domain integration manually with winbind. Does anyone have a working likewise-open installation, and does your /etc/nsswitch.conf include references to winbind? Or do /etc/pam.d/common-account or /etc/pam.d/common-password reference pam_winbind.so? I'm unsure whether those entries are legacy leftovers or set up by Likewise.

    UPDATE 2: A complete reinstall of the OS fixed it, and it then worked seamlessly, like it was meant to. Those two PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.
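    For anyone hitting the same wall before resorting to a reinstall, a minimal diagnostic sketch (stock Ubuntu paths) to spot the stale winbind references described above:

      grep -n winbind /etc/nsswitch.conf      # leftover 'winbind' entries on the passwd/group lines
      grep -rn pam_winbind /etc/pam.d/        # PAM stacks still trying to load the missing module
      grep -rn pam_lsass /etc/pam.d/          # what a clean pbis-open setup uses instead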

    Read the article

  • OpenMP in Fortran

    - by user345293
    I very rarely use Fortran, but I have been tasked with taking legacy code and rewriting it to run in parallel. I'm using gfortran as my compiler, and I found some excellent resources at https://computing.llnl.gov/tutorials/openMP/ as well as a few others. My problem is this: before I add any OpenMP directives, if I simply compile the legacy program:

      gfortran Example1.F90 -o Example1

    everything works. But turning on the OpenMP compiler option, even without adding any directives:

      gfortran -openmp Example1.F90 -o Example1

    ends in a segmentation fault when I run the program. Using smaller test programs that I wrote, I've successfully compiled other programs with -openmp that run on multiple threads, but I'm rather at a loss as to why enabling the option alone, with no directives, results in a segfault. I apologize if my question is rather simple. I could post the code, but it is rather long. It faults as I assign initial values:

      REAL, DIMENSION(da,da)       :: uconsold
      REAL, DIMENSION(da,da,dr,dk) :: uconsolde
      ...
      uconsold  = 0.0
      uconsolde = 0.0

    The first assignment, to uconsold, works fine; the second seems to be the source of the fault, as when I comment that line out, the next several lines execute merrily until uconsolde is used again. Thank you for any help in this matter.
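    Two notes, hedged as the likely explanation rather than a confirmed diagnosis: gfortran spells the switch -fopenmp, and -fopenmp implies -frecursive, which moves large local (non-SAVE) arrays from static storage onto the stack for thread safety. A 4-D array like uconsolde can then overflow the default stack limit on first touch, which matches the symptom exactly. A sketch of the usual remedy:

      # Assumes uconsolde is a local array that -fopenmp makes stack-allocated
      gfortran -fopenmp Example1.F90 -o Example1   # gfortran's OpenMP flag is -fopenmp
      ulimit -s unlimited                          # raise the main thread's stack limit (bash built-in)
      export OMP_STACKSIZE=512M                    # stack size for each OpenMP worker thread
      ./Example1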

    Read the article

  • Does the managed main UI thread stay on the same (unmanaged) Operating System thread?

    - by Daniel Rose
    I am creating a managed WPF UI front-end to a legacy Win32 application. The WPF front-end is the executable; as part of its startup routines I start the legacy app as a DLL on a second thread. Any UI operation (including CreateWindowEx, etc.) by the legacy app is invoked back on the main UI thread. As part of the shutdown process I want to clean up properly. Among other things, I want to call DestroyWindow on all unmanaged windows so they can properly clean themselves up. Thus, during shutdown I use EnumWindows to try to find all my unmanaged windows, then call DestroyWindow on the list I generate. These run on the main UI thread.

    After this background, on to my actual question: in the enumeration procedure of EnumWindows, I have to check whether one of the returned top-level windows is one of my unmanaged windows. I do this by calling GetWindowThreadProcessId to get the process id and thread id of the window's creator. I can compare the process id with Process.GetCurrentProcess().Id to check whether my app created it. For additional safety, I also want to see whether my main UI thread created the window. However, the returned thread id is the OS's thread id, which is different from the managed thread id. As explained in this question, the CLR reserves the right to re-schedule a managed thread onto different OS threads. Can I rely on the CLR being "smart enough" never to do this for the main UI thread (due to the thread affinity of the UI)? Then I could call GetCurrentThreadId to get the main UI thread's unmanaged thread id for comparison.
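    For reference, a sketch of the comparison being described, using the standard Win32 signatures via P/Invoke. Whether the CLR guarantee holds is exactly the open question; this only performs the check the poster proposes, and the class/method names are made up:

      using System;
      using System.Runtime.InteropServices;

      static class WindowOwnership
      {
          [DllImport("user32.dll")]
          static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);

          [DllImport("kernel32.dll")]
          static extern uint GetCurrentThreadId();

          // Call this on the main UI thread from the EnumWindows callback.
          public static bool CreatedByThisProcessAndThread(IntPtr hWnd)
          {
              uint processId;
              uint threadId = GetWindowThreadProcessId(hWnd, out processId);
              return processId == (uint)System.Diagnostics.Process.GetCurrentProcess().Id
                  && threadId == GetCurrentThreadId();
          }
      }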

    Read the article

  • jQuery Knob: animate and change color

    - by user1468116
    I'd like to create a knob that switches color at certain points: for example, up to 35 it is red, at 70 yellow, and at 100 green. I would also like to make it animate. This is my fiddle: http://jsfiddle.net/Tropicalista/jUELj/6/ My code is:

      $(document).ready(function() {
          $('.dial').val(13).trigger('change').delay(2000);
          $(".dial").knob({
              'min': 0,
              'max': 100,
              'readOnly': true,
              'width': 120,
              'height': 120,
              'fgColor': '#b9e672',
              'dynamicDraw': true,
              'thickness': 0.2,
              'tickColorizeValues': true,
              'skin': 'tron'
          })
      });
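    A sketch of one way to get both effects, not lifted from the plugin docs: pick fgColor per value range inside the draw hook (this.cv, the current value, and this.o, the options object, are the hook-context fields the Knob readme describes), and drive the animation with a dummy jQuery tween. The threshold colors are examples:

      $(document).ready(function () {
          $('.dial').knob({
              'min': 0, 'max': 100, 'readOnly': true,
              'width': 120, 'height': 120, 'thickness': 0.2, 'skin': 'tron',
              'draw': function () {
                  // this.cv = current value, this.o = options (per the readme's hook context)
                  if (this.cv >= 70)     this.o.fgColor = '#b9e672';  // green
                  else if (this.cv > 35) this.o.fgColor = '#ffcc00';  // yellow
                  else                   this.o.fgColor = '#ff0000';  // red
              }
          });
          // Animate the needle from 0 up to the real value (13 in the fiddle):
          $({ value: 0 }).animate({ value: 13 }, {
              duration: 2000,
              easing: 'swing',
              step: function () {
                  $('.dial').val(Math.round(this.value)).trigger('change');
              }
          });
      });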

    Read the article

  • Methodology for understanding jQuery plugins & APIs developed by third parties

    - by Taoist
    I have a question about third-party jQuery plug-ins and APIs and the methodology for understanding them. Recently I downloaded the jQuery Masonry/Infinite Scroll plug-in and I couldn't figure out how to configure it based on the instructions. So I downloaded a fully developed demo, then manually deleted everything that wouldn't break the functionality. The code that was left let me understand the plug-in in much greater detail than the documentation did. I'm now having a similar issue with a plug-in called jQuery Knob: http://anthonyterrien.com/knob/ If you look at the jQuery Knob readme file, it says this is working code:

      <input type="text" value="75" class="dial">

      $(function() {
          $('.dial')
              .trigger('configure', {
                  "min": 10,
                  "max": 40,
                  "fgColor": "#FF0000",
                  "skin": "tron",
                  "cursor": true
              });
      });

    But as far as I can tell it isn't at all. The readme also says the plug-in uses canvas. I am wondering whether I am supposed to wrap this code in a canvas context or whether that functionality is already part of the plug-in. I know this kind of "question" might not fit in here, but I'm a bit confused about the assumptions these kinds of documentation make, and thought I would post the query regardless. I'm curious whether this is due to my "newbie" programming experience or whether this is something seasoned coders also fight with. Thank you.

    Edit: In response to Tyanna's reply, I modified the code and it still doesn't work. I posted it below. I checked the Google console to ensure the basics were taken care of, such as not getting a read error on the library.

      <!DOCTYPE html>
      <meta charset="UTF-8">
      <title>knob</title>
      <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/hot-sneaks/jquery-ui.css" type="text/css" />
      <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" charset="utf-8"></script>
      <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.21/jquery-ui.min.js"></script>
      <script src="js/jquery.knob.js"></script>

      <div id="button1">test</div>

      <script>
      $(function() {
          $("#button1").click(function () {
              $('.dial').trigger('configure', {
                  "min": 10,
                  "max": 40,
                  "fgColor": "#FF0000",
                  "skin": "tron",
                  "cursor": true
              });
          });
      });
      </script>
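    Two things stand out in the snippet above, offered as a best guess from the readme rather than anything it states outright: the page never creates an element with class "dial" (the plugin replaces an input.dial with its own canvas, so there is no need to write a <canvas> tag yourself), and 'configure' is an event the plugin listens for on a dial that has already been initialised with .knob(). A minimal working shape would be:

      <input type="text" value="75" class="dial">

      <script>
      $(function () {
          $('.dial').knob();                      // initialise first: this builds the canvas
          $('#button1').click(function () {
              $('.dial').trigger('configure', {   // then re-configure the live dial
                  "min": 10,
                  "max": 40,
                  "fgColor": "#FF0000",
                  "skin": "tron",
                  "cursor": true
              });
          });
      });
      </script>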

    Read the article

  • (Apache) RedirectMatch regex to match all directories except those in my list

    - by dotben
    I need to 301-redirect all requests coming in to http//server.com over to http//newserver.com, unless the request is for an arbitrary list of directories we are maintaining on the legacy server (e.g. server.com/foo or server.com/bar). I'm having a hard time working out how best to set this up and the regexes. E.g., I need:

      http//server.com/page1 to redirect to http//newserver.com/page1
      http//server.com/dir1/page2 to redirect to http//newserver.com/dir1/page2
      http//server.com/foo to load as normal
      http//server.com/bar/baz.html to load as normal

    ... because 'foo' and 'bar' are in my list of legacy dirs. I'm wondering whether the way to do this is to somehow catch the matches in my list and then redirect anything else as a wildcard over to the new server, but I can't make it work. Can anyone help me with some regex and rewrites for this please? Thanks. (Apologies for fudging the http:// in the URLs; ServerFault thinks I'm posting hyperlinks and won't otherwise let me post this.)
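    An untested sketch of both approaches (Apache 2.x, where these patterns are PCRE, so a negative lookahead works; 'foo' and 'bar' stand in for the real legacy list):

      # mod_alias, single line:
      RedirectMatch 301 ^/(?!(?:foo|bar)(?:/|$))(.*)$ http://newserver.com/$1

      # or the mod_rewrite equivalent, often easier to extend:
      RewriteEngine On
      RewriteCond %{REQUEST_URI} !^/(foo|bar)(/|$)
      RewriteRule ^/?(.*)$ http://newserver.com/$1 [R=301,L]

    The (?:/|$) part keeps /foobar from being treated as an exception while still exempting /foo itself and everything under /foo/.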

    Read the article

  • Win 7 Privilege Level (Run as administrator) via GP or command line

    - by FinalizedFrustration
    Is there a way to set the privilege level for legacy software via Group Policy or on the command line? I have some legacy software which we unfortunately cannot move away from, and which requires administrator access. I know I can go into the Properties dialog and check "Run this program as an administrator" on every single instance on every single one of my workstations, but that gets old after the 30th install. If I had my way, we would dump this software, find something that did what we needed and was fully compliant with Win7 security best practices, and give everyone limited user accounts... However, I am not the boss, so everyone gets administrator accounts. Given that, I suppose I could just tell everyone to open the context menu and choose "Run as administrator", but we have some very, very, VERY low-tech users, and half of them might just choose "Delete" instead. Does anyone know of a way to set this option on the command line? Or, better yet, through Group Policy?
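    One scriptable route, sketched on the assumption that flipping the same switch as the Properties dialog is acceptable: that checkbox is backed by the Application Compatibility "Layers" registry key, so the value can be set from a script or pushed as a Group Policy Preferences registry item. The EXE path below is an example:

      :: Per-machine equivalent of ticking "Run this program as an administrator"
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" ^
          /v "C:\LegacyApp\legacy.exe" /t REG_SZ /d "RUNASADMIN" /f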

    Read the article

  • Incredibly high latency for Ubuntu guest on Hyper-V

    - by Mark Henderson
    I've got several Ubuntu 10.04 virtual machines running as Hyper-V guests on Windows Server 2008 R2 SP1 and they're all perfectly fine. Today I installed my first Ubuntu 11.10 virtual machine and I'm seeing ridiculous pings. These servers are all connected via gigabit to a local LAN with almost no network traffic at all[1], using a legacy network adapter in Hyper-V. I'm a bit of an Ubuntu n00b, so I don't really know where to go from here. Any ideas? free -m reports:

                   total       used       free     shared    buffers     cached
      Mem:           485        470         15          0         63        299
      -/+ buffers/cache:        107        378
      Swap:          507         20        487

    This is within a few MB of our other Ubuntu servers that are on 10.04. I removed the legacy NIC and installed a synthetic one in Hyper-V, and this did improve the numbers: they're around 10-30 ms now, but I would still expect <1 ms response times.

    [1] As a comparison, I have another Ubuntu 10.04 guest on Hyper-V almost 1,000 km away that has a ping of 33 ms.

    Read the article

  • Does Heartbeat v3 support the same resource agent types as Pacemaker?

    - by Emre He
    As we know, Pacemaker supports three types of resource agents:
    - LSB resource agents
    - OCF resource agents
    - legacy Heartbeat resource agents
    (see http://www.linux-ha.org/wiki/Resource_Agents). Does Heartbeat v3 support all three of the above types, or does it only support LSB and legacy Heartbeat resource agents? I ask because we have only a virtual IP and one service that needs to fail over in the HA cluster, so we decided not to involve Pacemaker, and that is how we came to this question. For example, we cannot monitor the application service with Heartbeat; Heartbeat can only handle starting it on the active node. Thanks, Emre

    Read the article

  • How do I use long names to refer to Group Managed Service Accounts (gMSA)?

    - by Jason Stangroome
    Commonly, domain user accounts are used as service accounts. With domain user accounts, the username can easily be as long as 64 characters as long as the User Principal Name (UPN) is used to refer to the account, e.g. [email protected]. If you still use the legacy pre-Windows 2000 (SAM) names, you have to truncate the name to ~20 characters, e.g. mydomain\truncname. When using the New-ADServiceAccount PowerShell cmdlet to create a new Group Managed Service Account (gMSA), specifying a name longer than 15 characters returns an error. To use a longer name, the SAM name must be given separately, e.g.:

      New-ADServiceAccount -Name longname -SamAccountName truncname ...

    To configure a service to run as the new gMSA, I can use the legacy username format mydomain\truncname$, but using usernames with a maximum of 15 characters in 2013 is a smell. How do I refer to a gMSA using the UPN-style format instead? I tried the longname$@domainfqdn approach, but that didn't work. It also seems that the gMSA object in AD doesn't have a userPrincipalName attribute value specified.
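    For reference, a sketch of the long-name/short-SAM split described above. The account names and domain are made up, and -DNSHostName is required for gMSAs with the Windows Server 2012-era AD module:

      Import-Module ActiveDirectory
      New-ADServiceAccount -Name "longservicename" `
                           -SamAccountName "truncname" `
                           -DNSHostName "longservicename.mydomain.example"
      # On the host that will run the service:
      Install-ADServiceAccount -Identity "truncname"
      Test-ADServiceAccount -Identity "truncname"    # returns True once the account is usable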

    Read the article

  • Oddities in interference of Linux extended ACLs and 'regular' permissions

    - by abbot
    I've got some legacy code which checks that a file is read-only and readable only by its owner, i.e. permissions set to 0400. I also need to give read-only access to this file to some other user on the system. I'm trying to set extended ACLs, but this changes the 'regular' permission bits in a strange way too:

      $ ls -l hostkey.pem
      -r-------- 1 root root 0 Jun 7 23:34 hostkey.pem
      $ setfacl -m user:apache:r hostkey.pem
      $ getfacl hostkey.pem
      # file: hostkey.pem
      # owner: root
      # group: root
      user::r--
      user:apache:r--
      group::---
      mask::r--
      other::---
      $ ls -l hostkey.pem
      -r--r-----+ 1 root root 0 Jun 7 23:34 hostkey.pem

    And after this the legacy code starts complaining that the file is group-readable (while it actually is not!). Is it possible to set the extended ACLs in such a way that some other user will also have read-only access, while the file appears to have only the 0400 'regular' permissions?
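    What the transcript shows is standard POSIX ACL behaviour, as far as I know: once a file has an extended ACL, the group triad reported by ls and stat holds the ACL mask (the upper bound on all named users and groups), not the owning group's actual permission. A quick check:

      $ getfacl hostkey.pem | grep '^group::'
      group::---                       # the owning group itself still has no access
      $ stat -c '%a' hostkey.pem
      440                              # the '4' in the group slot is the mask, not group access

    If that reading is right, the answer is "no": the mask has to cover user:apache:r, so any mode-based check will see at least 0440, and the legacy check itself would need to become ACL-aware (e.g. test the real group:: entry instead of the mode bits).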

    Read the article

  • Restricting access to a subdirectory on Linux

    - by David
    I'm looking for a way to make a directory accessible only to its parent directories. That is, suppose you have two directories, A and B, at the same level in the file hierarchy. Now suppose that you have a directory A' which is a subdirectory of A. I'd like to enforce that A is able to access the contents of A' but B is not. My problem is that I'd like to use a library (directory A) which builds on top of a legacy version of another library (directory A'). At the same time, I want to be able to use the newest version of this legacy library (directory B). I want to make sure that people aren't somehow using library A while linking against new library B, by enforcing that library A must use library A'. I could just link A against library B, but then I risk compatibility problems.

    Read the article

  • Windows 8 to 8.1 Pro Upgrade SecureBoot Error

    - by Alexandru
    I upgraded from Windows 8 to Windows 8.1. I have an Alienware Aurora R4 with the latest BIOS firmware version, A09. Ever since I did the upgrade, I get a watermark on my desktop saying "SecureBoot isn't configured correctly". I would like to get rid of this watermark the correct way (not by hacking system DLLs). My BIOS shows me booting in UEFI mode, and I see that SecureBoot is actually disabled there. I cannot enable SecureBoot in either UEFI mode or Legacy Boot mode. Note that I can't even get Legacy Boot mode working without reformatting my system, which I really don't plan on doing. So my question is this: what has changed in the way Windows handles SecureBoot? As far as I can tell, I do not have SecureBoot enabled, yet Windows is telling me it isn't configured correctly. Why does it even care to check, if my BIOS doesn't have it on anyway? It's so frustrating!

    Read the article

  • Canon HV20 recognized, but no drivers in Windows 7 x64

    - by Tuminoid
    My Canon HV20 camcorder is properly recognized when connected via FireWire to Windows 7 x64, but no drivers are installed for it. Neither Windows nor I can locate any drivers for it, though it should work off the shelf. I googled a lot and found instructions to set the IEEE 1394 host controller to legacy mode via Device Manager, but Windows doesn't offer me the legacy option at all. If I check the properties of the Canon HV20 device in the Other devices section, it says:

      The drivers for this device are not installed. (Code 28)
      There is no driver selected for the device information set or element.

    It used to work just fine on my previous installation of Vista x64 on the same hardware :/

    Read the article

  • How to tell if my sound card is listed in Device Manager?

    - by Bruhan
    The sound on my computer suddenly stopped working. When I check Sounds and Audio Devices in the Control Panel, I get "No Audio Device" with everything grayed out. When I check the Device Manager under "Sound, video and game controllers" I see the following list:
    - Audio Codecs
    - Legacy Audio Drivers
    - Legacy Video Capture Devices
    - Media Control Devices
    - MPU-401 Compatible MIDI Device
    - Standard Game Port
    - Video Codecs
    None of these looks like my sound card. Of course, my sound "card" is not really a sound card; it's integrated into the nVidia nForce motherboard. I'm running Windows XP. So is one of the above my sound device, or is the OS not detecting it? If the latter, how do I get it to detect it?

    Read the article

  • SATA Devices not showing up when in UEFI mode

    - by Dan Barzilay
    I'm trying to install Windows, and the BIOS should be set to UEFI mode. The problem is that none of the SATA devices show up (it's as if there aren't any), so I can't boot from the installation CD (it's just not there). The weird thing is that when the BIOS is set to LEGACY mode they all show up. SATA mode is set to AHCI and I'm on a Lenovo Y510P. I have a Linux OS installed that is accessible only when the BIOS is in LEGACY mode (otherwise the hard drive it's on is not available). I also tried resetting the BIOS settings, which didn't help. Please comment if more details are needed.

    Extra details:
    - Computer model: Lenovo IdeaPad Y510P (not overclocked)
    - Installed Linux OS version: Linux 3.7-trunk-amd64 x86_64
    - Trying to install: Windows 7 Ultimate 64-bit
    - BIOS information: Vendor: LENOVO, Version: 74CN26WW(V1.07)

    Update: Using user1608638's answer and the suggestion of using a USB flash drive as the boot device instead of the CD/DVD method, I succeeded in installing Windows 7! (Thanks a lot, user1608638.)

    Read the article

  • Coding solution to WAR installation error (WebSphere Portal 6.0)?

    - by Scott Leis
    I have a WebSphere Portal application containing several portlets for which I'm currently working on some changes. A week ago, the WAR file produced by Rational Application Developer could be installed on the Portal server with no problems. Yesterday I made some seemingly minor changes to two JSP files and their associated "pagecode" Java files, and attempting to update the WAR on the server (using the Portal administration web interface) now produces an error message. The WAR upload works, and the system shows me the correct list of portlets in the WAR file, but clicking "Finish" gives me a page with the error message "EJPAQ1319E: Cannot install the selected WAR file. View Details". Clicking the "View Details" link gives me a page with the following text:

      EJPAQ1319E: Cannot install the selected WAR file.
      com.ibm.portal.WpsException: EJPAQ1319E: Cannot install the selected WAR file.
        at com.ibm.wps.portlets.portletmanager.actions.DoInstallWebModuleAction.installPortletFromFormFile(DoInstallWebModuleAction.java:633)
        at com.ibm.wps.portlets.portletmanager.actions.DoInstallWebModuleAction.doExecute(DoInstallWebModuleAction.java:159)
        at com.ibm.wps.portlets.adminstruts.actions.BaseAction.execute(BaseAction.java:64)
        at com.ibm.wps.portlets.struts.WpsRequestProcessor.processActionPerform(WpsRequestProcessor.java:338)
        at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
        at com.ibm.wps.portlets.struts.WpsStrutsPortlet.processActionPerformed(WpsStrutsPortlet.java:1947)
        at com.ibm.wps.portlets.struts.WpsStrutsPortlet.actionPerformed(WpsStrutsPortlet.java:1637)
        at com.ibm.wps.portlets.adminstruts.WpsAdminStrutsPortlet.actionPerformed(WpsAdminStrutsPortlet.java:261)
        at com.ibm.wps.pe.pc.legacy.SPIPortletInterceptorImpl.handleEvents(SPIPortletInterceptorImpl.java:323)
      EJPPE0020E: It is not allowed to install a JSR 168 compliant over a 4.x portlet application.
      com.ibm.wps.command.applications.AppWarFileException: EJPPE0020E: It is not allowed to install a JSR 168 compliant over a 4.x portlet application. WrappedException is: com.ibm.wps.pe.mgr.exceptions.InvalidWarFileException: EJPPE0020E: It is not allowed to install a JSR 168 compliant over a 4.x portlet application.
        at com.ibm.wps.command.applications.AbstractApplicationsCommand.throwAppMgrException(AbstractApplicationsCommand.java:492)
        at com.ibm.wps.command.applications.UpdatePortletApplicationCommand.execute(UpdatePortletApplicationCommand.java:165)
        at com.ibm.wps.portlets.portletmanager.actions.DoInstallWebModuleAction.installPortletFromFormFile(DoInstallWebModuleAction.java:510)
        at com.ibm.wps.portlets.portletmanager.actions.DoInstallWebModuleAction.doExecute(DoInstallWebModuleAction.java:159)
        at com.ibm.wps.portlets.adminstruts.actions.BaseAction.execute(BaseAction.java:64)
        at com.ibm.wps.portlets.struts.WpsRequestProcessor.processActionPerform(WpsRequestProcessor.java:338)
        at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
        at com.ibm.wps.portlets.struts.WpsStrutsPortlet.processActionPerformed(WpsStrutsPortlet.java:1947)
      EJPPE0020E: It is not allowed to install a JSR 168 compliant over a 4.x portlet application.
      com.ibm.wps.pe.mgr.exceptions.InvalidWarFileException: EJPPE0020E: It is not allowed to install a JSR 168 compliant over a 4.x portlet application.
        at com.ibm.wps.pe.mgr.AbstractApplicationManagerImpl.updateWebModule(AbstractApplicationManagerImpl.java:1338)
        at com.ibm.wps.pe.mgr.AbstractApplicationManagerImpl.updateWebModule(AbstractApplicationManagerImpl.java:1255)
        at com.ibm.wps.command.applications.UpdatePortletApplicationCommand.execute(UpdatePortletApplicationCommand.java:135)
        at com.ibm.wps.portlets.portletmanager.actions.DoInstallWebModuleAction.installPortletFromFormFile(DoInstallWebModuleAction.java:510)
        at com.ibm.wps.portlets.portletmanager.actions.DoInstallWebModuleAction.doExecute(DoInstallWebModuleAction.java:159)
        at com.ibm.wps.portlets.adminstruts.actions.BaseAction.execute(BaseAction.java:64)
        at com.ibm.wps.portlets.struts.WpsRequestProcessor.processActionPerform(WpsRequestProcessor.java:338)
        at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
        at com.ibm.wps.portlets.struts.WpsStrutsPortlet.processActionPerformed(WpsStrutsPortlet.java:1947)

    All I've been able to find about this error via Google is the following in the WebSphere Portal documentation:

      EJPPE0020E: It is not allowed to install a {0} over a {1} portlet application.
      Explanation: A portlet application containing legacy portlets can only be updated with another portlet application that contains legacy portlets. The same is true for standard portlet applications.
      User Response: Modify the portlet.xml of the application such that it matches the original API type, standard or legacy, and try again.

    However, the portlet.xml file has not changed in about a month, and I've done several WAR updates for this application in that time with no problems. The problem seems to be caused by the code changes I did yesterday, but I have no clue why a few lines of code would do this. Any ideas?

    Read the article

  • Why is C++ backward compatibility important / necessary?

    - by Giorgio
    As far as I understand, it is a well-established opinion within the C++ community that C is an obsolete language that was useful 20 years ago but cannot support many modern good programming practices, and even encourages bad practices; certain features that were typical of C++ ("C with classes") during the nineties are also obsolete and considered bad practice in modern C++ (e.g., new and delete should be replaced by smart-pointer primitives). In view of this, I often wonder why backward compatibility with C and obsolete C++ features is still considered important: to my knowledge there is no 100% compatibility, but most of C and C++ is contained in C++11 as a subset. Of course, there is a lot of legacy code and libraries (possibly containing templates) that are written using a previous standard of the language and which still need to be maintained or used in connection with new code. Nevertheless, maybe it would still be possible to drop obsolete C and C++ features (e.g. the mentioned new / delete) from a future C++ standard so that it is impossible to use them in new code. In this way, old and dangerous programming practices would quickly be banned from new code, and modern, better programming practices would be enforced by the compiler. Legacy code could still be maintained using separate compilation (having C alongside C++ source files is already a common practice). Developers would have to choose between a compiler supporting the old-style C++ that was common during the nineties and a compiler supporting the modern C++? style (the question mark indicates a future, hypothetical revision). Only mixing the two styles would be forbidden. Would this be a viable strategy for encouraging the adoption of modern C++ practices? Are there conceptual reasons or technical problems (e.g. compiling existing templates) that make such a change undesirable or even impossible? Has such a development been proposed in the C++ community? If there has been some extended discussion on the topic, is there any material online?

    Read the article

  • The 'desktops' move to Oracle

    - by [email protected]
    The move to Oracle has been most interesting. Here we have an organization that is interested in what it is interested in, and not so much in things that aren't 'core'. The legacy Sun desktop products are things that Oracle is interested in. To that end there are some changes coming to policies and products, and from my perspective they are all good. Very good. One of the changes to the product suite is that we are now referred to as part of the Virtualization team, falling under Oracle's Chief Corporate Architect, Edward Screven. Edward says that the products were a 'gem' found inside the great pile of stuff that was Sun. Another change: StarOffice/OpenOffice has certainly been endorsed by Oracle (it also falls under Edward's purview, and there has been a push on to use it as opposed to... well... you know). It is not, however, part of the Virtualization team's product suite any more. There are some other really interesting changes coming that you will hear about quite soon. The big message for today, though, is that Sun Rays, Secure Global Desktop, VirtualBox, and Oracle VDI software are all still alive and kicking and moving forward. In fact, at the Oracle earnings call last week, Charles Phillips announced more significant wins with Sun Rays in the US Federal Government space. He could have talked about all kinds of legacy Sun products, but chose to mention Sun Rays in the first quarterly statement since the acquisition of Sun; you should see this as a very good sign indeed. More soon. Until then...

    Read the article

  • Tuxedo 11gR1 Released

    - by todd.little
    I've been a little quiet the last several months as the Tuxedo team has been very busy. Today Oracle announced the 11gR1 release of the Tuxedo product family. This release includes updates to Tuxedo, TSAM, and SALT, as well as 3 new products that Oracle is announcing today. These 3 new products are the Oracle Tuxedo Application Runtime for CICS and Batch, Oracle Application Rehosting Workbench, and the Tuxedo JCA Adapter. By providing a CICS-equivalent runtime and a rehosting workbench to automate the rehosting of COBOL CICS code, JCL procedures, data definitions, and data, Oracle has significantly lowered the effort and risk of rehosting mainframe CICS and Batch applications onto the Tuxedo runtime on open systems. By moving off proprietary legacy mainframes, customers have experienced better performance and achieved a 50-80% lowering of their total cost of ownership. The rehosting tools allow the COBOL business logic to remain unchanged and automate the replacement of CICS statements with calls to Tuxedo. The rehosted code can then run on open systems 'as-is'. Users can still use the same TN3270 interfaces they are used to, eliminating the need for retraining. Batch procedures can be run and managed under a JES2-like environment. For the first time, customers have the tools and an enterprise-class runtime environment to move their key legacy assets off the mainframe and onto distributed open systems, whether the application uses 250 MIPS, 25,000 MIPS, or more. More on these exciting new options in additional blog entries.

    Read the article

  • Hudson: another Continuous Integration tool

    - by Narendra Tiwari
    In my previous posts I discussed CruiseControl.NET and its legacy support for .NET development. Hudson is yet another continuous integration tool. Like CCNet, Hudson is free, and it is built in Java.
    - CCNet has its legacy support for .NET applications, whereas Hudson can easily be configured for both environments (.NET and Java).
    - One of the major differences between CCNet and Hudson is Hudson's richer GUI, which provides interactive screens for project configuration, whereas in CCNet we have to play with a few XML configuration files.
    Both tools are capable of providing the basic features of continuous integration, e.g.:
    - source control configuration
    - code compilation/build
    - ad hoc plugin tools configured alongside compilation
    Support for ad hoc plugin tools seems to be broader in CCNet; e.g., there is a CCNet plugin for almost every source control system, whereas Hudson supports a limited set of source control servers. Basically, there is an interesting point here: there are two major parts to a whole CI system, one performed by the build tool and the rest by the CI tool. The build tool takes care of all the ad hoc plugin tools, so it does not matter if the CI tool lacks a plugin for a given tool; if that tool provides command-line support, it can be configured in the build tool, and the build tool is in turn configured with the CI tool. For example, if I have a build script configured in MSBuild, CCNet can easily be switched to Hudson. We need not change anything in the build script; we just configure MSBuild on Hudson, pass in the path of the script file, and that's it... everything else stays the same.

    Hudson resources:
    - https://hudson.dev.java.net/
    - http://wiki.hudson-ci.org/display/HUDSON/Meet+Hudson
    - http://wiki.hudson-ci.org/display/HUDSON/Plugins
    - http://callport.blogspot.com/2009/02/hudson-for-net-projects.html
    Java support on CCNet: http://confluence.public.thoughtworks.org/display/CC/Getting+Started+With+CruiseControl?focusedCommentId=19988484#comment-19988484
    Please share your thoughts...

    Read the article

  • links for 2011-03-02

    - by Bob Rhubart
    - Oracle Technology Network Architect Day: Denver. Registration is now open. Sessions will cover IT optimization and consolidation, cloud computing, the evolving role of enterprise IT, and more. (tags: oracle otn entarch event denver)
    - SOA Suite Integration, Part 2: A basic BPEL process (The Shorten Spot). The latest post in Anthony Shorten's series about SOA Suite integration with Oracle Utilities Application Framework. (tags: oracle otn soa bpel soasuite)
    - ADF: How to create web service based ADF pages. The first in a promised series of three posts on the topic by Marianne Horsch. (tags: oracle soa webservices adf)
    - David Butler: MDM Poised for Growth (Oracle Master Data Management). David says: "Businesses are talking about the need to fix master data before they can successfully move forward on SOA initiatives. And the growing demands for compliance continue to be a major driver." (tags: oracle otn mdm)
    - Cloud governance is about more than security (The Pervasive Data Center, CNET News). Legal and regulatory procedures, transparency, service levels, indemnification, and more are all part of a broader governance landscape that requires IT to work closely with business users. Read this blog post by Gordon Haff on The Pervasive Data Center. (tags: ping.fm)
    - Senthilkumar Rajendran's Blog: Horizontal Scaling OBIEE 11g (tags: ping.fm)
    - InfoQ: Searching Without Objectives. Kenneth O. Stanley considers that innovation is stifled when we strictly follow a high goal, and that we progress more when we are inclined to discovery rather than following an objective. (tags: ping.fm)
    - InfoQ: Brownfield Software - Industrial Waste or Business Fertilizer? Josh Graham addresses 10 myths related to working on legacy software, attempting to prove that one can make good use of legacy code without having to rewrite the entire thing. (tags: ping.fm)

    Read the article

  • Attempting to dual boot Ubuntu and Windows 7 on Sony Vaio with Insyde H2O BIOS

    - by zach
    My situation is the same one addressed in Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI and in http://www.hackourlife.com/sony-vaio-with-insyde-h2o-efi-bios-ubuntu-12-04-dual-boot I tried to install Ubuntu 12.04 from the live CD alongside my current Windows 7. I have to switch my BIOS to legacy mode in order to boot from CD. If I do a normal installation and remain in legacy mode, the BIOS displays "operating system not found"; if I switch back, the BIOS just boots to Windows. To solve the problem, I tried following the steps in the two articles above. My drive is partitioned as:

      sda1  FAT32  location of the Windows EFI files (flagged as boot in the Ubuntu installer)
      sda2  unknown
      sda3  NTFS   Windows C:
      sda4  ext4   Ubuntu root
      sda5  swap
      sda6  ext4   Ubuntu home

    I was a little confused by the requirement in the second article to "be careful to install the GRUB bootloader in /dev/sda3"; in my case, the relevant partition is sda1. I have tried three things: setting the sda1 mount point as /boot, as /boot/efi, and as the special reserved GRUB partition. In each install I indicated that GRUB should be installed to sda1. After each install I reboot to the live CD and look in sda1. I see EFI/Boot and EFI/Windows, but no EFI/Ubuntu, and consequently no grubx64.efi. I understand the recommended procedure of moving grubx64.efi into the EFI/Boot directory and replacing the existing bootx64.efi file, but I see no grubx64.efi anywhere and I don't know where it should be.
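    In case it helps the next reader, this is the live-CD chroot sequence I would try to produce the missing grubx64.efi, sketched for the partition map above (sda4 = root, sda1 = EFI system partition) and assuming a working network in the live session; the final copy is the Insyde H2O workaround from the linked article:

      sudo mount /dev/sda4 /mnt
      sudo mount /dev/sda1 /mnt/boot/efi
      for d in /dev /dev/pts /proc /sys; do sudo mount -B $d /mnt$d; done
      sudo chroot /mnt apt-get install --reinstall grub-efi-amd64   # make sure the EFI build of GRUB is present
      sudo chroot /mnt grub-install /dev/sda                        # should create EFI/ubuntu/grubx64.efi on sda1
      sudo chroot /mnt update-grub
      # Insyde H2O workaround: make GRUB the fallback loader the firmware actually boots
      sudo cp /mnt/boot/efi/EFI/ubuntu/grubx64.efi /mnt/boot/efi/EFI/Boot/bootx64.efi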

    Read the article
