Search Results

Search found 9235 results on 370 pages for 'disk cloning'.


  • SQLAuthority News – Mark the Date: October 16, 2013 – Introducing NuoDB Blackbirds: THE Distributed Database

    - by Pinal Dave
    I am very excited to announce, first on this blog, the release of NuoDB Blackbirds (NuoDB Release 2.0). NuoDB is my favorite application to work with data nowadays. They are steadily gaining market share and bringing out new features with every release. I was very excited when I learned that NuoDB is shipping its flagship 2.0 release on October 16, 2013. Interestingly enough, I will be in the USA when this release happens, so I will be watching it live during my daytime. Even if I had to stay up the entire night just to watch this release, I would do it. Here are the details of the announcement:

        Introducing NuoDB Blackbirds: THE Distributed Database
        Date: October 16, 2013
        Time: 1:00 PM EDT
        Location: Online (Registration Link)

    What is the best DBMS architecture to handle today’s and tomorrow’s evolving needs? The days of shared disk are over. The times are “a-changin’” and IT infrastructure has to change with them. Join NuoDB live for the introduction of our latest major product release, NuoDB Blackbirds, and take a look at why the NuoDB distributed database architecture is the only answer for customers like Fathom Voice, a leading provider of Voice over IP (VoIP). NuoDB CEO Barry Morris welcomes Cameron Weeks, CEO of Fathom Voice, to discuss how his company is using the DBMS to break away from the pack and become the hottest player in VoIP. The webcast will include demonstrations of a single, logical database running in multiple geographies and a live Q&A. If for any reason you cannot watch it live, do not worry at all; just register at this Registration Link, and after the event you will get a link to watch it on demand. You can watch the launch event at any time if you have registered.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: NuoDB

    Read the article

  • dpkg behaving strangely?

    - by Tom Henderson
    When I use apt to get a package, I keep receiving the same error message. Here is an example, trying to install wicd (which is already installed):

        Reading package lists... Building dependency tree... Reading state information...
        wicd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        3 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up tex-common (2.06) ...
        debconf: unable to initialize frontend: Dialog
        debconf: (Dialog frontend requires a screen at least 13 lines tall and 31 columns wide.)
        debconf: falling back to frontend: Readline
        Running mktexlsr. This may take some time... done.
        No packages found matching texlive-base.
        dpkg: error processing tex-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of texlive-binaries:
         texlive-binaries depends on tex-common (>= 2.00); however:
          Package tex-common is not configured yet.
        dpkg: error processing texlive-binaries (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of dvipng:
         dvipng depends on texlive-base-bin; however:
          Package texlive-base-bin is not installed.
          Package texlive-binaries which provides texlive-base-bin is not configured yet.
        dpkg: error processing dvipng (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         tex-common
         texlive-binaries
         dvipng
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure if this is a problem with apt or with dpkg, but it certainly doesn't look good!
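    One common way out of this half-configured TeX state - a sketch, assuming the postinst fails only because texlive-base is absent - is to supply the missing package and then let dpkg finish configuring:

        sudo apt-get update
        sudo apt-get install texlive-base   # provide the package the tex-common postinst expects
        sudo dpkg --configure -a            # re-run configuration of half-installed packages
        sudo apt-get install -f             # let apt resolve anything still broken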

    Read the article

  • Keyboard layout - certain keys work with AltGr, others don't

    - by user23122
    I run Ubuntu 12.04 in VirtualBox 4.3.14 on Windows 7 with a Swedish keyboard layout. In Windows everything works fine, but in Ubuntu some keys/characters (the most important ones for a programmer) don't work. This is the result of pressing the keys in the top row:

        1234567890+´   (unmodified top row on keyboard)
        @£$€ {[]}\     (Windows with AltGr)
        ¡ £$€¥     ±   (Ubuntu with "AltGr")

    More characters are broken (pipe | is a notable example), but the top row is the biggest problem. I can work around this by enabling "direct connection" from my USB keyboard to VirtualBox, but then I have to manually disable that every time I switch out of VirtualBox. I have tried different keyboard layouts; sometimes @ et al. work, but then other characters are broken. I also tried sudo dpkg-reconfigure keyboard-configuration with default values, but it didn't change anything. I have Guest Additions installed (from the built-in virtual CD). I got my VB disk image from a colleague who does not have this problem; however, he does not have Guest Additions installed (and hence can't use a higher resolution than 1024x768, and I need to run Eclipse...). He also has a different installation of VirtualBox and Windows. For example, the key for 2 should, in Ubuntu, produce four different characters: 2"²@. The first three work fine, including the superscript 2 that requires AltGr-Shift-2; it is just the plain AltGr-2 to get @ that does not work on this key (and all the other keys I have problems with). Any ideas for a fix?
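    As a first diagnostic - a sketch, with the layout name assumed from the Swedish keyboard mentioned above - reapply the layout inside the guest and watch which keysym AltGr-2 actually delivers:

        setxkbmap -layout se                        # reapply the Swedish layout in the guest
        xev | grep -A2 --line-buffered KeyPress     # press AltGr-2 and inspect the reported keysym

    If xev never sees the AltGr chord at all, the keystroke is being consumed before it reaches Ubuntu, which would point back at VirtualBox's keyboard handling rather than the guest layout.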

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure High Availability. Data Guard configurations, RAC configurations, or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is - in the minds of most people - linked to Oracle RAC and therefore rarely used to implement failover solutions. 11gR2 Clusterware - which is part of Oracle Grid Infrastructure - provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and therefore protect, almost every kind of application (from xclock to the more complex Application Server).

    In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional; unfortunately, there can always be a typo or a mistake. This blog entry is not a course on the Clusterware framework. I just hope it will let you see how powerful it is and give you the will to go further with it.

    Prerequisites
    - 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with the Enterprise Kernel.
    - Shared storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    - Oracle 11gR2 Database (11.2.0.1)
    - Oracle 11gR2 Grid Infrastructure (11.2.0.1)

    Step 1 - Install the software
    - Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA).
    - Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN.
    - Install the Oracle Database standalone binaries on both nodes.
    - Use asmca to check/mount the disk groups on the 2 nodes.
    - Use dbca to create and configure a database on the primary node. Let's name it DB11G. Copy the pfile and password file to the second node, and create the adump directory on the second node.

    Step 2 - Set up the resource to be protected
    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command crsctl status resource, which shows an ora.db11g.db entry. Let's save the definition of this resource for future use:

        mkdir -p /crs/11.2.0/HA_scripts
        chown oracle:oinstall /crs/11.2.0/HA_scripts
        crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt

    Although very interesting, Oracle Restart is not cluster-aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions; we don't need it!

        srvctl stop database -d DB11G
        srvctl remove database -d DB11G

    Instead, we need to create a new resource of a more general type: cluster_resource. Here are the steps to achieve this.

    Create an action script, /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh:

        #!/bin/bash
        export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        export ORACLE_SID=DB11G
        case $1 in
        'start')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          startup
        EOF
          RET=0
          ;;
        'stop')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown immediate
        EOF
          RET=0
          ;;
        'check')
          ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
          if [ $ok = 0 ]; then
            RET=1
          else
            RET=0
          fi
          ;;
        *)
          RET=0
          ;;
        esac
        if [ $RET -eq 0 ]; then
          exit 0
        else
          exit 1
        fi

    This script must provide, at least, methods to start, stop and check the database. It is self-explanatory and contains nothing special. Just be aware that it is run as the oracle user (because of the ACL property - see later) and needs to know about the environment. It also needs to be present on every node of the cluster (a dry-run tip for this script appears at the end of this post):

        chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts

    Create a new resource file, based on the information we got from the previous myResource.txt; name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource:

        NAME=ora.db11g.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=600
        CLUSTER_DATABASE=false
        DB_UNIQUE_NAME=DB11G
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.DB11G
        SPFILE=+DTA/DB11G/spfileDB11G.ora
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=DB11G
        USR_ORA_DOMAIN=haroland
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=DB11G
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.1.0

    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt. Notice the NAME property, which should not have the ora. prefix; the TYPE property, which is not ora.database.type but cluster_resource; the definition of ACTION_SCRIPT; and HOSTING_MEMBERS, which enumerates the members of the cluster (as returned by the olsnodes command):

        NAME=DB11G.db
        TYPE=cluster_resource
        DESCRIPTION=Oracle Database resource
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        PLACEMENT=restricted
        ACTIVE_PLACEMENT=0
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=10
        DEGREE=1
        ENABLED=1
        HOSTING_MEMBERS=oelcluster01 oelcluster02
        LOGGING_LEVEL=1
        RESTART_ATTEMPTS=1
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h

    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).

        crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 - Start the resource

        crsctl start resource DB11G.db

    This command launches the ACTION_SCRIPT with a start and then a check parameter on the primary node of the cluster.

    Step 4 - Test this
    We will test the setup using 2 methods.

        crsctl relocate resource DB11G.db

    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but this time we can use a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.

    Conclusion
    Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy:
    - Create an executable action script on every node of the cluster.
    - Create a resource file.
    - Create/register the resource with the OCR using the resource file.
    - Start the resource.
    This solution is a very interesting alternative to licensable third-party solutions.

    References
    - Clusterware 11gR2 documentation
    - Oracle Clusterware Resource Reference

    Gilles Haro
    Technical Expert - Core Technology, Oracle Consulting
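    A final tip: before starting the resource in Step 3, it can save time to dry-run the action script by hand on each node (a quick sketch; run it as the oracle user):

        /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh start && echo "start OK"
        /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh check && echo "database is up"
        /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh stop  && echo "stop OK"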

    Read the article

  • Unable to install Dockmanager

    - by Mark Rooney
    I have Docky installed on Ubuntu 10.10 64-bit and noticed after a recent upgrade that my 'Helpers' are no longer available. After some research I found that Dockmanager is no longer installed either. I am unable to install it via the Software Centre or via the terminal using apt-get; the following error is returned:

        mark@Sonata:~$ sudo apt-get install dockmanager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          dockmanager
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/94.4kB of archives.
        After this operation, 430kB of additional disk space will be used.
        (Reading database ... 162015 files and directories currently installed.)
        Unpacking dockmanager (from .../dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb (--unpack):
         trying to overwrite '/usr/share/dockmanager/data/skype_invisible.svg', which is also in package faenza-icon-theme 0.8
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        mark@Sonata:~$

    Can anyone advise on how to fix this?
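    One frequently suggested workaround for this class of file conflict - a sketch, assuming the clash really is limited to the icon file named in the error - is to force the overwrite and then let apt tidy up:

        sudo dpkg -i --force-overwrite /var/cache/apt/archives/dockmanager_0.1.0~bzr83-0ubuntu1~10.10~dockers1_amd64.deb
        sudo apt-get install -f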

    Read the article

  • Linux Mint Maya Freezes

    - by timuçin
    Linux Mint freezes a couple of seconds after the desktop loads, in such a way that I have to cut the power in order to reboot; the mouse doesn't move, Ctrl+Alt+F1 does nothing, and I think even the hard disk stops. This doesn't happen on every start, but when it does, I have to boot into recovery mode and run the "dpkg" option, whose description is "repair broken packages" or something like that. If I don't do that and start the system normally, the same thing happens again. I have some clues that might help:

    - The first time I installed Mint, I had to install my wireless driver manually. The system didn't freeze before this, but since I installed the driver immediately after the Mint installation, that might easily be a coincidence.
    - Even so, after I discovered the dpkg trick, the first couple of times I used it I found my wireless driver uninstalled and had to reinstall it. The thing is, I can't be sure the problem is my wireless driver, because the relation is not direct enough. Still, knowing my wireless adapter might help: Realtek L 8723.

    The next thing I am going to do is to wait until it happens again and post the system log here.
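    For reference, the recovery-mode "dpkg" option is roughly equivalent to running the following from a terminal - a sketch that may also help gather evidence between freezes:

        sudo dpkg --configure -a                            # finish configuring half-installed packages
        sudo apt-get install -f                             # repair broken dependencies
        grep -iE 'error|fail' /var/log/syslog | tail -n 50  # look for clues logged around the freeze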

    Read the article

  • Copying files to a zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and TV shows currently. I thought ZFS sounded good, so after installing the PPA I created a raidz zpool:

        sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd

    and set the mountpoint to /mnt/store:

        sudo zfs set mountpoint=/mnt/store /store

    I checked it was successful - I think it was:

        sudo zfs list
        NAME    USED  AVAIL  REFER  MOUNTPOINT
        store   266K  7.16T   170K  /mnt/store

    Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered:

        sudo cp -R * /mnt/store
        cp: cannot create directory `/mnt/store/media': No space left on device

    It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. I will retry with the normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
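    Before copying, it is worth confirming the dataset is actually mounted where the copy is aimed - a diagnostic sketch, since if /mnt/store is just an empty directory on the 250GB root disk, cp will fill that disk instead:

        zfs get mounted,mountpoint store   # expect mounted=yes, mountpoint=/mnt/store
        df -h /mnt/store                   # should report the ~7.2T pool, not the root filesystem
        sudo zfs mount store               # mount the dataset if it is not mounted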

    Read the article

  • Reviewing hardware settings

    - by dino99
    I get some hardware-related errors logged in dmesg:

        oem@dub:~$ dmesg | grep ata10
        [    1.007989] ata10: PATA max UDMA/133 cmd 0xa800 ctl 0xa480 bmdma 0xa408 irq 18
        [    1.691664] ata10: prereset failed (errno=-19)
        [    1.691670] ata10: reset failed, giving up
        oem@dub:~$ dmesg | grep ata2
        [    0.990290] ata2: SATA max UDMA/133 abar m1024@0xfebfb800 port 0xfebfb980 irq 45
        [    1.688011] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [    1.688055] ata2.00: unsupported device, disabling
        [    1.688057] ata2.00: disabled

    As I understand it, this is related to my old Seagate SATA HDD and the PATA CDROM. These errors are quite new, so I suspect their settings (DMA, write cache, ...) have been modified by some upgrade. I've already used hdparm to turn the write cache off on the HDDs, but it seems I need to review some other setting(s) too. With older distros it was easy to find out about hardware settings, but now on Quantal/Precise it is deeply hidden from the average user. So I would like to know how to view/modify these settings. About the CDROM reader, the problem is different: the system doesn't identify it with a UUID, but only with ATAPI or by-id:

        oem@dub:~$ dmesg | grep 'ATAPI'
        [    1.308611] ata3.00: ATAPI: TSSTcorp CDDVDW SH-S203D, SB00, max UDMA/100
        oem@dub:~$ ls -l /dev/disk/by-id/
        ......
        lrwxrwxrwx 1 root root 9 oct.  1 06:42 ata-TSSTcorp_CDDVDW_SH-S203D -> ../../sr0
        .....
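    For viewing the current low-level drive settings, a sketch of the usual commands (device names assumed from the output above):

        sudo hdparm -I /dev/sda | less   # full identification data, including DMA modes and write cache
        sudo hdparm -W /dev/sda          # query the write-cache flag (-W0 / -W1 to change it)
        sudo lshw -class disk            # summary of detected disks and their configuration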

    Read the article

  • How do I encrypt but share a number of folders?

    - by d3vid
    I want to achieve the following functionality. Is it possible?

    - Boot up the computer (possibly via WakeOnLan or WakeOnPlan). Either be automatically logged in, log in via the login screen, or log in remotely. I change this behavior occasionally, so full-disk encryption wouldn't work for me because it requires a password at bootup (which would prevent the remote bootup options and the automatic login option). I am only interested in encrypting data, not the entire hard drive.
    - Once logged in, either a launcher/tray icon is available to launch the encryption app (preferred), or I run the encryption app from the Dash.
    - Be prompted to unlock encrypted folder(s) individually.
    - Unlocked folders are available to me and to the apps I am running (e.g. editors, SpiderOak).
    - Ideally, folders that I share with bindfs can be locked/unlocked by other users too.

    A key point is that once I have unlocked an encrypted folder, I don't want to have to think about it again. I currently achieve this via TrueCrypt (except for the last part). Unfortunately TrueCrypt isn't well integrated with Ubuntu (licensing issues prevent Debian from including it in their repo, the interface isn't quite integrated with Unity, setting it as a startup app doesn't quite work, and sharing encrypted folders isn't really part of its design). Is there an alternative to TrueCrypt that is better integrated with the Ubuntu GUI and would suit this workflow?
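    eCryptfs, which ships in Ubuntu's repositories, covers much of this per-folder workflow; a minimal sketch of protecting a single data folder with it, assuming the ecryptfs-utils package (the mount prompts interactively for a passphrase and options):

        sudo apt-get install ecryptfs-utils
        mkdir -p ~/Private-data
        sudo mount -t ecryptfs ~/Private-data ~/Private-data   # layer the encrypted view over the folder
        # ... files written here are stored encrypted on disk ...
        sudo umount ~/Private-data                             # lock the folder again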

    Read the article

  • Package manager borked with gforge

    - by Leif Andersen
    I've been having a problem with the package manager. I seem to have partially installed gforge, and it keeps giving me errors whenever I install something. (Note that the thing I'm trying to install actually does get installed, but an error is always returned.) Here it is:

        Creating /etc/gforge/httpd.conf
        Creating /etc/gforge/httpd.secrets
        Creating /etc/gforge/local.inc
        Creating other includes
        invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found.
        dpkg: error processing gforge-db-postgresql (--configure):
         subprocess installed post-installation script returned error exit status 100
        Errors were encountered while processing:
         gforge-db-postgresql
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I try to remove it with:

        sudo apt-get purge gforge-common

    I get this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          gforge-common* gforge-db-postgresql*
        0 upgraded, 0 newly installed, 2 to remove and 9 not upgraded.
        1 not fully installed or removed.
        After this operation, 5,853kB disk space will be freed.
        Do you want to continue [Y/n]?
        (Reading database ... 717305 files and directories currently installed.)
        Removing gforge-db-postgresql ...
        Replacing config file /etc/postgresql/8.4/main/pg_hba.conf with new version
        invoke-rc.d: unknown initscript, /etc/init.d/postgresql-8.4 not found.
        dpkg: error processing gforge-db-postgresql (--purge):
         subprocess installed pre-removal script returned error exit status 100
        Removing gforge-common ...
        Purging configuration files for gforge-common ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         gforge-db-postgresql
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    And it complains until I do a:

        sudo apt-get install -f

    at which point gforge is reinstalled. I'm out of ideas - does anyone have any idea what might be wrong and, more importantly, how I can fix it? Thank you.
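    A workaround often used when a maintainer script trips over a missing init script is to give it a harmless stub so the purge can complete - a sketch; the stub is assumed to be the only blocker and is deleted again afterwards:

        printf '#!/bin/sh\nexit 0\n' | sudo tee /etc/init.d/postgresql-8.4
        sudo chmod +x /etc/init.d/postgresql-8.4
        sudo apt-get purge gforge-common gforge-db-postgresql
        sudo rm /etc/init.d/postgresql-8.4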

    Read the article

  • The Oracle VM Hall of Fame

    - by Kristin Rose
    “Take me out to the ball game, take me out to the crowd. Buy me a new Oracle VM, I want my competition to be history!”... Yes, baseball is in full swing, and as we make our way to the close of the quarter, Oracle is ready to “knock it out of the park” with its newly updated release of Oracle VM 3.1. This home run of a server virtualization solution will let you deploy software faster, as it intelligently manages your entire infrastructure, from application to disk. As if that weren’t enough, the competition can’t even get on base! Have a look at the final score below:

    Partners will be hitting grand slams left and right, because management tools, application templates and single-source support have all teamed up to create one heck of a curve ball for the competition - but more importantly, an absolute first draft pick for our partners. With no license cost and an affordable enterprise support cost, crowds have gathered to see this ‘All Star’ play some hardball. Watch as Jeff Doolan, Sr. Director of Linux and Virtualization Channel Sales at Oracle, goes into more depth on how Oracle VM is a real game changer that eliminates the competition.

    Adding to the line-up are two key components of Oracle VM 3.1:

    - Enhanced ease of use: the new GUI design is engineered for faster execution of workflows, to maximize ease of use and reduce deployment time. Administrators have more time to spend at the ballpark - or to focus on the business.
    - New Oracle VM Templates: Oracle E-Business Suite 12.1.3, Oracle PeopleSoft FSCM 9.1, Oracle Enterprise Manager 12c, Oracle Linux 5.8, Oracle Linux 6.1 and Oracle Solaris 11 - adding to the 100+ existing templates that are ready for download. Oracle VM Templates are pre-configured as an entire stack, including OS and application, fully tested, production-ready and certified by Oracle.

    For more information on Oracle’s newest player, Oracle VM 3.1, read this press release or visit our technology information page.

    Batter Up,
    The OPN Communications Team

    Read the article

  • Error with APE Server Installation

    - by sadmicrowave
    I was trying to install APE Server from the .deb file at the APE Server homepage (www.ape-project.org) and ran into an error, so I wanted to remove the installation and reinstall. I did a sudo apt-get remove ape-server, which ran successfully but left ape-server folders in my /etc/ and /etc/init.d locations. Being a newcomer to Linux, I foolishly deleted those folders manually. Now when I reinstall ape-server those folders don't get recreated, and therefore I cannot run /etc/init.d/ape-server [option], because the file is not found. When I try to sudo apt-get purge (or remove) ape-server, I get the following:

        sudo apt-get purge ape-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          ape-server*
        0 upgraded, 0 newly installed, 1 to remove and 92 not upgraded.
        1 not fully installed or removed.
        After this operation, 1,753kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 43924 files and directories currently installed.)
        Removing ape-server ...
        invoke-rc.d: unknown initscript, /etc/init.d/ape-server not found.
        dpkg: error processing ape-server (--purge):
         subprocess installed pre-removal script returned error exit status 100
        update-rc.d: /etc/init.d/ape-server: file does not exist
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         ape-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    My question is: how do I remove all of the ape-server installation packages that were installed, so I can reinstall from scratch?
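    Since it is the pre-removal script that fails, one common (if blunt) approach is to neutralize it before purging - a sketch, assuming nothing else on the system needs that script (the path is dpkg's standard maintainer-script location):

        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/ape-server.prerm
        sudo chmod +x /var/lib/dpkg/info/ape-server.prerm
        sudo dpkg --purge ape-server
        sudo apt-get install ape-server   # then reinstall from scratch if desired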

    Read the article

  • Very slow write access to SSD disks on some Asus P8Z77 motherboards

    - by lenik
    I have an Asus P8Z77-V LK motherboard that ran Mint 13 (based on Ubuntu 12.04) just perfectly, but recently I tried to install Mint 17 and noticed abysmal write performance. The write speed on the SSD was about 1.5MB/sec, when it is supposed to be in the 150-250MB/sec range. For write testing I used dd if=/dev/zero of=/dev/sda bs=10M count=10 while booted from the LiveCD. I also tested the read speed with hdparm -tT /dev/sda and got about 440MB/sec - that's normal. As far as I can tell, the read performance has not degraded at all and is not an issue here. Since I had a few different SSD disks and a few motherboards, I tested and tested, and here are the results:

    - Asus P8H77: works fine with Mint 13, very slow write speed starting from Mint 14.
    - Asus P8Z77-V LK: works with Mint 13, very slow write speed starting from Mint 14.
    - Asus P8Z77-V PRO: works with Mint 13, and works just fine with Mint 14, 15, 16 and 17.

    The only difference between the "PRO" version and the others is that it has an extra SATA controller onboard (in addition to the Z77 chipset SATA controller), providing 2 extra SATA ports. SSD disks work fine with the "PRO" version when connected to the native SATA ports as well as to the ports provided by the extra SATA controller, so this does not look like a hardware issue. As far as I can tell, something changed in the kernel going from 3.2 to 3.5 that affects the detection of the onboard SATA controller on Asus P8*77 motherboards and screws up the write speed for SSD drives. Could anyone shed some light on how to fix this issue or, possibly, give a pointer to a more suitable place to ask this question?

    Read the article

  • File backup utility with incremental backups that keeps the backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them has two options:

    - full backup - not something to use frequently
    - incremental backup - seems right, but there's one thing about it:

    An incremental backup builds on a base of a full backup, backing up only those files that were created or changed. The thing is, after some time you've got a lot of unwanted files from the old backups bloating your backup device. Also, if you accidentally deleted your full (first) backup, the differential backups would be corrupted (you wouldn't be able to restore them).

    What I'm looking for is a program that backs up files simply by copying them. For each file it would check the backup device:

    - if the device contains the file (unchanged), proceed to the next file (we've got the current version backed up)
    - if not, copy the file to the backup device
    - if the device contains a file that is no longer on our disk, delete it from the backup device

    Is there any such utility that works this way? If not, do you have any hints on how to back up fairly big amounts of data (around 20GB) quite frequently with incremental backups, without being exposed to those unwanted effects of the backup size ballooning?
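    The behaviour described - copy new and changed files, skip unchanged ones, and delete files that have disappeared from the source - is exactly what a mirror-style rsync does. A sketch, with both paths as placeholders:

        rsync --archive --verbose --delete /home/user/data/ /media/backup/data/
        # Add --dry-run first to preview what would be copied or deleted.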

    Read the article

  • Starcraft II on Wine - Ubuntu 12.04

    - by Robert Segerson
    I know this might seem like a repeat question, but I've literally looked over 100 threads on how to get SC2 to work on Ubuntu 12.04 through Wine, and none have worked. I installed Wine fresh today and inserted my purchased SC2 disc. When I try to open the installer (installer.exe) with Wine, an error appears saying: "No installer data could be found. If this problem persists, please contact Blizzard Technical Support." I searched for solutions and was directed to the following source: http://ubuntuforums.org/showthread.php?t=1435314. I followed the directions until I got to ls. I tried various combinations (ls installer.exe, ls /home/rothic/Desktop/Installer.exe, etc.); all come back with "no such file or directory". I'm not sure what to do. The next step would be to replace "starcraft_installer" with the StarCraft installer file, which I'm not sure how to do (very new to Linux). I tried WINEPREFIX=~/.wine_starcraft/ wine starcraft_installer and it says wine: cannot find L"C:\windows\system32\starcraft_installer.exe", despite the installer being on the desktop as advised. Any suggestions on where to go from here?
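    Wine resolves relative paths against its working directory, so the usual pattern is to cd to wherever the installer actually lives first - a sketch, with the desktop path assumed from the question (note that file names are case-sensitive):

        cd ~/Desktop
        ls -l *.exe                                        # confirm the exact installer file name
        WINEPREFIX=~/.wine_starcraft wine ./Installer.exe  # capital I, if that is what ls reports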

    Read the article

  • I have a bad install of Windows on another hard drive and it won't let me install a fresh copy. How do I fix it in Ubuntu 12.04?

    - by Dana LaBerge
    Basically, there was a security issue in the drivers for my graphics card. It was a 64-bit card and I installed 32-bit Windows. Apparently, before SP1 was available (which fixed that issue), 6 trojan horses got in. They stopped SP1 from installing. After going through the wringer several times, I finally talked to a person who knew the problem. It was something about how the drivers transferred between the 32-bit OS and the 64-bit card that left me open. Ever since, my computer has been slow and has had weird issues - tinypic would never load, and certain programs wouldn't install. So I eventually talked to the guy who knew the problem, and he took the reins and ran some diagnostics. He told me that to fix it I have to format the hard drive and do a fresh install. I'm okay with that, because I was planning on it anyway to upgrade to the 64-bit version. The problem is, how do I do that? I have the disc to install the new copy, but when I go to install it, it tells me I can't and to check the log file. However, I don't know where that log file is, and it wiped out my install of Windows. How do I find the file, and, as a different route to the same goal, how do I zero out the drive from Ubuntu 12.04? (I installed the 64-bit version just the other day.)
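    Zeroing a drive from Ubuntu is a one-liner with dd - a sketch, where /dev/sdX is a placeholder for the Windows drive; double-check the target first, because this destroys everything on it:

        sudo fdisk -l                           # identify the correct target drive by size and partitions
        sudo dd if=/dev/zero of=/dev/sdX bs=4M  # overwrite the entire drive with zeros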

    Read the article

  • Rendering large, high-poly meshes

    - by Aurus
    Consider a huge terrain with a lot of polygons. To render this terrain, I thought of the following techniques:

    - Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and stuff that simply won't work with height-maps.
    - Using voxels: yes, but I think that would be too much, since I don't even want to support changing terrain.
    - Splitting into multiple chunks and doing some sort of LOD with the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less.
    - Precomputing the same mesh in lower-poly versions (like Mudbox does) and, depending on the distance, rendering one of those meshes: graphics memory is limited, and uploading only the visible chunks won't solve that problem, since the traffic would be too high.

    IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position (no problem). The player walks straight forward. Now we may have to swap a low-poly chunk for its high-poly version, so we remove the low-poly chunk and load the high-poly chunk (already too much traffic, I think).

    I am not very experienced in graphics programming, and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require? I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery” which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages to Windows Azure Virtual Machines is that it is stored in a standard Hyper-V format – with the base hard-disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this then leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system, and when you re-start that system it asks a few details on what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, either for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx   But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.  
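    For reference, the generalize step mentioned above is typically run inside the VM like this - a sketch of the standard invocation; verify the flags against the SYSPREP documentation linked above before capturing the VHD:

        REM Run from an elevated prompt inside the Windows VM.
        %WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown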

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature on this blog as well, even though it has been a recommended best practice for some time. When a lost write occurs, an I/O subsystem acknowledges the completion of the block write even though the write I/O did not occur in persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures, or volume manager errors. In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write while in fact the write did not occur in persistent storage. When a primary database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database).

    Links:
    - MOS note 1302539.1: "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
    - Demo
    - MAA Best Practices

    Rene Kundersma
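    Lost-write detection is driven by the DB_LOST_WRITE_PROTECT initialization parameter. A minimal sketch of enabling it - assuming an spfile is in use - run on both the primary and the standby:

        echo "ALTER SYSTEM SET DB_LOST_WRITE_PROTECT=TYPICAL SCOPE=BOTH;" | sqlplus -s / as sysdba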

    Read the article

  • DVD drive not reading any discs

    - by Manywa R.
    I'm running Ubuntu 12.10 on a Toshiba Satellite Pro A120, and my built-in DVD drive is not reading any CD/DVD/DVD-RW that I try to play. The drive seems to be mounted and recognized. Output of sudo lshw:

        ...
        *-cdrom
             description: DVD-RAM writer
             product: DVD-RAM UJ-841S
             vendor: MATSHITA
             physical id: 1
             bus info: scsi@1:0.0.0
             logical name: /dev/cdrom
             logical name: /dev/cdrw
             logical name: /dev/dvd
             logical name: /dev/dvdrw
             logical name: /dev/sr0
             version: 1.40
             capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
             configuration: ansiversion=5 status=ready
           *-medium
                physical id: 0
                logical name: /dev/cdrom

    The disc seems to start but then hangs, with the DVD drive LED solid amber. The output of dmesg:

        jun@jun-Satellite-Pro-A120:~$ dmesg | grep "sr0"
        [679396.184901] sr 1:0:0:0: [sr0] Unhandled sense code
        [679396.184910] sr 1:0:0:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [679396.184920] sr 1:0:0:0: [sr0] Sense Key : Hardware Error [current]
        [679396.184931] sr 1:0:0:0: [sr0] Add. Sense: Id CRC or ECC error
        [679396.184942] sr 1:0:0:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        [679396.184965] end_request: I/O error, dev sr0, sector 0
        [679396.184975] Buffer I/O error on device sr0, logical block 0
        [679396.184984] Buffer I/O error on device sr0, logical block 1
        [679396.184990] Buffer I/O error on device sr0, logical block 2
        [679396.184996] Buffer I/O error on device sr0, logical block 3
        [679396.185002] Buffer I/O error on device sr0, logical block 4
        [679396.185008] Buffer I/O error on device sr0, logical block 5
        [679396.185014] Buffer I/O error on device sr0, logical block 6
        [679396.185020] Buffer I/O error on device sr0, logical block 7
        [679396.185031] Buffer I/O error on device sr0, logical block 8
        [679396.185038] Buffer I/O error on device sr0, logical block 9
        [679396.185070] sr 1:0:0:0: [sr0] unaligned transfer
        [679396.185108] sr 1:0:0:0: [sr0] unaligned transfer

    Can someone help me through this? I'm tired of carrying around an external DVD drive. Thanks.

    Read the article

  • mdadm starts resync on every boot

    - by Anteru
    For a few days now (and I'm positive it started shortly before I upgraded my server from 13.04 to 13.10), my mdadm array has been resyncing on every boot. In the syslog, I get the following output:

        [    0.809256] md: linear personality registered for level -1
        [    0.811412] md: multipath personality registered for level -4
        [    0.813153] md: raid0 personality registered for level 0
        [    0.815201] md: raid1 personality registered for level 1
        [    1.101517] md: raid6 personality registered for level 6
        [    1.101520] md: raid5 personality registered for level 5
        [    1.101522] md: raid4 personality registered for level 4
        [    1.106825] md: raid10 personality registered for level 10
        [    1.935882] md: bind<sdc1>
        [    1.943367] md: bind<sdb1>
        [    1.945199] md/raid1:md0: not clean -- starting background reconstruction
        [    1.945204] md/raid1:md0: active with 2 out of 2 mirrors
        [    1.945225] md0: detected capacity change from 0 to 2000396680192
        [    1.945351] md: resync of RAID array md0
        [    1.945357] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
        [    1.945359] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
        [    1.945362] md: using 128k window, over a total of 1953512383k.
        [    2.220468] md0: unknown partition table

    I'm not sure what's up with that "detected capacity change"; looking at some old logs, it appeared earlier as well without a resync right afterwards. In fact, yesterday I let the resync run to completion and rebooted, and it didn't resync; but today it is resyncing again. For instance, yesterday I got:

        [    1.872123] md: bind<sdc1>
        [    1.950946] md: bind<sdb1>
        [    1.952782] md/raid1:md0: active with 2 out of 2 mirrors
        [    1.952807] md0: detected capacity change from 0 to 2000396680192
        [    1.954598] md0: unknown partition table

    So it seems the RAID array does not get marked as clean on every shutdown? How can I troubleshoot this? The disks themselves are both fine; SMART reports no errors, everything OK.
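    A sketch of the checks I would start with - confirming the array actually reaches a clean state before the reboot (device names taken from the log above):

        cat /proc/mdstat                                 # is the resync finished?
        sudo mdadm --detail /dev/md0 | grep -i state     # should report 'clean' before rebooting
        sudo mdadm --examine /dev/sdb1 /dev/sdc1 | grep -iE 'state|events'   # member states and event counts should match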

    Read the article

  • Can't boot into Windows 7/Ubuntu 12.04 after running boot-repair

    - by Rini
    I installed Ubuntu 12.04 on my Sony Vaio E series laptop (with preinstalled Windows 7), following the instructions here: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/. Everything went well, and I was able to boot into Windows after the complete installation of Ubuntu. Then, following instructions on the web, I tried to add Ubuntu to my BIOS using EasyBCD (but forgot to add the Windows 7 entry). As a result, I lost the Windows 7 OS and couldn't boot into either OS; I then successfully repaired Windows 7 using the recovery CD. Now my problem is that I can't reinstall Ubuntu 12.04 using the live CD: it halts every time at the disk partitioning step with the error:

        "ubi-partman crashed"
        "ubi-partman failed with exit code 141. Further information may be found in /var/log/syslog.
         Do you want to try running this step again before continuing? If you do not, your installation
         may fail entirely or may be broken."

    Any choice to continue results in the same error. After that, following some posted solutions, I ran boot-repair commands in a terminal (in "Try Ubuntu" mode) and got the following URL: http://paste.ubuntu.com/1206434/. Now, after a restart, I can't boot into either Windows or Ubuntu. Even attempts to run Windows repair fail, and I get the message "No operating system found". I don't know what went wrong after running boot-repair. Please help me solve this issue. Thanks and regards, R Shukla

    Read the article

  • Webcast Tomorrow: Securing the Cloud for Public Sector

    - by Darin Pendergraft
    Securing the Cloud for Public Sector

    Click here to register for the live webcast.

    Cloud computing offers government organizations tremendous potential to enhance public value by helping organizations increase operational efficiency and improve service delivery. However, as organizations pursue cloud adoption to achieve the anticipated benefits, a common set of questions has surfaced: "Is the cloud secure? Are all clouds equal with respect to security and compliance? Is our data safe in the cloud?"

    Join us December 12th for a webcast, part of the "Secure Government Training Series," to get answers to your pressing cloud security questions and learn how best to secure your cloud environments. You will learn about a comprehensive set of security tools designed to protect every layer of an organization's cloud architecture, from application to disk, while ensuring high levels of compliance, risk avoidance, and lower costs. Discover how to control and monitor access, secure sensitive data, and address regulatory compliance across cloud environments by:

    - providing strong authentication, data encryption, and (privileged) user access control to ensure that information is only accessible to those who need it
    - mitigating threats across your databases and applications
    - protecting applications and information - no matter where they are - at rest, in use and in transit

    For more information, access the Secure Government Resource Center or, to speak with an Oracle representative, please call 1.800.ORACLE1.

    LIVE Webcast: Securing the Cloud for Public Sector
    Date: Wednesday, December 12, 2012
    Time: 2:00 p.m. ET

    Visit the Secure Government Resource Center for information on enterprise security solutions that help government safeguard information, resources and networks.

    Copyright © 2012, Oracle. All rights reserved.

    Read the article

  • "Size mismatch" apt error when installing openJDK

    - by siddanth
    When I try to install openjdk-7-jre-headless, I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62
          liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-lib tzdata tzdata-java
        Suggested packages:
          default-jre equivs cups-common liblcms2-utils libnss-mdns sun-java6-fonts
          ttf-dejavu-extra ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-sazanami-gothic
          ttf-kochi-gothic ttf-sazanami-mincho ttf-kochi-mincho ttf-wqy-microhei
          ttf-wqy-zenhei ttf-indic-fonts-core ttf-telugu-fonts ttf-oriya-fonts
          ttf-kannada-fonts ttf-bengali-fonts
        The following NEW packages will be installed:
          ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62
          liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless
          openjdk-7-jre-lib tzdata-java
        The following packages will be upgraded:
          tzdata
        1 upgraded, 12 newly installed, 0 to remove and 122 not upgraded.
        Need to get 41.2 MB/43.5 MB of archives.
        After this operation, 64.0 MB of additional disk space will be used.
        Get:5 http://in.archive.ubuntu.com/ubuntu/ oneiric/main java-common all 0.42ubuntu2 [62.4 kB]
        Fetched 41.1 MB in 4min 5s (167 kB/s)
        Failed to fetch http://in.archive.ubuntu.com/ubuntu/pool/main/j/java-common/java-common_0.42ubuntu2_all.deb  Size mismatch
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    I am unable to solve this. Am I missing something? Please help me out.
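    A size mismatch usually means a corrupt, partially downloaded file in apt's cache or a mirror caught mid-sync; the standard first attempt - a sketch - is to flush the cache and retry:

        sudo apt-get clean     # discard partially downloaded or corrupt .deb files
        sudo apt-get update    # refresh the package lists
        sudo apt-get install openjdk-7-jre-headless
        # If the mismatch persists, switching from in.archive.ubuntu.com to another
        # mirror in Software Sources often resolves it.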

    Read the article

  • Failed to upgrade from 10.10 to 11.04

    - by Ana Solís
    I was using Natty for a while, but while updating to the new release there was a blackout; the update wasn't able to finish, and Ubuntu failed to load after that. I thought, no worries - I have my files backed up, and I still have the 10.10 CD I used to install Ubuntu in the first place. So I installed it again, with the plan of using the Update Manager to get to the current release... except I get this error:

        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/natty/main/source/Sources.gz  404 Not Found
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/natty/main/binary-amd64/Packages.gz  404 Not Found
        E:Some index files failed to download, they have been ignored, or old ones used instead.

    My internet connection is just fine, seeing as I'm able to post this, but I don't know what else to do. I tried downloading Quantal on another computer and putting it on a DVD (since it won't fit on a CD...), and the machine fails to load it - it skips over the DVD and goes right back to Maverick... (It's not a faulty disc; it installed Ubuntu just fine on a friend's computer.)
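    Those 404s are what you see once a release's archives have been moved off the main mirrors at end of life. A sketch of repointing an installed system at the old-releases archive (stock sources.list entries assumed; the extras.ubuntu.com lines are simply disabled, since that archive was not migrated):

        sudo sed -i -e 's|//[a-z.]*archive\.ubuntu\.com|//old-releases.ubuntu.com|g' \
                    -e 's|//security\.ubuntu\.com|//old-releases.ubuntu.com|g' /etc/apt/sources.list
        sudo sed -i '/extras\.ubuntu\.com/ s/^deb/# deb/' /etc/apt/sources.list
        sudo apt-get update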

    Read the article
