Search Results

Search found 16940 results on 678 pages for 'disk drive'.

Page 528/678 | < Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >

  • Log shipping and shrinking transaction logs

    - by DavidWimbush
    I just solved a problem that had me worried for a bit. I'm log shipping from three primary servers to a single secondary server, and the transaction log disk on the secondary server was getting very full. I established that several primary databases had unused space that resulted from big, one-off updates so I could shrink their logs. But would this action be log shipped and applied to the secondary database too? I thought probably not. And, more importantly, would it break log shipping? My secondary databases are in a Standby / Read Only state so I didn't think I could shrink their logs. I RTFMd, Googled, and asked on a Q&A site (not the evil one) but was none the wiser. So I was facing a monumental round of shrink, full backup, full secondary restore and re-start log shipping (which would leave us without a disaster recovery facility for the duration). Then I thought it might be worthwhile to take a non-essential database and just make absolutely sure a log shrink on the primary wouldn't ship over and occur on the secondary as well. So I did a DBCC SHRINKFILE and kept an eye on the secondary. Bingo! Log shipping didn't blink and the log on the secondary shrank too. I just love it when something turns out even better than I dared to hope. (And I guess this highlights something I need to learn about what activities are logged.)
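
    A minimal sketch of the test described above, assuming a hypothetical primary database named SalesDB with a log file logically named SalesDB_log (run the shrink on the primary, then watch the log file size on the secondary after the next backup/copy/restore cycle):

      # On the primary: check log space use, then shrink the log file (target size in MB).
      sqlcmd -S PRIMARY_SERVER -Q "DBCC SQLPERF(LOGSPACE);"
      sqlcmd -S PRIMARY_SERVER -d SalesDB -Q "DBCC SHRINKFILE (SalesDB_log, 1024);"

      # On the secondary (standby/read-only databases can still be queried):
      sqlcmd -S SECONDARY_SERVER -Q "SELECT name, size/128 AS size_mb FROM SalesDB.sys.database_files;"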

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarsen yesterday. I had lots of fun - and I hope you enjoyed it, too :-) The slides, as always, can be downloaded from: http://apex.oracle.com/folien - use the Schluesselwort/Keyword: upgrade112. And thanks to all those of you sending feedback regarding "traget/destination" (will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010 - and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html Thanks for this hint!!!

    Regarding the open questions:

    Will there be PSUs available for Intel Solaris? PSUs will be made available on nearly all platforms, including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location.

    Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4? I tried to verify this and could find neither a bug entry nor any documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences.

    Service Request escalation by telephone? Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 (the note was updated on 24-FEB-2010). See the telephone number for Oracle Support to request an escalation here: http://www.oracle.com/support/contact.html
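
    If you want to double-check the commit behaviour on your own instance, a minimal sketch (assumes a local instance and SYSDBA access; an empty/unset value means the defaults IMMEDIATE,WAIT apply):

      echo "show parameter commit_write" | sqlplus -S / as sysdba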

    Read the article

  • Automated “ubuntu-12.04.1-server-amd64” OS installation on physical machine

    - by user285336
    We are using a physical server and are in the process of automating an “ubuntu-12.04.1-server-amd64” OS installation on it. There are two HDDs for OS installation and they are in a RAID1 relationship; this setup was done through the BIOS. The kickstart configuration file looks like this:

      #Generated by Kickstart Configurator
      #platform=AMD64 or Intel EM64T

      #System language
      lang en_US
      #Language modules to install
      langsupport en_US
      #System keyboard
      keyboard us
      #System mouse
      mouse
      #System timezone
      timezone Asia/Dili
      #Root password
      rootpw --iscrypted $1$Yl1QJyta$KzIT.kq3i9E5XaiQKcUJn/
      #Initial user
      user ankit --fullname "Ankit" --iscrypted --password $1$c6Yflpea$pi1QQ59/jgywmGwBv25z3/
      #Reboot after installation
      reboot
      #Use text mode install
      text
      #Install OS instead of upgrade
      install
      #Use Web installation
      url --url my_repo_location
      #System bootloader configuration
      bootloader --location=mbr
      #Clear the Master Boot Record
      zerombr yes
      #Partition clearing information
      clearpart --all --initlabel
      #Disk partitioning information
      part /boot --fstype ext4 --size 100 --ondisk sda
      part / --fstype ext4 --size 10000 --ondisk sda
      part /var --fstype ext4 --size 10000 --ondisk sda
      part swap --size 1024 --ondisk sdb
      #System authorization infomation
      auth --useshadow --enablemd5
      #Network information
      network --bootproto=dhcp --device=eth0
      #Firewall configuration
      firewall --enabled --trust=eth0 --http --ftp --ssh --telnet --smtp
      #X Window System configuration information
      xconfig --depth=8 --resolution=640x480 --defaultdesktop=GNOME

    But I am getting the error below:

      No root file system is defined

    Please advise: do we need to modify the kickstart configuration file? Any help in this regard will be very helpful for us. The automated Ubuntu OS installation succeeds in a virtual machine (VM) with the above ks.cfg (kickstart configuration file) but fails on the physical machine. Please suggest on this and, if possible, provide a new ks.cfg that resolves the problem. Thanks & Regards, Rajesh Prasad
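
    One possible cause worth checking (an assumption, not a confirmed fix): with BIOS/fake RAID the mirrored pair is often exposed to the installer as a single device-mapper node rather than as sda and sdb, so the --ondisk directives match no disk and no root filesystem gets defined. From a console on the physical machine during the install you can check what the installer actually sees:

      ls /dev/mapper/        # fakeRAID sets usually appear here, e.g. isw_xxxx_Volume0
      cat /proc/partitions   # block devices visible to the installer

      # Hypothetical ks.cfg change if the array shows up as /dev/mapper/isw_xxxx_Volume0:
      #   part /boot --fstype ext4 --size 100   --ondisk mapper/isw_xxxx_Volume0
      #   part /     --fstype ext4 --size 10000 --ondisk mapper/isw_xxxx_Volume0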

    Read the article

  • Getting More Out of UPK

    - by [email protected]
    Are you getting the most out of UPK? Remember the idea of streamlining your content creation efforts? How about the concept of collaboration during development? How are you leveraging the System Process Documents or Test Scripts? Is your training team benefiting from the creation of process documentation? Is UPK linked into the help menu of your application, or even at the browser level (Smart Help)? Many customers underutilize UPK. Some customers just think of UPK as a training creation solution or just for creating documentation. To get the full value of UPK you need to first evaluate how the UPK developer is installed: single user or multi user? If you have more than two UPK developers, there is a significant benefit to installing UPK in multi-user mode. This helps drive collaboration and automatic version control, and better facilitates the workflow and state features through customized views for the developers. Has your organization installed Usage Tracking? How are the outputs deployed, and for how many applications? If these questions have you thinking about your overall usage of UPK and you see significant room for improvement by using more of what UPK has to offer, then it could be time for a UPK Health Check. Contact your UPK Sales Consultant to help understand your environment and how to maximize the value of UPK and start getting more out of the product.

    Read the article

  • How to fix "apt-get upgrade" errors?

    - by mohamad farid bin abdullah
    I get these errors when I try to upgrade the packages installed on my Ubuntu system:

      m@m-desktop ~ $ sudo apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      After this operation, 0B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up drbd8-source (2:8.3.7-1ubuntu2.3) ...
      Removing old drbd8-8.3.7 DKMS files...
      ------------------------------
      Deleting module version: 8.3.7
      completely from the DKMS tree.
      ------------------------------
      Done.
      Loading new drbd8-8.3.7 DKMS files...
      First Installation: checking all kernels...
      Building only for 2.6.35-22-generic
      Building for architecture i386
      Building initial module for 2.6.35-22-generic
      Error! Bad return status for module build on kernel: 2.6.35-22-generic (i386)
      Consult the make.log in the build directory /var/lib/dkms/drbd8/8.3.7/build/ for more information.
      dpkg: error processing drbd8-source (--configure):
       subprocess installed post-installation script returned error exit status 10
      dpkg: dependency problems prevent configuration of drbd8-utils:
       drbd8-utils depends on drbd8-source; however:
        Package drbd8-source is not configured yet.
      dpkg: error processing drbd8-utils (--configure):
       dependency problems - leaving unconfigured
      No apport report written because the error message indicates its a followup error from a previous failure.
      Errors were encountered while processing:
       drbd8-source
       drbd8-utils
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      m@m-desktop ~ $
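
    The root failure here is the DKMS module build, not apt itself. A sketch of how to dig further (paths taken from the transcript above):

      # Read the actual compiler error from the failed build:
      less /var/lib/dkms/drbd8/8.3.7/build/make.log

      # Make sure headers for the running kernel are installed:
      sudo apt-get install linux-headers-$(uname -r)

      # If drbd8 isn't actually needed, removing it unblocks dpkg:
      sudo apt-get remove --purge drbd8-source drbd8-utils
      sudo apt-get -f install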

    Read the article

  • How to Use the Avira Rescue CD to Clean Your Infected PC

    - by The Geek
    When you’ve got a PC completely infected with viruses, sometimes it’s best to reboot into a rescue disc and run a full virus scan from there. Here’s how to use the Avira Rescue CD to clean an infected PC. We’ve previously covered how to clean an infected PC using the BitDefender or Kaspersky rescue disks, and loads of readers have written in saying thanks, and reporting that they were able to clean their PC easily. Be sure and check out our previous articles on the subject: How to Use the BitDefender Rescue CD to Clean Your Infected PC; How to Use the Kaspersky Rescue Disk to Clean Your Infected PC. Otherwise, keep reading for how it all works with Avira, a well-respected anti-virus solution.

    Read the article

  • Friday Fun: Vehicles

    - by Mysticgeek
    Friday has finally arrived and it’s time to ignore spreadsheets and TPS reports and waste time playing a flash game. Today we take a look at the fun puzzle game called Vehicles. This is a fun game with cartoon style graphics where you navigate vehicles to solve different puzzles, clicking on them with your mouse to guide them through each situation. You’re given basic instructions on how to complete each level, and you’ll need to strategically place the vehicles so you can knock the black vehicles off the screen. As you progress up the levels they become more challenging, and if you need to, you can restart at any time. Since it’s Friday and you’re sick of your job, Vehicles is a fun puzzle game to keep your mind off the boredom of work until it’s time for weekend freedom. Play Vehicles at FreeWebArcade.

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content. It's available for replay along with a download of the slidedeck. Look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database. This can greatly improve your search performance, and it applies to both the Oracle Text Search and DATABASE.FULLTEXT search methods. Peter describes how a collection can become fragmented over time as content is added, updated, and deleted. Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to run automatically:

      begin
        ctx_ddl.optimize_index('FT_IDCTEXT1','FULL', parallel_degree => '1');
      end;
      /

    When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%. The knowledgebase article On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and how to schedule the procedure to run automatically. While the article mentions scheduling the job weekly, Peter says he is now recommending it be run daily, especially on more active systems. And just as a reminder: be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time. We recently had a customer complain of slow application performance, and it was discovered that the database was starving for memory. So it's always helpful to keep a watchful eye on your database.
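
    A sketch of putting that call on a daily schedule with DBMS_SCHEDULER (the job name and 02:00 run time are assumptions; run it as a user with the text-index privileges your DBA prescribes, e.g. via sqlplus user/pass @optimize_ft.sql):

      -- optimize_ft.sql
      begin
        dbms_scheduler.create_job(
          job_name        => 'OPTIMIZE_FT_INDEX',
          job_type        => 'PLSQL_BLOCK',
          job_action      => 'begin ctx_ddl.optimize_index(''FT_IDCTEXT1'',''FULL'', parallel_degree => ''1''); end;',
          repeat_interval => 'FREQ=DAILY;BYHOUR=2',
          enabled         => TRUE);
      end;
      /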

    Read the article

  • New Agile PLM Customer Testimonial Videos on YouTube

    - by Kerrie Foy
    Have you visited the Oracle Agile PLM channel on YouTube recently? There are many new video testimonials, and even an overview of how Oracle Agile PLM helps companies drive powerful corporate performance by maximizing product profitability. Here are a few highlights...

    - Oracle Agile PLM: Proven Results - Watch an overview of the transformative success our customers have realized using Oracle Agile PLM applications to take their company to the next level.
    - Alcatel-Lucent Ups Competitive Edge with Oracle Agile PLM and Oracle EBS - Brad Magnani of Alcatel-Lucent Enterprise describes how the Oracle Agile PLM and Oracle EBS solutions help speed time to market, eliminate wasted cash, secure data, and ensure product quality, enabling innovation and success.
    - Herbalife: an Oracle Agile PLM Customer Video - Filmed at OpenWorld 2010. Listen to Gary Swanson of Herbalife describe how his organization realizes powerful new insight into product information with Agile PLM Business Intelligence (BI).
    - Tyson: an Oracle Agile PLM for Process Customer Video - Filmed at OpenWorld 2010, featuring Kim Glenn
    - Tyson: an Oracle Agile PLM for Process Customer Video - Filmed at OpenWorld 2010, featuring Amber Woods

    We are so proud to have two testimonials from Tyson Foods! Tune in to each to see the unique perspectives on Agile PLM for Process at Tyson from different organizational views, demonstrating Oracle's ability to enable enterprise-wide PLM implementations delivering superior results. Take a moment to view these interesting customer testimonials to learn how Oracle Agile PLM applications are helping companies succeed. Subscribe to our YouTube channel today!

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For the upcoming posts it seemed a good idea to dedicate some time to basic cluster concepts and theory. This post omits a lot of details that would explode the article size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise complete definitions of cluster, cluster agent, quorum, voting, fencing and the split brain condition, so the following is more of an explanation. Here we go.

    -------- Cluster, HA, failover, switchover, scalability --------

    An attempted definition of a cluster: a cluster is a set of (2+) server nodes dedicated to keeping application services alive, communicating with each other through the cluster software/framework, testing and probing the health status of server nodes/services, and keeping the application services available through quorum-based decisions and switchover/failover techniques. That is, should a node that runs a service unexpectedly lose functionality/connection, the other ones take over and run the services, so that availability is guaranteed. Providing availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster. At this point we have to add that this defines a HA cluster, a High-Availability cluster, where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate cluster nodes, serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way, or a database running in an active-active RAC setup.

    -------- Cluster architecture, interconnect, topologies --------

    Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects. How many cluster nodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two cluster nodes. There are several more topologies; clicking the image above will take you to the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster. Where shall these cluster nodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to avoid only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case of course you should place them at least in different fire zones, or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the cluster nodes being several kilometers away from each other. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal. What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.

    -------- Cluster resource types, agents, resources, resource groups --------

    So how does the cluster manage my applications? The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application state is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop on node A, start on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle Weblogic, Zones, LDoms, NFS, DNS, etc.

    We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups and switch these groups together from one node to another. To stick to the example above: to move an Oracle DB service from one node to another, you switch the group between nodes, and the agents of the cluster resources in the group will do the following.

    On node A:
    - shut down the DB
    - unconfigure the LogicalHost IP the DB listener listens on
    - unmount the filesystem

    Then, on node B:
    - mount the FS
    - configure the IP
    - start up the DB

    -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------

    How do the cluster nodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; cluster nodes think very much alike. Still, every action in a production system needs to be agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is gone/down, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!". The problem is, as I have already mentioned, cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster each node has only 50% of the votes, that is, neither alone is allowed to run a cluster. This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster. For this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split brain condition), both nodes start to race and try to reserve the quorum device for themselves. They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1). The one that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI-reserves it) wins the right to build/run a cluster; the other one - realizing it was late - panics/reboots to ensure the cluster configuration stays consistent.

    Losing the interconnect doesn't only endanger the availability of services; it also endangers the consistency of the cluster configuration. Just imagine node A being down, and during that time the cluster configuration changes. Now node B goes down and node A comes up. It isn't up to date on the cluster configuration changes, so it will refuse to start a cluster, since that would lead to cluster amnesia: the configuration changed, but the cluster would run with an older cluster configuration repository state - as if it had forgotten about the changes.

    Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that isn't part of, or can't currently join, the cluster cannot access the devices. This procedure is called fencing. It usually happens to storage LUNs via SCSI reservation.

    Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC and that gets hit by a meteor, you lose one cluster node, which isn't a problem - but you also lose your quorum, and the south cluster node can't keep the cluster running for lack of votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at or to host a third cluster node. Otherwise, lacking a majority, if you lose the site that had your quorum, you lose the cluster.

    Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools, upgrade procedures - should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation. Oh, and as an additional source of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
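
    As a small illustration of the switchover described above, the Solaris Cluster CLI can drive it by hand (a sketch; the resource group and node names are hypothetical):

      # Show all resource groups and the nodes they currently run on:
      clresourcegroup status

      # Move the group holding the DB, IP and storage resources to nodeB;
      # the agents stop each resource on nodeA and start it on nodeB in order:
      clresourcegroup switch -n nodeB oracle-rg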

    Read the article

  • What's consuming HDD space

    - by Umair Mustafa
    I have a single partition of 92GB in which I installed Ubuntu 12.04, and for some unknown reason a message pops up saying that I only have 1GB of HDD space left. I ran the command sudo du -hscx * on / and on /home. /home gave me this result:

      4.0K  C:\nppdf32Log\debuglog.txt
      0     convertedvideo.avi
      176M  Desktop
      16K   Documents
      169M  Downloads
      4.0K  examples.desktop
      17M   file.txt
      4.0K  Music
      984K  Pictures
      4.0K  Public
      320K  Red Hat 6.iso
      2.5M  syslog-ng_3.3.6.tar.gz
      4.0K  Templates
      8.0K  terminal.png
      1.2M  Thunderbird Attachments
      698M  ubuntu10.04LTS.iso
      16K   Ubuntu One
      4.0K  Untitled Folder
      4.0K  Videos
      21G   VirtualBox VMs
      22G   total

    And / gave me this result:

      81G   home
      0     initrd.img
      0     initrd.img.old
      833M  lib
      16K   lost+found
      68K   media
      4.0K  mnt
      260M  opt
      du: cannot access `proc/8339/task/8339/fd/4': No such file or directory
      du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory
      du: cannot access `proc/8339/fd/4': No such file or directory
      du: cannot access `proc/8339/fdinfo/4': No such file or directory
      0     proc
      640K  root
      908K  run
      8.6M  sbin
      4.0K  selinux
      4.0K  srv
      0     sys
      148K  tmp
      3.3G  usr
      436M  var
      0     vmlinuz
      0     vmlinuz.old
      86G   total

    If you look at the result returned for /, it shows that /home is consuming 81GB, but on the other hand /home returns only 22GB. I can't figure out what's consuming the HDD. I have not installed anything except virtual machines. Perpetrator found using Disk Usage Analyzer.
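
    Two checks that often explain this kind of du mismatch (a sketch; run from a terminal):

      # Space held by deleted files that some process still keeps open
      # (counted in the filesystem totals but invisible to du over names):
      sudo lsof +L1

      # Per-directory usage of your home including hidden dot-directories,
      # which 'du *' skips (e.g. ~/.cache, ~/.local, ~/.thumbnails):
      sudo du -xh --max-depth=1 /home/$USER | sort -h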

    Read the article

  • Gparted resize of an extended partition fails with error "can't have overlapping partitions".

    - by Marcus
    I just decided to install Ubuntu 12.04 alongside Windows 7 on my Dell laptop. However, I didn't do this manually but instead used the "Install Ubuntu alongside Windows 7" option during the installation. Now the partition that Ubuntu runs in has very little space and I am getting warning messages. I'm trying to use GParted 0.12.1-5 (via a live CD) to give Windows less space and Ubuntu more. I've managed to remove 100GB from the Windows partition, so I now have some unallocated space between Windows and Ubuntu. This is what it looks like inside Ubuntu (not using the live CD, since it won't let me mount a USB to save a screenshot). So first I take sda4 (extended?) and resize it to the left so it takes up all the unallocated space. Then I resize sda5 (ext4) as well so it takes up all the new space. However, when I hit apply, it fails on the first action (resizing sda4) with the error message "can't have overlapping partitions". Any ideas why this happens? I also tried resizing sda4 by just a few MB so that it definitely didn't overlap anything, but I still got the same error message. To clarify, I am using GParted from the live CD; I just took the screenshot from Ubuntu. I couldn't attach the details file containing the error information from GParted because I can't mount a USB drive when I'm running from the live CD. I tried following the guide on the GParted website but it says "Invalid argument" or something like that. If the GParted details are needed, I may need some hints on how to solve the USB issue as well. :)
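
    One thing worth ruling out before retrying (an assumption, not a confirmed diagnosis): live sessions sometimes activate the swap partition that lives inside the extended partition, and a busy logical partition pins the extended boundary, which GParted can report in confusing ways. A sketch:

      cat /proc/swaps          # is something like /dev/sda6 active?
      sudo swapoff -a          # deactivate all swap, then retry the resize in GParted

      # Also print the partition table in sectors to look for real overlaps:
      sudo parted /dev/sda unit s print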

    Read the article

  • unmet dependencies and broken count>0 problem

    - by Simon
    I tried installing fbreader, following all the steps, but ended up with unmet dependencies. I also think a file is referenced in two locations at once, which is killing the install. Any ideas how I can fix it? I've done a lot of research and tried:

      simon@simon-Studio-1558:~$ sudo apt-get -f install
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages were automatically installed and are no longer required:
        dkms patch
      Use 'apt-get autoremove' to remove them.
      The following extra packages will be installed:
        libzlcore0.12
      The following NEW packages will be installed:
        libzlcore0.12
      0 upgraded, 1 newly installed, 0 to remove and 61 not upgraded.
      6 not fully installed or removed.
      Need to get 0 B/270 kB of archives.
      After this operation, 811 kB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      (Reading database ... 179860 files and directories currently installed.)
      Unpacking libzlcore0.12 (from .../libzlcore0.12_0.12.10dfsg-4_i386.deb) ...
      dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack):
       trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1
      No apport report written because MaxReports is reached already
      dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
      Errors were encountered while processing:
       /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Sorry for the formatting, but it basically isn't liking:

      dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack):
       trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1

    Any ideas? Also I don't care about keeping the program, but the error is stopping sudo apt-get remove fbreader from working too.
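
    Two common ways out of this file conflict (a sketch; the package and file names are taken from the transcript above):

      # Option 1: remove the old package that owns the conflicting file, then retry:
      sudo dpkg --remove --force-depends libzlcore
      sudo apt-get -f install

      # Option 2: let dpkg overwrite the single conflicting file:
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb
      sudo apt-get -f install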

    Read the article

  • Failure to troubleshoot a juju charm deployment

    - by Bruno Pereira
    My environments.yaml looks like this:

      environments:
        test:
          type: local
          control-bucket: juju-a14dfae3830142d9ac23c499395c2785999
          admin-secret: 6608267bbd6b447b8c90934167b2a294999
          default-series: oneiric
          juju-origin: distro
          data-dir: /home/bruno/projects/juju

    juju bootstrap runs perfectly:

      2011-11-22 19:19:31,999 INFO Bootstrapping environment 'test' (type: local)...
      2011-11-22 19:19:32,004 INFO Checking for required packages...
      2011-11-22 19:19:33,584 INFO Starting networking...
      2011-11-22 19:19:34,058 INFO Starting zookeeper...
      2011-11-22 19:19:34,283 INFO Starting storage server...
      2011-11-22 19:19:40,051 INFO Initializing zookeeper hierarchy
      2011-11-22 19:19:40,247 INFO Starting machine agent (origin: distro)...
      [sudo] password for bruno:
      2011-11-22 19:23:16,054 INFO Environment bootstrapped
      2011-11-22 19:23:16,079 INFO 'bootstrap' command finished successfully

    Deploying a known-good charm is accepted (I tried it with one that I am trying to create):

      juju deploy --repository=/home/bruno/projects/charms_repo/ local:teamspeak
      2011-11-22 19:28:49,929 INFO Charm deployed as service: 'teamspeak'
      2011-11-22 19:28:49,962 INFO 'deploy' command finished successfully

    After this I can see that juju debug-log shows activity, and I can see the network indicator going on and off and activity on my hard disk. Wait... Looking at juju status I get:

      services:
        teamspeak:
          charm: local:oneiric/teamspeak-1
          relations: {}
          units:
            teamspeak/0:
              machine: 0
              public-address: 192.168.122.226
              relations: {}
              state: start_error

    juju debug-log does not help, and I have no files under /var/log/juju or /var/lib/juju. The last juju debug-log only shows this:

      2011-11-22 19:45:20,790 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['wordpress/0']) new:set(['wordpress/0', 'teamspeak/0'])
      2011-11-22 19:45:20,823 Machine:0: juju.agents.machine DEBUG: Starting service unit: teamspeak/0 ...
      2011-11-22 19:45:21,137 Machine:0: juju.agents.machine DEBUG: Downloading charm local:oneiric/teamspeak-1 to /home/bruno/projects/juju/bruno-test/charms
      2011-11-22 19:45:22,115 Machine:0: juju.agents.machine DEBUG: Starting service unit teamspeak/0
      2011-11-22 19:45:22,133 Machine:0: unit.deploy INFO: Creating container teamspeak-0...
      2011-11-22 19:47:04,586 Machine:0: unit.deploy INFO: Container created for teamspeak/0
      2011-11-22 19:47:04,781 Machine:0: unit.deploy DEBUG: Charm extracted into container
      2011-11-22 19:47:04,801 Machine:0: unit.deploy DEBUG: Starting container...
      2011-11-22 19:47:07,086 Machine:0: unit.deploy INFO: Started container for teamspeak/0
      2011-11-22 19:47:07,107 Machine:0: juju.agents.machine INFO: Started service unit teamspeak/0

    How can I troubleshoot what is happening here?
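
    With the local provider each service unit runs in an LXC container, so the unit's own log is the place to look for the cause of start_error. A sketch (the container name is derived from the output above; exact paths can differ between juju versions):

      # List the containers the local provider created:
      sudo lxc-ls

      # Read the unit agent's log inside the container's root filesystem
      # (this path is an assumption -- search under the container if it differs):
      sudo less /var/lib/lxc/bruno-test-teamspeak-0/rootfs/var/log/juju/unit-teamspeak-0.log

      # Or attach to the container and poke around from inside:
      sudo lxc-console -n bruno-test-teamspeak-0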

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more: "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010; Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper.

    Read the article

  • How can I fix apt-get autoremove wanting to uninstall most of my packages?

    - by Stefano
    I changed my packages in Synaptic from manually installed to automatically installed (they were not manually installed but automatically). Now they are marked for autoremove. I tested it with sudo apt-get autoremove and the result is shown below (a reduced version, because it's almost all packages). I remember I had the same issue last year and solved it via the Ubuntu forums, but the forum is down and I cannot reach the post! Does anyone have any idea how to fix this?

      sudo apt-get autoremove
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages will be REMOVED:
        unity-asset-pool unity-greeter unity-lens-applications unity-lens-files
        unity-lens-music unity-lens-photos unity-lens-video unity-scope-gdrive
        unity-scope-musicstores unity-scope-video-remote unity-services
        unity-tweak-tool unity-webapps-amazoncloudreader unity-webapps-common
        unity-webapps-facebookmessenger unity-webapps-gmail unity-webapps-googledocs
        unity-webapps-googleplus unity-webapps-launchpad unity-webapps-linkedi
        xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-ati
        xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-intel
        xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-modesetting
        xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-openchrome
        xserver-xorg-video-qxl xserver-xorg-video-r128 xserver-xorg-video-radeon
        xserver-xorg-video-s3 xserver-xorg-video-savage xserver-xorg-video-siliconmotion
        xserver-xorg-video-sis xserver-xorg-video-sisusb xserver-xorg-video-tdfx
        xserver-xorg-video-trident xserver-xorg-video-vesa xserver-xorg-video-vmware
        xul-ext-unity xul-ext-webaccounts xul-ext-websites-integration
        y-ppa-manager yad zenity zenity-common zip
      0 upgraded, 0 newly installed, 1440 to remove and 0 not upgraded.
      After this operation, 3,853 MB disk space will be freed.
      Do you want to continue [Y/n]?
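
    If the goal is simply to keep everything that is currently installed, one recovery sketch (apt-mark ships with apt on 12.04-era releases; simulate first with -s to be safe):

      # Re-mark every package currently flagged 'automatically installed' as manual:
      sudo apt-mark manual $(apt-mark showauto)

      # Verify that autoremove no longer wants to strip the system:
      sudo apt-get -s autoremove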

    Read the article

  • Fixing unbootable installation on LVM root from Desktop LiveCD

    - by intuited
    I just did an installation from the 10.10 Desktop LiveCD, making the root volume an LVM LV. Apparently this is not supported; I managed it by taking these steps before starting the GUI installer app:

    - installing the lvm2 package on the running system
    - creating an LVM-type partition on the system hard drive
    - creating a physical volume, a volume group and a root LV using the LVM tools. I also created a second LV for /var; this I don't think is relevant.
    - creating a filesystem (ext4) on each of the two LVs.

    After taking these steps, the GUI installer offered the two LVs as installation targets; I gladly accepted, also putting /boot on a primary partition separate from the LVM partition. Installation seemed to go smoothly, and I've verified that both the root and var volumes do contain acceptable-looking directory structures. However, booting fails; if I understood correctly what happened, I was dropped into a busybox running in the initrd filesystem. Although I haven't worked through the entirety of the grub2 docs yet, it looks like the entry that tries to boot my new system is correct:

      menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          insmod part_msdos
          insmod ext2
          set root='(hd0,msdos3)'
          search --no-floppy --fs-uuid --set $UUID_OF_BOOT_FILESYSTEM
          linux /vmlinuz-2.6.35-22-generic root=/dev/mapper/$LVM_VOLUME_GROUP-root ro quiet splash
          initrd /initrd.img-2.6.35-22-generic
      }

    Note that $VARS are replaced in the actual grub.cfg with their corresponding values. I rebooted back into the livecd and have unpacked the initrd image into a temp directory. It looks like the initrd image lacks LVM functionality. For example, if I'm reading /usr/share/initramfs-tools/hooks/lvm2 (installed with lvm2 on the livecd-booted system, not present on the installed one) correctly, an lvm executable should be situated in /sbin; that is not the case. What's the best way to remedy this situation? I realize that it would be easier to just use the alternate install CD, which apparently supports LVM, but I don't want to wait for it to download and then have to reinstall.
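
    The usual repair for this from the live CD is to chroot into the installed system, install lvm2 there, and rebuild the initramfs so it gains the LVM tools. A sketch (the volume group name vg0 is a placeholder; substitute your own):

      sudo apt-get install lvm2                    # LVM tools for the live session
      sudo vgchange -ay                            # activate the volume group
      sudo mount /dev/mapper/vg0-root /mnt
      sudo mount /dev/sda1 /mnt/boot               # the separate /boot partition
      for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      sudo chroot /mnt
      # ...now inside the chroot:
      apt-get install lvm2
      update-initramfs -u -k 2.6.35-22-generic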

    Read the article

  • Podcast Show Notes: Fear and Loathing in SOA

    - by Bob Rhubart
    The latest program (#47) in the Arch2Arch podcast series is the first of three segments from another virtual mini-meet-up with architects from the OTN community, recorded on March 9, 2010. In keeping with the meet-up format, I sent an invitation to my list of past participants in Arch2Arch panel discussions. The following people showed up to take seats at the virtual table and drive the conversation: Hajo Normann is a SOA architect and consultant at EDS in Frankfurt (Blog | LinkedIn | Oracle Mix | Oracle ACE Profile | Books). Jeff Davies is a Senior Product Manager at Oracle, and is the primary author of The Definitive Guide to SOA: Oracle Service Bus (Homepage | Blog | LinkedIn | Oracle Mix). Pat Shepherd is an enterprise architect with the Oracle Enterprise Solutions Group (Oracle Mix | LinkedIn | Blog). This first segment focuses on a discussion of the persistent fear of SOA the panelists have observed among many developers and architects. Listen to Part 1. The discussion continues in next week’s segment with a look at the misinformation and misunderstanding behind the fear of SOA, and a discussion of possible solutions. So stay tuned.

    Read the article

  • Help with a simple incremental backup script

    - by Evan
    I'd like to run the following incomplete script weekly as a cron job to back up my home directory to an external drive mounted at /mnt/backups:

      #!/bin/bash
      #
      TIMEDATE=$(date +%b-%d-%Y-%k:%M)
      LASTBACKUP=pathToDirWithLastBackup
      rsync -avr --numeric-ids --link-dest=$LASTBACKUP /home/myfiles /mnt/backups/myfiles$TIMEDATE

    My first question is: how do I correctly set LASTBACKUP to the directory in /mnt/backups most recently created? Secondly, I'm under the impression that using --link-dest means that files in previous backups will not be copied in later backups if they still exist, but will instead link back to the originally copied files. However, I don't want to retain old files forever. What would be the best way to remove all the backups before a certain date without losing files that may be linked in those backups by current backups? Basically I'm looking to merge all the files before a certain date, if that makes more sense than the way I initially framed the question :). Can --link-dest create hard links, and if so, would just deleting previous directories not actually remove a linked file? Finally I'd like to add a line to my script that compresses each newly created backup folder (/mnt/backups/myfiles$TIMEDATE). Based on reading this question, I was wondering if I could just use gzip --rsyncable /backups/myfiles$TIMEDATE after I run rsync, so that sequential rsync --link-dest executions would find already copied and compressed files? I know that's a lot, so many thanks in advance for your help!!
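
    A minimal completed sketch of the script, assuming backups live directly under /mnt/backups and a "latest" symlink (an added convention, not part of the original) tracks the newest one:

      #!/bin/bash
      TIMEDATE=$(date +%b-%d-%Y-%k:%M)
      DEST="/mnt/backups/myfiles$TIMEDATE"
      LATEST="/mnt/backups/latest"

      if [ -d "$LATEST" ]; then
          # Hard-link files that are unchanged since the previous backup.
          rsync -avr --numeric-ids --link-dest="$LATEST" /home/myfiles "$DEST"
      else
          rsync -avr --numeric-ids /home/myfiles "$DEST"
      fi

      ln -snf "$DEST" "$LATEST"   # point 'latest' at the backup just made

    On the retention question: --link-dest does create hard links, so deleting an old backup directory only removes those directory entries; any file still linked from a newer backup keeps its data until the last link goes away.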

    Read the article

  • How can I transfer files to a Kindle Fire with a Micro-USB cable?

    - by Jeff
    I'm running Ubuntu 11.10, and when I connect my Kindle Fire to my computer via micro-USB, it is not recognized automatically. Other USB devices, such as my iPod and digital camera, are recognized just fine. It does not appear to be a USB power issue, since the Kindle Fire wakes up from sleeping when it is plugged in. I never get the message on the Kindle telling me it is ready to accept files from the computer, though. Here are the last 15 lines of dmesg after plugging the Kindle in:

      jeff@prime:~$ dmesg | tail -n 15
      [45918.269671] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
      [45929.072149] wlan0: no IPv6 routers present
      [46743.224217] usb 1-1: new high speed USB device number 5 using ehci_hcd
      [46743.364623] scsi8 : usb-storage 1-1:1.0
      [46744.366102] scsi 8:0:0:0: Direct-Access     Amazon   Kindle           0001 PQ: 0 ANSI: 2
      [46744.366356] scsi: killing requests for dead queue
      [46744.372494] scsi: killing requests for dead queue
      [46744.384510] scsi: killing requests for dead queue
      [46744.392348] scsi: killing requests for dead queue
      [46744.392731] scsi: killing requests for dead queue
      [46744.396853] scsi: killing requests for dead queue
      [46744.397214] scsi: killing requests for dead queue
      [46744.400795] scsi: killing requests for dead queue
      [46744.401589] sd 8:0:0:0: Attached scsi generic sg2 type 0
      [46744.407520] sd 8:0:0:0: [sdb] Attached SCSI removable disk

    And here are my mounted filesystems:

      jeff@prime:~$ df
      Filesystem           1K-blocks      Used Available Use% Mounted on
      /dev/sda1            298594984 174663712 108763480  62% /
      udev                   1407684         4   1407680   1% /dev
      tmpfs                   566924       896    566028   1% /run
      none                      5120         0      5120   0% /run/lock
      none                   1417308       300   1417008   1% /run/shm
      /home/jeff/.Private  298594984 174663712 108763480  62% /home/jeff

    I should note that, since I got Dropbox working on my Kindle, the USB is no longer strictly necessary, but as a matter of principle I'd love to get it working.
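
    The dmesg output shows the Kindle attached as /dev/sdb but nothing mounted from it. A couple of things worth trying (a sketch; the device node is taken from the dmesg above, and the mass-storage behaviour is an assumption about first-generation Fires):

      # Did the kernel find a partition table or a bare volume?
      sudo fdisk -l /dev/sdb

      # First-generation Kindle Fires expose a plain FAT volume, often with
      # no partition table, so mounting the whole device sometimes works:
      sudo mkdir -p /mnt/kindle
      sudo mount -t vfat /dev/sdb /mnt/kindle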

    Read the article

  • Portal Content Personalization

    - by john.brunswick
    To make the most effective use of a portal and content management platform, personalization is a critical component of delivering the most value to end users. Regardless of what type of constituents you may be serving, content relevance is critical to support business goals like self-service, communication within a geographically distributed organization, lead generation and customer loyalty effectively. This especially holds true when serving external parties, as they generally have a lower threshold for digging through your site to locate a particular item of interest and are apt to leave or dial a helpdesk if their efforts cannot locate the relevant information. Optimal delivery of content can be achieved through a variety of methods, but it is generally a blend of security and filtering via meta data that can drive the most return with the least amount of upfront effort and ongoing upkeep. In a portal environment various platform components have their strong suits and by combining the capabilities of enterprise portal and content platforms much of the groundwork for personalization can be achieved in a configuration-based manner. In our discussion we will cover terminology and concepts, example scenarios and technical implementation strategies to help showcase how personalization of content can be achieved within a portal from a technical and strategic standpoint. Read on to better understand the chart below and the components at our disposal to personalize content delivery. Read on... click here to view a full size chart

    Read the article

  • How to stop Cairo Dock minimizing Conky on Show Desktop?

    - by César
    Every time I use the Cairo Dock "show desktop" add-on, Conky minimizes. I've read about the own_window_type override option in .conkyrc and it seems to work for some people, but it doesn't work for me: Conky won't show up at all if I use this option (it is currently set to own_window_type normal). Any suggestions? My .conkyrc:

      # Conky settings #
      background no
      update_interval 1
      cpu_avg_samples 2
      net_avg_samples 2
      override_utf8_locale yes
      double_buffer yes
      no_buffers yes
      text_buffer_size 2048
      #imlib_cache_size 0
      temperature_unit fahrenheit

      # Window specifications #
      own_window yes
      own_window_type normal
      own_window_transparent yes
      own_window_hints undecorate,sticky,skip_taskbar,skip_pager,below
      border_inner_margin 0
      border_outer_margin 0
      minimum_size 200 250
      maximum_width 200
      alignment tr
      gap_x 35
      gap_y 55

      # Graphics settings #
      draw_shades no
      draw_outline no
      draw_borders no
      draw_graph_borders no

      # Text settings #
      use_xft yes
      override_utf8_locale yes
      xftfont Neuropolitical:size=8
      xftalpha 0.8
      uppercase no
      temperature_unit celsius
      default_color FFFFFF

      # Lua Load #
      lua_load ~/.lua/scripts/clock_rings.lua
      lua_draw_hook_pre clock_rings

      TEXT
      ${font Neuropolitical:size=42}${time %e}
      ${goto 100}${font Neuropolitical:size=18}${color FF3300}${voffset -75}${time %b}
      ${font Neuropolitical:size=10}${color FF3300}${voffset 15}${time %A}${color FF3300}${hr}
      ${goto 100}${font Neuropolitical:size=15}${color FFFFFF}${voffset -35}${time %Y}
      ${font Neuropolitical:size=30}${voffset 40}${alignc}${time %H}:${time %M}
      ${goto 175}${voffset -30}${font Neuropolitical:size=10}${time %S}
      ${voffset 10}${font Neuropolitical:size=11}${color FF3300}${alignr}HOME${font}
      ${font Neuropolitical:size=13}${color FFFFFF}${alignr}temp: ${weather http://weather.noaa.gov/pub/data/observations/metar/stations/ LQBK temperature temperature 30} °C${font}
      ${hr}
      ${image ~/.conky/logo.png -p 165,10 -s 35x35}
      ${color FFFFFF}${font Neuropolitical:size=8}Uptime: ${uptime_short}
      ${color FFFFFF}${font Neuropolitical:size=8}Processes: ${processes}
      ${color FFFFFF}${font Neuropolitical:size=8}Running: ${running_processes}
      ${color FF3300}${goto 125}${voffset 27}CPU
      ${color FFFFFF}${goto 125}${cpu cpu0}%
      ${color FF3300}${goto 125}${voffset 55}RAM
      ${color FFFFFF}${goto 125}${memperc}%
      ${color FF3300}${goto 125}${voffset 56}Swap
      ${color FFFFFF}${goto 125}${swapperc}%
      ${color FF3300}${goto 125}${voffset 57}Disk
      ${color FFFFFF}${goto 125}${fs_used_perc /}%
      ${color FF3300}${goto 130}${voffset 55}Net
      ${color FFFFFF}${goto 130}${downspeed eth0}
      ${color FFFFFF}${goto 130}${upspeed eth0}
      ${color FF3300}${font Neuropolitical:size=8}${alignr}${nodename}
      ${color FF3300}${font Neuropolitical:size=8}${alignr}${pre_exec cat /etc/issue.net} $machine
      ${color FF3300}${font Neuropolitical:size=8}${alignr}Kernel: ${kernel}
      ${hr}
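
    One variant that helps on some setups (an assumption to test, not a confirmed fix): own_window_type desktop glues Conky to the desktop layer, so "show desktop" leaves it visible. A sketch that backs up the config and flips the setting:

      cp ~/.conkyrc ~/.conkyrc.bak
      sed -i 's/^own_window_type normal/own_window_type desktop/' ~/.conkyrc
      killall conky; conky &    # restart Conky to pick up the change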

    Read the article

  • Taskbar Meters Turn Your Taskbar into a System Resource Monitor

    - by Jason Fitzpatrick
    If you’re looking for some simple hardware monitoring tools that don’t clutter up your screen real estate but are right in front of you when you need them, Taskbar Meters sit unobtrusively right on the Windows taskbar. Open source, lightweight, and portable, Taskbar Meters is actually a set of three applications: one for monitoring memory use, one for CPU use, and one for disk activity. Using the application is as simple as running the specific app for the monitoring you want (we have all three running in the screenshot here) and adjusting the sliders to set the update frequency and the percent utilization at which the meters turn from green, to yellow, to red. If you’re testing software loads and benchmarking, Taskbar Meters doesn’t offer the kind of fine-tooth-comb view into system performance that you’ll need, but for casual “What’s going on with my machine?” monitoring, it’s unobtrusive and effective. Taskbar Meters is an open source set of portable applications, Windows 7 only. Taskbar Meters [Codeplex]

    Read the article

  • Java crashes on lubuntu but not Ubuntu

    - by Echogene
    I have lubuntu and Ubuntu partitions on my drive, and I've been having an interesting time with the new lubuntu partition. I've encountered strange things with the game Minecraft, Java and graphics drivers on the lubuntu partition. Firstly, I'll say that Minecraft runs fine at about 60fps on the Ubuntu partition with the latest drivers. (This is lower than it should be, as it's a pretty decent graphics card [Radeon HD 5700].) When I first started lubuntu, I tried to see if I could get Minecraft running on Java. Without proprietary drivers, Java crashed when loading the main game graphics on both Sun Java and OpenJDK. Java also crashed on both after the restart required to install the proprietary drivers. However, after disabling the proprietary drivers (with the 'remove' button in jockey-gtk) in the session following that restart, Minecraft ran very well at ~120fps. This didn't continue after another restart, when it ran at 9fps. After subsequently failing to get it above 15fps on lubuntu, I tried reinstalling lubuntu and installed exactly the same driver (the latest one, not the one appearing in jockey) and Java versions as on Ubuntu. That is, Ubuntu and lubuntu now have the same graphics driver and Java version. Minecraft still crashes in the same way on lubuntu but works fine on Ubuntu. I would appreciate any explanation for any of these events. What differences between lubuntu and Ubuntu could cause this? Edit: After installing the 32bit driver version on lubuntu (seeing as lubuntu is 32bit), I have Java "working" for Minecraft. However, it is at <15fps again and it can't log in to servers as it takes too long.
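
    A quick way to confirm whether the two installs really run the same stack (a sketch; run these on each partition and compare the output):

      glxinfo | grep -i "opengl renderer"   # actual GL renderer in use (glxinfo is in mesa-utils)
      lspci -k | grep -A3 -i vga            # kernel driver bound to the GPU
      java -version                         # the Java actually on the PATH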

    Read the article

< Previous Page | 524 525 526 527 528 529 530 531 532 533 534 535  | Next Page >