Search Results

Search found 3281 results on 132 pages for 'repo man'.


  • how to rollback/undo yum update on fedora after messing the fedora versions

    - by misteryes
    I want to install texlive on my fedora 16 laptop with the following procedure: # yum remove tex-* texlive-* # cat > /etc/yum.repos.d/texlive.repo <<EOF [texlive] name=texlive baseurl=http://jnovy.fedorapeople.org/texlive/2012/packages.f17/ enabled=1 gpgcheck=0 EOF # yum update; # yum install texlive after yum update, I notice that my laptop is fedora 16, while I used 2012/packages.fc17/ so I modify /etc/yum.repos.d/texlive.repo to use 2011/packages.fc16 and do yum update again however, there are many errors [root@kitty esolve]# yum update Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found : http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml Trying other mirror. Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package dvipng.x86_64 0:1.14-1.fc15 will be obsoleted ---> Package kpathsea.x86_64 0:2007-66.fc16 will be obsoleted --> Processing Dependency: libkpathsea.so.4()(64bit) for package: evince-dvi-3.2.1-2.fc16.x86_64 ---> Package mkvtoolnix.x86_64 0:5.8.0-1 will be updated ---> Package mkvtoolnix.x86_64 0:6.3.0-1 will be an update ---> Package nautilus-dropbox.x86_64 0:1.4.0-1.fc10 will be updated ---> Package nautilus-dropbox.x86_64 0:1.6.0-1.fc10 will be an update ---> Package texlive-dvipng-bin.x86_64 2:svn26509.0-19.20130317_r29408.fc17 will be obsoleting --> Processing Dependency: texlive-kpathsea-lib = 2:2012-19.20130317_r29408.fc17 for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 --> Processing Dependency: texlive-base for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 --> Processing Dependency: tex-dvipng for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 --> Processing Dependency: libpng15.so.15()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 --> Processing Dependency: libkpathsea.so.6()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 ---> Package texlive-kpathsea.noarch 2:svn28792.0-19.fc17 will be obsoleting --> Processing Dependency: texlive-kpathsea-bin for package: 2:texlive-kpathsea-svn28792.0-19.fc17.noarch --> Running transaction check ---> Package kpathsea.x86_64 0:2007-66.fc16 will be obsoleted --> Processing Dependency: libkpathsea.so.4()(64bit) for package: evince-dvi-3.2.1-2.fc16.x86_64 ---> Package texlive-base.noarch 2:2012-19.20130317_r29408.fc17 will be installed ---> Package texlive-dvipng.noarch 2:svn26689.1.14-19.fc17 will be installed ---> Package texlive-dvipng-bin.x86_64 2:svn26509.0-19.20130317_r29408.fc17 will be obsoleting --> Processing Dependency: libpng15.so.15()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 ---> Package texlive-kpathsea-bin.x86_64 2:svn27347.0-19.20130317_r29408.fc17 will be installed ---> Package texlive-kpathsea-lib.x86_64 2:2012-19.20130317_r29408.fc17 will be installed --> Finished Dependency Resolution Error: Package: evince-dvi-3.2.1-2.fc16.x86_64 (@fedora) Requires: libkpathsea.so.4()(64bit) Removing: kpathsea-2007-66.fc16.x86_64 (@so-updates) libkpathsea.so.4()(64bit) Obsoleted By: 2:texlive-kpathsea-svn28792.0-19.fc17.noarch (texlive) Not found Error: Package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 (texlive) Requires: libpng15.so.15()(64bit) You could try using --skip-broken to 
work around the problem You could try running: rpm -Va --nofiles --nodigest and when I do yum install texlive, it simply tries to install the f17 version, which failed. what Can I do to install f16 version? how can I undo yum update with 2012/packages.f17/ I tried yum history, and for today's history, I only have Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit ID | Login user | Date and time | Action(s) | Altered ------------------------------------------------------------------------------- 124 | esolve ... <esolve> | 2013-09-12 18:35 | Erase | 24 123 | root <root> | 2013-08-23 11:08 | Update | 1 122 | root <root> | 2013-08-21 14:13 | Update | 1 < 121 | esolve ... <esolve> | 2013-05-31 15:36 | Install | 1 > 120 | root <root> | 2013-05-29 15:13 | Install | 1 < 119 | root <root> | 2013-04-18 13:13 | Update | 1 >< which seems not related to yum update the history results: 1003 yum update 1004 vim 1005 vim /etc/yum.repos.d/texlive.repo 1006 yum update 1007 yum install texlive 1008 vim /etc/yum.repos.d/texlive.repo 1009 clear 1010 yum history 1011 yum history list 1012 vim 1013 vim /etc/yum.repos.d/texlive.repo 1014 yum history list 1015 history also I tried yum history undo 124 but it failed! [root@kitty esolve]# yum history undo 124 Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found : http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml Trying other mirror. Undoing transaction 124, from Thu Sep 12 18:35:31 2013 Erase R-2.14.1-1.fc16.x86_64 ? Erase R-core-2.14.1-1.fc16.x86_64 ? Erase R-devel-2.14.1-1.fc16.x86_64 ? Erase a2ps-4.14-12.fc15.x86_64 ? Erase docbook-utils-pdf-0.6.14-29.fc16.noarch ? Erase html2ps-1.0-0.7.b7.fc15.noarch ? Erase jadetex-3.13-10.fc15.noarch ? Erase kile-2.1.1-1.fc16.x86_64 ? Erase linuxdoc-tools-0.9.66-9.fc15.x86_64 ? Erase tetex-dvipost-1.1-12.fc15.x86_64 ? Erase tex-cm-lgc-0.5-18.fc15.noarch ? Erase tex-preview-11.86-6.fc16.noarch ? Erase texinfo-tex-4.13a-15.fc15.x86_64 ? Erase texlive-2007-66.fc16.x86_64 ? Erase texlive-dvips-2007-66.fc16.x86_64 ? Erase texlive-latex-2007-66.fc16.x86_64 ? Erase texlive-texmf-2007-40.fc16.noarch ? Erase texlive-texmf-dvips-2007-40.fc16.noarch ? Erase texlive-texmf-fonts-2007-40.fc16.noarch ? Erase texlive-texmf-latex-2007-40.fc16.noarch ? Erase texlive-utils-2007-66.fc16.x86_64 ? Erase texmaker-1:3.2.2-1.fc16.x86_64 ? Erase texmf-RR-Inria-4.11-inria.0.noarch ? Erase xdvik-22.84.14-9.fc15.x86_64 ? Error: No package(s) available to install
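
    A hedged sketch of one way to back out of the f17 update (the transaction ID is a placeholder and the package globs are examples, not taken from the post; results depend on what the Fedora 16 repositories still carry):

        # 1. Disable the mismatched texlive repo so it stops driving dependency resolution
        sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/texlive.repo
        yum clean all

        # 2. Find the transaction that pulled in the fc17 packages and try to undo just that one
        yum history list texlive\*
        yum history undo <transaction-id>

        # 3. If individual packages were upgraded to fc17 builds, ask for the older versions back
        yum downgrade texlive\* kpathsea dvipng

        # 4. Last resort: force everything back in line with the enabled (f16) repositories
        yum distro-sync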

    Read the article

  • Enable remote VNC from the commandline?

    - by Stefan Lasiewski
    I have one computer running Ubuntu 10.04, which is running Vino, the default VNC server. I have a second Windows box which is running a VNC client but does not have any X11 capabilities. I am ssh'd into the Ubuntu host from the Windows host, but I forgot to enable VNC access on the Ubuntu host. On the Ubuntu host, is there a way for me to enable VNC connections from the commandline? Update: As @koanhead says below, there is no man page for vino (e.g. man -k vino and info vino return nothing), and vino --help doesn't show any help.
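
    If the GConf keys on 10.04 are still the stock Vino ones, a hedged sketch of enabling it over ssh (the key names and the vino-server path are assumptions about the default GNOME 2 setup, and it assumes a user is logged in at the console so a desktop session exists):

        # Flip the remote-desktop switches in the logged-in user's GConf database
        gconftool-2 --set --type bool /desktop/gnome/remote_access/enabled true
        gconftool-2 --set --type bool /desktop/gnome/remote_access/prompt_enabled false

        # Vino normally autostarts with the desktop session; if it is not already running:
        DISPLAY=:0 /usr/lib/vino/vino-server &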

    Read the article

  • Tears of Steel [Short Movie]

    - by Asian Angel
    In the future a young couple reach a parting of the ways because the young man can not handle the fact that she has a robotic arm. The bitterness of the break-up and bad treatment from her fellow humans lead to a dark future 40 years later where robots are relentlessly hunting and killing humans. Can the man who started her down this dark path redeem himself and save her or will it all end in ruin? TEARS OF STEEL – DOWNLOAD & WATCH [Original Blog Post & Download Links] Tears of Steel – Blender Foundation’s fourth short Open Movie [via I Love Ubuntu] HTG Explains: What is the Windows Page File and Should You Disable It? How To Get a Better Wireless Signal and Reduce Wireless Network Interference How To Troubleshoot Internet Connection Problems

    Read the article

  • What units does the ntp drift file use?

    - by arielf
    When the ntpd daemon is running, the file /var/lib/ntp/ntp.drift gets updated periodically. Example: 17:20 hostname 118 ~> ls -l /var/lib/ntp/ntp.drift -rw-r--r-- 1 ntp ntp 7 May 20 16:46 /var/lib/ntp/ntp.drift # So it looks like it was last updated ~34 minutes ago The file contains a single number. For example, looking at 4 virtual hosts, I find these values, respectively: -22.086 -10.214 -13.669 6.045 I assume these are seconds per day (?), but I'm not sure. man ntpd mentions a different drift file, /etc/ntp.drift, which doesn't seem to exist. The man page doesn't explain what units are used for the drift. Questions: Is /etc/ntp.drift actually /var/lib/ntp/ntp.drift on Ubuntu? What units is the drift expressed in? Thanks!
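
    For what it's worth, ntpd's drift value is a frequency correction in parts per million (PPM) of the clock rate, not seconds, and on Ubuntu the packaged /etc/ntp.conf points the driftfile directive at /var/lib/ntp/ntp.drift, which is why /etc/ntp.drift never appears. A small conversion sketch (the sample figure is taken from the values above):

        # Convert the PPM drift into milliseconds of clock error per day (86400 s/day)
        awk '{ printf "%s ppm ~= %.1f ms/day\n", $1, $1 * 86400 / 1000 }' /var/lib/ntp/ntp.drift
        # e.g. -22.086 ppm ~= -1908.2 ms/day, i.e. roughly -1.9 s/day before correction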

    Read the article

  • Oracle ODP.NET and Windows PowerShell

    - by cjandaus
    Well known in the Microsoft world, met with little more than a shrug in the Oracle world: the so-called Scripting Guys. As the name suggests, their Hey, Scripting Guy! Blog is about scripting, and therefore, of course, about Windows PowerShell. Yes, the days of the DOS command window and batch files are over. PowerShell is a powerful scripting environment on Windows that even Unix/Linux administrators should find appealing. That it can also be used very nicely to access Oracle databases is something we demonstrated years ago in an Oracle workshop series. Back then Klaus Rohe from Microsoft joined me, and together we also gave a talk at the DOAG conference. Our shared goal, then as now, was to convince Oracle users of the excellent integration between Oracle, Windows and .NET. What could be more convincing than having both vendors confirm this together? The perennial doubters in particular welcomed it. Since then, PowerShell had dropped off my radar, and Oracle users had stopped raising the topic as well. Perhaps because it is too new or too unknown? Rather unlikely ... More likely it is simply assumed that PowerShell is only really usable with Microsoft products, or people are told that only integration with Microsoft's own SQL Server database is possible. And that, of course, is not true - as usual (I am thinking, among other things, of Microsoft Active Directory - but more on that another time). All the more pleased I am to read a brand-new blog post on exactly this topic, which Alex Keh (product manager for Windows and .NET at Oracle headquarters in San Francisco) pointed me to. What makes it even better: the post comes from the Microsoft world, and so confirms between the lines that the Oracle database and our .NET integration via the Oracle Data Provider for .NET (ODP.NET) play an important role there as well. In that spirit: two thumbs up for the Scripting Guys! The post is called Use Oracle ODP.NET and PowerShell to Simplify Data Access, and despite a few minor slips it is well worth reading as an introduction to the topic. Let me know where you stand on this integration, whether PowerShell is or could become important for you in practice, and whether there are features you miss that Oracle should implement in the future. Thank you!

    Read the article

  • What are the drawbacks of Python?

    - by Rook
    Python seems all the rage these days, and not undeservedly - for it is truly a language one almost enjoys being handed a new problem to solve with. But, as a wise man once said (calling him a wise man only because I've no idea who actually said it; not sure whether he was that wise at all), to really know a language you must know not only its syntax, design and advantages, but also its drawbacks. No language is perfect, some are just better than others. So, what would be, in your opinion, the objective drawbacks of Python? Note: I'm not asking for a language comparison here (i.e. C# is better than Python because ... yadda yadda yadda) - more of an objective (to some level) opinion on which language features are badly designed and which, if any, you find missing. You may use another language as a comparison, but only to illustrate a point that would be hard to elaborate on otherwise (i.e. for ease of understanding).

    Read the article

  • The Information Driven Value Chain - Part 1

    - by Paul Homchick
    One hundred years ago, there were places on Earth that no man had ever seen.  Today, a man standing in one of those places can instantaneously communicate with someone who may be strolling down the street on his way to lunch half way around the globe.  Our world is shrinking and becoming virtual. It is a world of incredible bounty and speed where we can get a product delivered to us anywhere on earth within a day or two. However, this world is also one of challenge where volatility, uncertainty, risk and chaos are our daily companions. To prosper amid the realities of this new world, the enterprise needs a business model. Globalization and instant communications demand greater operational flexibility than ever before. Extended supply chains have elevated the management of risk to a central concern, and regulatory demands from multiple governments place an increasing burden of compliance on companies. Finally, the speed of today's business requires continuous innovation to keep from falling behind the global competition.

    Read the article

  • What are the default mount settings for mount / fstab?

    - by John Craick
    What are the default mounting options for a non root partition ? The man entry for mount says ... defaults - use default options: rw, suid, dev, exec, auto, nouser, and async. ... so that might be what we expect to see. But, unless I'm missing something, that's not what happens. I have an ext3 partition labelled "NewHome20G" which is seen as /dev/sdc6 by the system. This we can see from ... root@john-pc1204:~# blkid | grep NewHome20G /dev/sdc6: LABEL="NewHome20G" UUID="d024bad5-906c-46c0-b7d4-812daf2c9628" TYPE="ext3" I have an entry in fstab as follows ... root@john-pc1204:~# cat /etc/fstab | grep NewHome LABEL=NewHome20G /media/NewHome20G ext3 rw,nosuid,nodev,exec,users 0 2 Note the option settings that are specified in that fstab line. Now I look at how the partition is actually mounted after boot up ... root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G] ... so, when the filesystem gets mounted the exec & users options I specified seem to have been ignored. Just to be sure, I unmount sdc6, remount it and look at the mount options again ... root@john-pc1204:~# umount /dev/sdc6 root@john-pc1204:~# mount /dev/sdc6 root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G] .... same result Now I unmount the partition again, remount it specifying the exec option and look at the result ... root@john-pc1204:~# umount /dev/sdc6 root@john-pc1204:~# mount /dev/sdc6 -o exec root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,nosuid,nodev) [NewHome20G] ... and here the exec option has finally taken effect and the noexec setting has vanished. Just for interest, I re-mount the partition with the defaults option root@john-pc1204:~# umount /dev/sdc6 root@john-pc1204:~# mount /dev/sdc6 -o defaults root@john-pc1204:~# mount -l | grep sdc6 /dev/sdc6 on /media/NewHome20G type ext3 (rw,noexec,nosuid,nodev) [NewHome20G] The noexec is back, so it looks very like rw,noexec,nosuid,nodev are the default options which is NOT what man says. Why does this matter ? I have a folder full of useful scripts stored on a data disk. Because that disk is mounted noexec those scripts won't run, even though they have all been set with chmod 777. I can work round this in several ways but it's disappointing that the man entry seems to be wrong. Have I missed something obvious here or have the default options in Ubuntu changed from what they were a few versions ago ?
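
    One hedged reading of this, based on the mount(8) description rather than anything Ubuntu-specific: the users option itself implies noexec, nosuid and nodev, and later options override earlier ones, so exec only survives if it comes after users. A sketch of a reordered fstab line and a quick re-test:

        # fstab: put exec AFTER users so it overrides the implied noexec
        # LABEL=NewHome20G  /media/NewHome20G  ext3  rw,nosuid,nodev,users,exec  0  2

        umount /dev/sdc6
        mount /dev/sdc6
        mount -l | grep sdc6     # should now show the mount without noexec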

    Read the article

  • Liverpool: Transfer Predictions

    - by BizTalk Visionary
    Some simple predictions based on one fact: Rafa Benitez stays at Liverpool 1. Torres leaves for 60 million – destination Man City 2. Gerrard leaves for 30 million – destination Real Madrid – Mourinho gets his man 3. Mascherano leaves for 25 million – destination Spain 4. Riera leaves for who knows where – 10 million 5. Aquilani leaves for who knows where – 12 million 6. Yanks pay off 100 million of debt 7. Yanks give Rafa 37 million to spend – Rafa buys another load of dross from Spain, Italy and elsewhere! That's it!

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract.  SRUs primarily deliver bug fixes.  They may also deliver low risk feature enhancements. Solaris Update are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized.  They also contain a significant number of new bug fixes, for issues found internally in Oracle and complex customer bug fixes which  require significant "soak" time to ensure their efficacy prior to release. Changes to SRU and Update Naming Conventions We're changing the naming convention of Update releases from a date based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1, for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with these conventions. No Blackout Periods on Bug Fix Releases The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries.  This made for a very simple update message - that which releases later is always a superset of that which was delivered previously. However, it had a downside.  Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update.  That wasn't good for customers awaiting critical bug fixes. We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the the Update release and all preceding SRUs*.  That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows: Updates: 11.0                                                11.1                               11.2     etc.                  \                                                         \                                    \ SRUs:       11.0.1, 11.0.2,...,11.0.12, 11.0.13, 11.1.1, 11.1.2,...,11.1.x, 11.2.1, etc. For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e. 
SRU1 after Solaris 11.1) or later rather than to the Solaris 11.1 release itself.  This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533 For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort. *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based.  Their dependencies will, however, effectively pull in the Update content.  Customers maintaining a local Repo (e.g. behind their firewall), need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved.  Both will be available from the standard Support Repo and from MOS.  This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.
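
    A hedged sketch of checking and applying this from a client system (assumes the support repository is already configured as a publisher; exact FMRI versions for a given SRU come from that SRU's README):

        pkg publisher                         # confirm the support repo is configured
        pkg info entire                       # the 'entire' incorporation shows the current Update/SRU level
        pkg update --accept 'entire@latest'   # move to the newest SRU visible in the repo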

    Read the article

  • How to get working cups command line tools on Server 14.04

    - by Nick
    It looks like some of the commands like lpr and lprm have broken versions that don't work with CUPS. These commands worked properly on 10.04. The CUPS lpr has an -o option, but no lpr is installed when cups is installed, and the lpr installed with apt-get install lpr does not have the -o option and does not appear to be the CUPS version of lpr. man lpr shows BSD General Commands Manual at the top, where man lpr on the Ubuntu 10.04 server said Apple, Inc. in the same spot, which leads me to believe the "wrong" lpr is in the "lpr" package, or package names got moved around. There is also an lprng package, but trying to install it wants to remove cups and cups-client. lprm also returns lprm: PrinterName: unknown printer when PrinterName is in fact a valid printer installed with cups and does appear in lpstat -t. How do I get the proper CUPS versions of lpr working on Server 14.04?
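
    On Debian/Ubuntu the CUPS-flavoured BSD commands normally live in the cups-bsd package rather than in lpr, so a hedged sketch (package names assume the stock 14.04 archive; installing cups-bsd should replace the plain BSD lpr package):

        apt-get install cups-bsd cups-client
        lpr -P PrinterName -o media=A4 -o sides=two-sided-long-edge file.pdf
        lpstat -t                       # lists the queue names usable with -P
        man lpr                         # should now show the CUPS lpr(1) page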

    Read the article

  • Some Oracle VM 3 updates

    - by wcoekaer
    Today we did another patch set update for Oracle VM 3 (3.0.3-build 227). This can be downloaded from My Oracle Support as patch ID 14736185. There are quite a few updates in here and I highly recommend any Oracle VM 3 customer or user to install this update. This patch can be installed on top of Oracle VM 3.0 versions 3.0.2 and 3.0.3. The patch is cumulative for 3.0.3. So if you already installed patch update 1 (3.0.3-150) then this will just be incremental on top of that and brings you to 3.0.3-build 227. There is a readme file which contains the patchlist in the patch info. The following patches are released on ULN for Oracle VM server 3.0 : initscripts-8.45.30-2.100.18.el5.x86_64 The inittab file and the /etc/init.d scripts. kernel-ovs-2.6.32.21-45.6.x86_64 The Linux kernel kernel-ovs-firmware-2.6.32.21-45.6.x86_64 Firmware files used by the Linux kernel osc-oracle-ocfs2-0.1.0-35.el5.noarch Oracle Storage Connect ocfs2 Plugin osc-plugin-manager-1.2.8-9.el5.3.noarch Oracle Storage Connect Plugin Infrastructure osc-plugin-manager-devel-1.2.8-9.el5.3.noarch Oracle Storage Connect Plugin Development ovs-agent-3.0.3-41.6.x86_64 Agent for Oracle VM xen-4.0.0-81.el5.1.x86_64 Xen is a virtual machine monitor xen-devel-4.0.0-81.el5.1.x86_64 Development libraries for Xen tools xen-tools-4.0.0-81.el5.1.x86_64 Various tooling for the manipulation of Xen instances Errata emails will be sent in the next few days with details on the above updates. Or you will find them here. I also did an update of my Oracle VM utilities to 0.4.0. They are also available from My Oracle Support, patch ID 14736239. These utils can be unzipped and installed on the server running Oracle VM Manager. Typically in /u01/app/oracle/ovm-manager-3/ovm_utils. There is a set of man pages in /u01/app/oracle/ovm-manager-3/ovm_utils/man/man8. There now are 6 commands : ovm_vmcontrol : VM level operations ovm_servercontrol : server level operations ovm_vmdisks : virtual disk/physical location mapping for VM disks ovm_vmmessage : message passing utility between the manager and the VM tools (in the Oracle VM templates) ovm_repocontrol : repository level operations ovm_poolcontrol : pool level operations Some of the new changes : at a pool level, acknowledge events and cascade to servers and virtual machines with outstanding events at a pool level, do a rescan of the storage for fibrechannel/iscsi disks if you add new devices (it does this operation then on every running server) at a repository level, fixup a device if it had a failed create repository at a repository level, refresh the repository and this will update the free space in the UI for ocfs2 repositories at a server level, acknowledge server events and cascade to virtual machines if needed at a VM level, acknowledge VM events at a VM level, bind vcpus to cores with vcpuset/vcpuget Please see the man pages and remember that these tools are just written As Is - no SRs... (per the documentation) Hopefully they are useful.

    Read the article

  • 8 Bit Beats – Video Game Themes Remixed

    - by Jason Fitzpatrick
    What do you get when you cross classic video game themes with a club beat? These Subwoofer-maxing remixes take Link and Mario to the dance floor. Courtesy of NickplosionFX, the above video remixes The Legend of Zelda and Pac-Man with a healthy dose of back beat. Other offerings from 8 Bit Beats include a Super Mario Bros. 3 remix. Have a source for other great video game remixes? Sound off in the comments. 8 Bit Beats – Zelda/Pac-Man [YouTube] What To Do If You Get a Virus on Your Computer Why Enabling “Do Not Track” Doesn’t Stop You From Being Tracked HTG Explains: What is the Windows Page File and Should You Disable It?

    Read the article

  • APEX Theme 25 (Blue/Responsive): What does "responsive" mean ...?

    - by carstenczarski
    Along with many other new features, APEX 4.2 introduced new "responsive" themes that you can use for your new (or existing) applications. But what exactly is a "responsive theme" ...? In our latest community tip we give a short introduction to responsive web design and how to use it in APEX. The tip also contains practical tips and tricks for working with Theme 25: did you know that you can hide parts of a page, for example on smartphones, with a single CSS rule ...?

    Read the article

  • What do you call the process of converting line breaks into html elements?

    - by Ben Lee
    On sites with user-created content (such as programmers SE) or blogging software back-ends, line breaks entered by the user in the content area are frequently converted into <br> and/or <p> tags when rendered on the front-end. For example, this: A limerick There once was a man from Nantucket Who kept all his cash in a bucket. Might render html like this: <p> A limerick </p> <p> There once was a man from Nantucket<br> Who kept all his cash in a bucket. </p> What is the standard name for this process of converting line breaks into html?

    Read the article

  • is this the correct way to use glTexCoordPointer?

    - by RubyKing
    Hey all, just trying to work out how to use the function glTexCoordPointer. Here is the man page http://www.opengl.org/sdk/docs/man/xhtml/glTexCoordPointer.xml which states that I must set a pointer to the first element of the array that uses the texture coordinate. Here is my array static const GLfloat GUIVertices[] = { //FIRST QUAD 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, -1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, -1.0f, 0.94f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.94f, 0.0f, 1.0f, 1.0f, 1.0f, //2ND QUAD // x y z w X Y 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, 0.0f, -1.0f,-1.0f, 0.0f, 1.0f, 0.0f, 0.0f, -1.0f,-0.94f, 0.0f,1.0f, 0.0f, 1.0f, 1.0f, -0.94f, 0.0f,1.0f, 1.0f, 1.0, }; But how do I set the pointer correctly? Like this: glTexCoordPointer(1, GL_FLOAT, 6, reinterpret_cast<const GLvoid *>(29 * sizeof(float)) ); for the fifth element on the 2nd quad, first row? Any help is appreciated.

    Read the article

  • Helpful Line-Mode Scripts

    - by Ulrike Schwinn (DBA Community)
    The scripts shipped in the $ORACLE_HOME/rdbms/admin directory have always offered DBAs and developers extra help with their work. They are available for use in the rdbms/admin directory automatically after every installation. But how do you find exactly the script that offers the help you need? There is no documentation covering all of the scripts. You can go by the script names, since they are listed with descriptive names, and you can read the short introduction at the top of each script - but that is tedious and time-consuming. The new tip therefore gives an overview of the most important scripts together with a short description of each. More on this here
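
    Until then, a quick hedged way to skim the scripts yourself (paths assume a standard installation; utlrp.sql is just one well-known example):

        cd $ORACLE_HOME/rdbms/admin
        ls *.sql | wc -l                # how many scripts ship with this release
        head -40 utlrp.sql              # the comment header describes what a script does
        grep -il tablespace *.sql       # find scripts that mention a given keyword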

    Read the article

  • Does the .ending on a domain need to be relevant? [closed]

    - by Mat Doidge
    Possible Duplicate: Does Google penalize .me or .tv sites? I see a lot of people now opting to use myname.im or myname.me. But after doing some checking, I found that .im domain names are meant to be Isle of Man endings. Is this correct, and does it matter that people opt to use this domain ending if they are not even based anywhere near the Isle of Man? What I'm really after is whether it is OK to use a domain ending purely for how good it sounds, or whether that is bad practice.

    Read the article

  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
    Problem: We have an equal number of men and women. Each man has a preference score toward each woman, and each woman has one toward each man. Each of the men and women has certain interests, and we calculate the preference scores from those interests. Initially we have an input file with x columns. The first column is the person (man/woman) id; ids are just the numbers 0..n (the first half are men, the second half women). The remaining x-1 columns hold the interests, which are also integers. Using this n by x-1 matrix we build an n by n/2 matrix: the new matrix has all men and women as its rows and the scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know which person id each score belongs to after sorting, so I wanted to use a hash table. Once we have the scores we need to form pairs, following some rules. My trouble is with the second matrix (n by n/2), which needs to say how much preference each man/woman has for each woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the second preferred, and so on, for each man/woman. I hope to get good suggestions on the data structures to use; I prefer PHP or Perl. Thank you in advance. Hey guys, this is not homework. It is a slightly modified version of the stable marriage algorithm, and I have a working solution; I am only optimizing my code. More info: it is very similar to the stable marriage problem, but here we need to calculate the scores based on the interests they share. I have implemented it the way you see on the wiki page http://en.wikipedia.org/wiki/Stable_marriage_problem. My problem is not solving the problem - I solved it and it runs. I am just trying to find a better solution, so I am asking for suggestions on the type of data structure to use. Conceptually I tried using an array of hashes, where the array index gives the person id and the hash in it maps ids to scores in sorted order. I start with an array of hashes and sort the hashes by value, but I could not store the sorted hashes back in an array, so I just stored the keys after sorting and used them to look up the values in my initial unsorted hashes. Can we store the hashes after sorting? Can you suggest a better structure?

    Read the article

  • How to get javascript object references or reference count?

    - by Tauren
    How to get reference count for an object Is it possible to determine if a javascript object has multiple references to it? Or if it has references besides the one I'm accessing it with? Or even just to get the reference count itself? Can I find this information from javascript itself, or will I need to keep track of my own reference counters. Obviously, there must be at least one reference to it for my code access the object. But what I want to know is if there are any other references to it, or if my code is the only place it is accessed. I'd like to be able to delete the object if nothing else is referencing it. If you know the answer, there is no need to read the rest of this question. Below is just an example to make things more clear. Use Case In my application, I have a Repository object instance called contacts that contains an array of ALL my contacts. There are also multiple Collection object instances, such as friends collection and a coworkers collection. Each collection contains an array with a different set of items from the contacts Repository. Sample Code To make this concept more concrete, consider the code below. Each instance of the Repository object contains a list of all items of a particular type. You might have a repository of Contacts and a separate repository of Events. To keep it simple, you can just get, add, and remove items, and add many via the constructor. var Repository = function(items) { this.items = items || []; } Repository.prototype.get = function(id) { for (var i=0,len=this.items.length; i<len; i++) { if (items[i].id === id) { return this.items[i]; } } } Repository.prototype.add = function(item) { if (toString.call(item) === "[object Array]") { this.items.concat(item); } else { this.items.push(item); } } Repository.prototype.remove = function(id) { for (var i=0,len=this.items.length; i<len; i++) { if (items[i].id === id) { this.removeIndex(i); } } } Repository.prototype.removeIndex = function(index) { if (items[index]) { if (/* items[i] has more than 1 reference to it */) { // Only remove item from repository if nothing else references it this.items.splice(index,1); return; } } } Note the line in remove with the comment. I only want to remove the item from my master repository of objects if no other objects have a reference to the item. 
Here's Collection: var Collection = function(repo,items) { this.repo = repo; this.items = items || []; } Collection.prototype.remove = function(id) { for (var i=0,len=this.items.length; i<len; i++) { if (items[i].id === id) { // Remove object from this collection this.items.splice(i,1); // Tell repo to remove it (only if no other references to it) repo.removeIndxe(i); return; } } } And then this code uses Repository and Collection: var contactRepo = new Repository([ {id: 1, name: "Joe"}, {id: 2, name: "Jane"}, {id: 3, name: "Tom"}, {id: 4, name: "Jack"}, {id: 5, name: "Sue"} ]); var friends = new Collection( contactRepo, [ contactRepo.get(2), contactRepo.get(4) ] ); var coworkers = new Collection( contactRepo, [ contactRepo.get(1), contactRepo.get(2), contactRepo.get(5) ] ); contactRepo.items; // contains item ids 1, 2, 3, 4, 5 friends.items; // contains item ids 2, 4 coworkers.items; // contains item ids 1, 2, 5 coworkers.remove(2); contactRepo.items; // contains item ids 1, 2, 3, 4, 5 friends.items; // contains item ids 2, 4 coworkers.items; // contains item ids 1, 5 friends.remove(4); contactRepo.items; // contains item ids 1, 2, 3, 5 friends.items; // contains item ids 2 coworkers.items; // contains item ids 1, 5 Notice how coworkers.remove(2) didn't remove id 2 from contactRepo? This is because it was still referenced from friends.items. However, friends.remove(4) causes id 4 to be removed from contactRepo, because no other collection is referring to it. Summary The above is what I want to do. I'm sure there are ways I can do this by keeping track of my own reference counters and such. But if there is a way to do it using javascript's built-in reference management, I'd like to hear about how to use it.

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '–incremental-base' was introduced. Using this option a user can take in incremental backup without specifying the '–start-lsn' option. Description of this option can be found here. Instead of '–start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup. Because of popular demand, in MEB 3.7.1 the option '-incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '–incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup. Details A new prefix 'history:' has been introduced for the –incremental-base option and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command: $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows: If 'BD' is the backup destination of the last successful backup in mysql.backup_history table and 'BHT' is the mysql.backup_history table if can_read_files_at_BD:     if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:         continue_with_backup()     else         return_with_error() else     continue_with_backup() Advantages Apart from ease of usability an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '–with-timestamp' option along with this new option. For example, the following command $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup  can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo . Limitations The option '--incremental-base=history:last_backup' should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations). should not be used after any temporary or experimental backups performed on the server (which where successful!). needs to be used with precaution since any intermediate successful backup without the –no-connection will be used as the base backup for the next incremental backup.  
will give an error in case a valid backup exists at the location of the last successful backup and whose end LSN is different from that of the last successful backup found in the backup_history table.
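
    A hedged way to see what history:last_backup will resolve to before running the backup (the column names are assumptions about the mysql.backup_history table that MEB maintains; check them against your MEB version's documentation):

        mysql -e "SELECT backup_id, backup_type, start_time, end_time, end_lsn
                  FROM mysql.backup_history
                  ORDER BY end_time DESC LIMIT 3;"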

    Read the article

  • rpm file conflict after alien conversion

    - by Zitrax
    I have a program for which I generate a .deb file. The .deb file works fine on the systems I have tried it on (also tested with lintian). Previously it has worked to use alien to convert this to .rpm and install it on Suse. However it is now about a year since I tried it the last time and now I get an error when trying to install the alien made rpm on Fedora 11, I get this error: file /usr/share/icons/default.kde from install of testpkg-0.2-2.i386 conflicts with file from package kdelibs3-3.5.10-13.fc11.1.i586 Listing the content of the rpm file: $ rpm -qlp testpkg-0.2-2.i386.rpm / /usr /usr/games /usr/games/testpkg /usr/lib /usr/lib/libfmod-3.75.so /usr/share /usr/share/app-install /usr/share/app-install/icons /usr/share/app-install/icons/testpkg.png /usr/share/applications /usr/share/applications/testpkg.desktop /usr/share/doc /usr/share/doc/testpkg /usr/share/doc/testpkg/changelog.gz /usr/share/doc/testpkg/copyright /usr/share/games /usr/share/games/testpkg /usr/share/games/testpkg/images /usr/share/games/testpkg/images/bb.dat /usr/share/games/testpkg/images/bb_bg.dat /usr/share/games/testpkg/images/bubblemad_8x8.png /usr/share/games/testpkg/images/goldfont.png /usr/share/games/testpkg/lvl /usr/share/games/testpkg/lvl/lvl001.txt /usr/share/games/testpkg/lvl/lvl002.txt /usr/share/games/testpkg/lvl/lvl003.txt /usr/share/games/testpkg/lvl/lvl004.txt /usr/share/games/testpkg/lvl/lvl005.txt /usr/share/games/testpkg/lvl/lvl006.txt /usr/share/games/testpkg/lvl/lvl007.txt /usr/share/games/testpkg/music /usr/share/games/testpkg/music/alfa.it /usr/share/games/testpkg/music/beta.it /usr/share/games/testpkg/sounds /usr/share/games/testpkg/sounds/bounce.wav /usr/share/games/testpkg/sounds/click.wav /usr/share/games/testpkg/sounds/warning.wav /usr/share/icons /usr/share/icons/default.kde /usr/share/icons/default.kde/16x16 /usr/share/icons/default.kde/16x16/apps /usr/share/icons/default.kde/16x16/apps/testpkg.png /usr/share/man /usr/share/man/man6 /usr/share/man/man6/testpkg.6.gz Am I wrong in putting the kde icons in /usr/share/icons/default.kde which seem to be a symbolic link ? It's a symbolic link on both Kubuntu 9.10 and Fedora 11 though. Sounds like a common situation that the same directory is needed for different packages, so why is it a conflict ?
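
    A hedged sketch for narrowing this down and working around it (the forced install is a workaround, not a fix; the cleaner change is to stop packaging the shared path at all):

        rpm -qf /usr/share/icons/default.kde             # which installed package owns the conflicting path
        rpm -ivh --replacefiles testpkg-0.2-2.i386.rpm   # force the install over the file conflict

        # Cleaner fix: drop /usr/share/icons/default.kde from the package and ship the icon
        # under the hicolor theme instead, e.g. /usr/share/icons/hicolor/16x16/apps/testpkg.png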

    Read the article
