Search Results

Search found 555 results on 23 pages for 'kiss stefan'.

Page 4/23

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now.  But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them.  This made me think that writing up a few articles discussing the different features would be a good idea.  Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007.  Those documents are still well worth reading - the Beginners Guide in particular, although based on LDoms 1.0, is still a good place to start.  However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today.  In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).
    So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms.  LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware.  This virtual hardware is then used to create a number of guest systems, each of which behaves very much like a system running on bare metal:  Each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it.  Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use of both supporting functions in the sun4v SPARC instruction set and the overall CPU architecture to fulfill its function.
    The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS.  For example, the current T4 CPU has eight cores, each running eight threads, for a total of 64 threads per socket.  To the OS, this looks like 64 CPUs.  The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do.  The hypervisor only assigns CPUs and then steps aside.  It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests.  Likewise, memory is assigned exclusively to individual guests.  Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory.  Again, the hypervisor is not involved in the actual memory access, it only maintains isolation between guests.
    During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain.  Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources.  If you were running the system un-virtualized, this is what you'd be working with.  To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory.  This frees up most of the available CPU and memory resources for guest domains.  IO is a little more complex, but still quite straightforward.  When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services.
    In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC.  I will cover these more advanced features in detail later.  For now, let's have a short look at the initial way IO was virtualized in LDoms: For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch.  You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction.  These IO services connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains.  For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest.  That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device.  For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system.  For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers.
    The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor.  It uses so called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services.  These LDCs work very much like high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.
    To see all this in action, let's now look at a first example.  I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems.  In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.
      root@sun # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY   UTIL  UPTIME
      primary  active  -n-c--  UART  512   261632M  0.3%  2d 13h 58m
      root@sun # ldm add-vconscon port-range=5000-5100 \
        primary-console primary
      root@sun # svcadm enable vntsd
      root@sun # svcs vntsd
      STATE    STIME    FMRI
      online   9:53:21  svc:/ldoms/vntsd:default
      root@sun # ldm set-vcpu 16 primary
      root@sun # ldm set-mau 1 primary
      root@sun # ldm start-reconf primary
      root@sun # ldm set-memory 7680m primary
      root@sun # ldm add-config initial
      root@sun # shutdown -y -g0 -i6
    So what have I done: I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service.  The vntsd will later provide console connections to guest systems, very much like serial NTSs do in the physical world.  Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems.  I also assigned one MAU to this domain.  A MAU is a crypto unit in the T3 CPU.  These need to be explicitly assigned to domains, just like CPU or memory.  (This is no longer the case with T4 systems, where crypto is always available everywhere.)  Before I reassigned the memory, I started what's called a "delayed reconfiguration" session.
    That avoids actually doing the change right away, which would take a considerable amount of time in this case.  Instead, I'll need to reboot once I'm all done.  I've assigned 7680MB of RAM to the primary.  That's 8GB less the 512MB which the hypervisor uses for its own private purposes.  You can, depending on your needs, work with less.  I'll spend a dedicated article on sizing, discussing the pros and cons in detail.  Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box.  (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)
    Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2.  We will later use these to connect virtual disks and virtual network ports of our guest systems to real world storage and network.
      root@sun # ldm add-vds primary-vds
      root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary
    You are free to choose whatever names you like for the virtual disk service and the virtual switch.  I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation.  For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.
    This already concludes the configuration of the control domain.  We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, virtual disk service and vswitch - so that guest systems can actually interact with the outside world.  The system is now ready to create guests, which I'll describe in the next section.
    For further reading, here are some recommended links:
      The LDoms 2.2 Admin Guide
      The "Beginners Guide to LDoms"
      The LDoms Information Center on MOS
      LDoms on OTN
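    Before creating any guests, a quick sanity check after the reboot doesn't hurt.  The following is a minimal sketch of how one might verify the setup; the names primary-vds, switch-primary, primary-console and "initial" are the ones chosen above, and the output is omitted here:
      # List the virtual console, disk and switch services owned by the primary domain
      root@sun # ldm list-services primary
      # Confirm that the configuration saved to the ILOM ("initial") is the one in use
      root@sun # ldm list-config
      # Review everything bound to the primary domain (CPU, memory, MAU, IO, services)
      root@sun # ldm list-bindings primary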

    Read the article

  • Enable remote VNC from the commandline?

    - by Stefan Lasiewski
    I have one computer running Ubuntu 10.04, which is running Vino, the default VNC server. I have a second Windows box which is running a VNC client, but does not have any X11 capabilities. I am ssh'd into the Ubuntu host from the Windows host, but I forgot to enable VNC access on the Ubuntu host. On the Ubuntu host, is there a way for me to enable VNC connections from the commandline? Update: As @koanhead says below, there is no man page for vino (e.g. man -k vino and info vino return nothing), and vino --help doesn't show any help.
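    One possible approach over ssh, sketched below: on GNOME 2 (Ubuntu 10.04), Vino appears to keep its settings in gconf under /desktop/gnome/remote_access, so the keys can be flipped with gconftool-2 and the server started by hand. The gconf key names and the vino-server path are assumptions to verify on the actual system:
      # Point the tools at the running desktop session
      export DISPLAY=:0.0
      # Enable remote access (and optionally skip the local confirmation prompt)
      gconftool-2 -s -t bool /desktop/gnome/remote_access/enabled true
      gconftool-2 -s -t bool /desktop/gnome/remote_access/prompt_enabled false
      # Start the VNC server if it is not already running
      /usr/lib/vino/vino-server &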

    Read the article

  • Referencing environment variables *in* /etc/environment?

    - by Stefan Kendall
    I recently discovered /etc/environment, which seems a more standard way to set up simple environment variables than scripts, but I was wondering if there was a way to back-reference environment variables in the /etc/environment file. That is, I have this:
      JAVA_HOME="/tools/java"
      GRAILS_HOME="/tools/grails"
      GROOVY_HOME="/tools/groovy"
      GRADLE_HOME="/tools/gradle"
      PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
    If I try to add $JAVA_HOME/bin to the PATH definition, however, I get a literal $JAVA_HOME/bin, and not the interpolated variable. To remedy this, I'm creating environment.sh in profile.d to add the /bin entries to the path, but this seems sloppy and disorganized. Is there a way to backreference the environment variables in /etc/environment?
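    For reference, a minimal sketch of the profile.d workaround mentioned above - /etc/environment is parsed by pam_env rather than by a shell, so it cannot interpolate variables, while a script in /etc/profile.d is sourced by login shells and can. The file name environment.sh is the one from the question; everything else is an assumption:
      # /etc/profile.d/environment.sh
      export JAVA_HOME=/tools/java
      export GRAILS_HOME=/tools/grails
      export GROOVY_HOME=/tools/groovy
      export GRADLE_HOME=/tools/gradle
      # Here $JAVA_HOME etc. are expanded, because a shell is doing the work
      export PATH="$PATH:$JAVA_HOME/bin:$GRAILS_HOME/bin:$GROOVY_HOME/bin:$GRADLE_HOME/bin"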

    Read the article

  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that fail to achieve either user-friendly content integration or satisfactory performance. Our intention is to close any knowledge gap that our previous posts might have left you with, so this post will repeat some recommendations and reference back to older, useful posts. Moreover, this post will help you understand, from the ground up, how to design, architect and implement business-enabled, responsive and performant portals with complex requirements for business-centric information publishing.
    Design the Information Model
    The key to successful portal deployments is information modeling. It is essential to understand the use case you are designing for, so here is a set of questions you need to ask yourself or your customer:
      Question: Who will own the content, IT or Business? Answer: Business
      Question: Who will publish the content, IT or Business? Answer: Business
      Question: Will there be multiple publishers? Answer: Yes
      Question: Are the publishers computer scientists? Answer: No
      Question: How often does the information change - daily, weekly, monthly? Answer: Daily, weekly
    If your answers match at least two of the above, we strongly recommend you design your content with the following principles:
      - Divide your pages into logical sections, where each section is marked with its purpose
      - Assign capabilities to each section: does it contain text, images, formatting, and/or is it static and populated through other contextual information
      - Select the editor/design element type:
        WYSIWYG - rich text
        Plain Text - non-formatted text
        Image - image object
        Static List - static list of formatted information
        Dynamic Data List - assembled information from multiple data files through a CMIS query
    The result of such a design exercise is a map of page sections to element types, like the example tables shown in the original post. Based on the required elements in the design (column 3 from the left) you will now simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition, see the following post: Region Definition Post - note, see instruction 7 for details. Each region definition can now be used to instantiate data files; a data file holds the actual data for each element in the region definition. Another way to look at this is to see the region definition as an extension of the metadata model in WebCenter Content for each data file item.
    Design content templates
    With a solid, dependable information model we can now proceed to template creation and page design. This phase focuses on how to place the content sections from the region definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wireframes to base the logic on; however, there are still a few considerations to pay attention to:
      - Base the template on ADF and make only necessary exceptions to markup when required
      - Leverage ADF design components for tabs, accordions and other similar components; this way the design in the content published areas will comply with other design areas based on custom ADF taskflows
      - There is no performance impact when using metadata or region definition based data
      - All data access, regardless of type (metadata or XML data), can be accessed via the Content Presenter node.
    See below for applied examples of how to access data:
      Access a metadata property from a document: #{node.propertyMap['myProp'].value} - myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata field.
      Access element data from a data file's XML: #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} - Region Definition Name is the region definition that the current data file instantiates; Element name is the element value you would like to grab from the data file.
    I recommend you read the following useful posts on the content template topic:
      CMIS queries and template creation - note, see instruction 9 for details
      Static List template rendering
    For more information on templates:
      Single Item Content Template
      Multi Item Content Template
      Expression Language
    Internationalization Considerations
    When integrating content assets via the Content Presenter, you by now probably understand that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file only supports one language, since it is not practical or business friendly to mix languages in one complex structure. You would therefore be left with a very common dilemma: having to build a completely new portal for each locale, which is not a good option! However, with a little bit of information modeling and a clear naming convention this can be addressed. Basically, you simply make sure that all content items/data files are named with a predictable naming convention like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach you not only enable a simple mechanism for internationalized content, you also preserve the ability of the Content Presenter to support business-accessible, run-time publishing of information on existing and new pages. I recommend you read the following useful post on internationalization topics: Internationalize with Content Presenter
    Integrate with Review & Approval processes
    Today the review and approval functionality and configuration is based on WebCenter Content - Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate whether the document is under any review/approval process. So, for instance, if a Criteria Workflow is configured to trigger on any document with Version "2" or higher and Content Type "Instructions", any matching content item version will, on check-in, enter the workflow before being released for general access. A few things to consider when configuring Criteria Workflows:
      - Make sure not to trigger on version one for content items that are data files - if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document will only be available after successful completion of the workflow.
      - Approval workflows sometimes require more complex criteria; if that is the case, the recommendation is that the metadata triggering such criteria is automatically populated, which can be achieved through many approaches, including Content Profiles.
      - Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows.
    When Criteria Workflows are configured, the Content Presenter supports the editors with the approval process directly inline in the "Contribution mode" of the portal. In addition to approve/reject and the details of the task, the Content Presenter natively supports viewing the current and future version of the change being approved.
    Architectural recommendations
      - To support review & approval processes, minimize the number of data files per page
      - Each CMIS query can consume significant time depending on the complexity of the query - minimize the number of CMIS queries per page
      - Use Content Presenter templates based on ADF - this way you minimize the design considerations and optimize the usage of caching
      - Implement the page with as few data files as possible - this simplifies the publishing process, increases performance and simplifies the release process
      - A named data file (node) or list of named nodes, when integrated into pages, increases performance vs. querying for data
      - A named data file (node) or list of named nodes, when integrated into pages, enables business-centric page creation and publishing and reduces the need for IT department interaction
    Summary
    Just because one architectural decision solves a business problem doesn't mean it's the right one; when designing portals, all parts of the architecture have to be in harmony and not impact each other. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance or both. Therefore the best approach is to first design for simplicity, so that even a non-technical user can operate it; after that, consider the performance impact; and finally look at the technology challenges these bring and work around them first with out-of-the-box features, and after that design and develop functions to complement the shortcomings.

    Read the article

  • TechEd 2012 - last day

    - by Stefan Barrett
    I miss when TechEd was 5 days long! It's Thursday already and we are on the last day. The snacks haven't appeared, but more developer sessions have. Having access to the online schedule is very important, since the new sessions are usually the more interesting ones. On the whole, I think the wifi network has been worse this year - more blank spots, and more areas where performance is bad. I do think it's funny that I get better reception on my iPad than on my phones (iPad & Nokia/Microsoft). There seem to be fewer areas for people to plug in their own laptops this year - I do wonder, since more and more people have smart phones, and since most of the attendees are from America, perhaps they are not using the wifi but rather their own phone provider. If I were in Japan, I would probably do the same. About to attend a session on F#, something which is probably going to be important for me over the next year.

    Read the article

  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
    Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain.  We saw how resources were put aside for guest systems and what infrastructure we need for them.  With that, we are now ready to create a first, very simple guest domain.  In this first example, we'll keep things very simple.  Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO as well as security.
    For now, let's start with this very simple guest.  It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port.  CPU and RAM are easy.  The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain.  This is very much like plugging a cable into a computer system on one end and a network switch on the other.  For the boot disk, we'll need two things: A physical piece of storage to hold the data - this is called the backend device in LDoms speak.  And then a mapping between that storage and the guest domain, giving it access to that virtual disk.  For this example, we'll use a ZFS volume for the backend.  We'll discuss what other options there are for this and how to choose the right one in a later article.  Here we go:
      root@sun # ldm create mars
      root@sun # ldm set-vcpu 8 mars
      root@sun # ldm set-mau 1 mars
      root@sun # ldm set-memory 8g mars
      root@sun # zfs create rpool/guests
      root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
      root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \
        mars.root@primary-vds
      root@sun # ldm add-vdisk root mars.root@primary-vds mars
      root@sun # ldm add-vnet net0 switch-primary mars
    That's all, mars is now ready to power on.  There are just three commands between us and the OK prompt of mars:  We have to "bind" the domain, start it and connect to its console.  Binding is the process where the hypervisor actually puts all the pieces that we've configured together.  If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else).  Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP.  By default, the domain will then try to boot right away.  If we don't want that, we can set "auto-boot?" to false.  Finally, we'll use telnet to connect to the console of our newly created guest.  The output of "ldm list" shows us what port has been assigned to mars.  By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here.
      root@sun # ldm set-variable auto-boot\?=false mars
      root@sun # ldm bind mars
      root@sun # ldm start mars
      root@sun # ldm list
      NAME     STATE   FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
      primary  active  -n-cv-  UART  8     7680M   0.5%  1d 4h 30m
      mars     active  -t----  5000  8     8G      12%   1s
      root@sun # telnet localhost 5000
      Trying 127.0.0.1...
      Connected to localhost.
      Escape character is '^]'.
      ~Connecting to console "mars" in group "mars" ....
      Press ~? for control options ..
      {0} ok banner
      SPARC T3-4, No Keyboard
      Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
      OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
      Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.
      {0} ok
    We're done, mars is ready to install Solaris, preferably using AI, of course ;-)  But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here:
      {0} ok printenv auto-boot?
      auto-boot? =          false
      {0} ok printenv boot-device
      boot-device =         disk net
      {0} ok devalias
      root                  /virtual-devices@100/channel-devices@200/disk@0
      net0                  /virtual-devices@100/channel-devices@200/network@0
      net                   /virtual-devices@100/channel-devices@200/network@0
      disk                  /virtual-devices@100/channel-devices@200/disk@0
      virtual-console       /virtual-devices/console@1
      name                  aliases
    We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked.  Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started.  The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image.  The actual devices these aliases point to are shown with the command "devalias".  Here, we have one line for both "disk" and "net".  The device paths speak for themselves.  Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device.  These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk".  Remember this, as it is very useful once you have several dozen disk devices...
    To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity.  This should be enough to get you going.  I will cover all the more advanced features and a little more theoretical background in several follow-on articles.  For some background reading, I'd recommend the following links:
      LDoms 2.2 Admin Guide: Setting up Guest Domains
      Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session.
      OpenBoot 4.x command reference - All the things you can do at the ok prompt
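    For completeness, a hedged sketch of how such a test guest can be torn down again, using the names from above:
      root@sun # ldm stop mars                 # shut the domain down (add -f to force if it hangs)
      root@sun # ldm unbind mars               # release the bound CPU, memory and IO resources
      root@sun # ldm remove-vdisk root mars    # optionally detach the virtual boot disk again
      root@sun # ldm destroy mars              # optionally delete the domain definition itself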

    Read the article

  • TechEd 2012 - day 3

    - by Stefan Barrett
    The content has become more useful for me as a developer, and I've now seen two things which I think will make a big difference: Fakes in VS2012 - allows me to stub or fake out libraries, making unit testing easier/possible. C++ AMP & auto - auto might get me to start using C++ again (it makes code like for-each loops much nicer/easier to write), while AMP is something I want to play with (it moves processing onto the GPU). The food got a little better, while there was less sign of the snacks.

    Read the article

  • Secure Deployment of Oracle VM Server for SPARC - updated

    - by Stefan Hinker
    A while ago I published a paper with recommendations for a secure deployment of LDoms.  Quite a few things have changed since then, and an update of the paper became necessary.  Besides some minor spelling corrections, a number of links were outdated or had changed.  The main reason for the revision, however, was the emergence of a second usage model for LDoms.  In a few short words:  Especially with the success of the T4-4, the hardware partitioning capabilities this platform offers are being used more and more often.  Similar to Dynamic System Domains, whole PCIe root complexes are assigned to a domain.  This changed usage needed to be covered in the paper.  The updated version is available here: Secure Deployment of Oracle VM Server for SPARC, Second Edition.  I hope it is helpful!

    Read the article

  • TechEd 2012 - last session

    - by Stefan Barrett
    Nearly over. For the last session I'm attending a talk on C++ apps in Windows Metro. When I came to TechEd I didn't think I would attend so many sessions on C++, but somehow this has proved more interesting this year. While .NET 4.5 is interesting, I've been playing with it for a while now, so there weren't a ton of surprises. Of course, I still want it at work, but who knows how long that will take - so I will just have to use it at home, once I get the licensing sorted out. So expensive. I've been pleasantly surprised with Windows 8, and will be trying that out, and at least that is covered on TechNet. While the weather has not always been perfect during TechEds, this is the worst I've seen it so far - it's wet outside. Next year's conference in New Orleans should be interesting, well, outside of the conference itself that is. I do like conferences that are held within the city itself, unlike Orlando where there isn't really anything nearby.

    Read the article

  • automatically starting crashplan backup when a usb harddisc is connected

    - by Stefan Armbruster
    Using CrashPlan I've configured two backup sets: an online backup in CrashPlan's cloud (this is running perfectly), and a local backup on a USB hard disk directly connected to the laptop. The USB drive is only connected rarely, when I'm at home. When connecting the drive it mounts automatically. Is there a way to start the local backup whenever the USB disk is connected? My guess is that using udev it should be possible to "somehow" tell CrashPlan to re-evaluate the presence of the backup location. Any ideas how to do this?
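    A rough sketch of the udev part; the UUID, the script path and the CrashPlan init script name are assumptions to verify against the actual installation:
      # 1. Find the filesystem UUID of the backup drive while it is plugged in
      sudo blkid /dev/sdb1
      # 2. Create /etc/udev/rules.d/99-crashplan-usb.rules with one line:
      #    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="PUT-UUID-HERE", RUN+="/usr/local/bin/crashplan-usb-attached.sh"
      sudo udevadm control --reload-rules
      # 3. Keep the RUN script short (udev kills long-running helpers): have it schedule
      #    the real work, e.g. with "at now", and in that job nudge the CrashPlan engine
      #    so it re-checks its destinations, for example:
      #    /etc/init.d/crashplan restart    # service name may differ per install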

    Read the article

  • A closer look at the T5-4 TPC-H result

    - by Stefan Hinker
    By now, many will probably have seen the new TPC-H result for the SPARC T5-4 that was submitted to the TPC on June 7.  As usual, the main points of this benchmark have already been summarized by our benchmark team on "BestPerf".  But there is more here that deserves a closer look.
    Scalability: The TPC advises against comparing TPC-H results from different size classes.  But even within the 3000GB class it is interesting: The SPARC T4-4 with 4 CPUs (32 cores at 3.0 GHz) delivers 205,792 QphH.  The SPARC T5-4 with 4 CPUs (64 cores at 3.6 GHz) delivers 409,721 QphH.  That means only 1,863 QphH, or 0.45%, are missing from 100% scalability, if one assumes that twice the number of cores should deliver twice the result.  Being a little more demanding, one could of course expect a factor of 2.4 if the higher clock rate is taken into account.  That would put the bar at 493,901 QphH, and the SPARC T5-4 would then be at 83%.  Which raises the question: what did not scale here?  Most likely the disk storage!  Here, too, a closer look is worthwhile.
    Disk storage: The report on BestPerf and the Full Disclosure Report at the TPC contain some interesting details about the storage and its configuration.  The SPARC T4-4 configuration used 12 2540-M2 arrays, each delivering about 1.5 GB/s of throughput, roughly 18 GB/s in total.  The arrays were apparently connected directly to the server's 24 8Gbit FC ports, with two cables per array.  With the 2x 8Gbit ports per array, a theoretical maximum of 2 GB/s per array could be reached.  In practice, 1.5 GB/s were delivered, which is pretty much the realistic maximum.  For the SPARC T5-4 run, twice as many disks were used; for this, each of the 2540-M2 arrays was extended with an additional disk tray.  With this configuration, a maximum throughput of 33 GB/s was achieved (according to BestPerf) - not quite double that of the SPARC T4-4 run.  To actually deliver double the throughput (36 GB/s), each of the 12 arrays would have had to deliver 3 GB/s over its four 8Gbit ports.  The FDR lists only 12 dual-port FC HBAs, which explains the use of the Brocade FC switches: all four 8Gbit ports of each array were connected to the switches, which then funneled the data streams into the server's 24 16Gbit HBA ports.  The theoretical maximum of each storage array would now be 4 GB/s.  Factoring in protocol and "reality" overhead, the 2.75 GB/s actually delivered is not bad at all.  With these numbers in mind, doubling the SPARC T4-4 result is a good achievement - and at the same time a good explanation of why it did not scale all the way to 2.4x.  As a side note: neither the SPARC T4-4 nor the SPARC T5-4 had any flash devices in the measured configuration.
    Competition: Ever since the T4 systems came to market, our competitors have been trying hard to leave the impression everywhere that the performance of the SPARC CPU core is still inadequate.  They also seem convinced that (overly) large caches and high clock rates are the only keys to real server performance.
    However, when I look at the public TPC-H results, this is what I see:
      TPC-H @3000GB, Non-Clustered Systems
      System                                                           QphH
      SPARC T5-4            3.6 GHz SPARC T5          4/64 - 2048 GB   409,721.8
      SPARC T4-4            3.0 GHz SPARC T4          4/32 - 1024 GB   205,792.0
      IBM Power 780         4.1 GHz POWER7            8/32 - 1024 GB   192,001.1
      HP ProLiant DL980 G7  2.27 GHz Intel Xeon X7560  8/64 - 512 GB   162,601.7
    In short: with 32 cores (at 3 GHz and 4MB L3 cache), the SPARC T4-4 delivers more QphH@3000GB than IBM does with its 32-core POWER7 (at 4.1 GHz and 32MB L3 cache), and also more than HP with a 64-core Intel Xeon system (2.27 GHz and 24MB L3 cache).  I wonder where exactly SPARC is inadequate here?
    Now one could of course argue that neither result is exactly new.  Well, in the absence of newer results, one can speculate a little: IBM's current Performance Report lists the Power 780 mentioned above with an rPerf value of 425.5.  A fitting successor system with Power7+ CPUs would be the Power 780+ with 64 cores, available at 3.72 GHz.  It is rated with an rPerf value of 690.1, i.e. 1.62x more.  If one assumes that disk storage is not the limiting factor (IBM tested with 177 SSDs; they are welcome to raise that to 400) and takes IBM's own performance estimate as a basis, one may expect a theoretical result of 311,398 QphH@3000GB.  That, however, would still be far away from the SPARC T5-4 result, and even less favorable in the "per core" metric that IBM values so highly.
    In the x86 world things do not look any better.  Unfortunately, Intel does not provide such handy rPerf tables, so for an estimate I have to fall back on SPECint_rate2006.  (I am not a big fan of such cross-benchmark estimates.  SPECcpu in particular is not well suited to estimate database performance, since there is almost no IO involved.)  The HP system mentioned above is listed at SPEC with 1580 CINT2006_rate.  The best result as of 2013-06-14 for the newer Intel Xeon E7-4870 with 8 CPUs is 2180 CINT2006_rate.  That is 1.38x better.  (Taking only the clock rate into account, it would be 1.32x.)  Carrying this arithmetic further is somewhat idle, but for the impatient reader, here is a small tabular summary:
      TPC-H @3000GB performance speculation
      System                                               QphH*     Improvement over the previous generation
      SPARC T4-4            32 cores SPARC T4              205,792
      SPARC T5-4            64 cores SPARC T5              409,721   2x
      IBM Power 780         32 cores POWER7                192,001
      IBM Power 780+        64 cores POWER7+               311,398*  1.62x
      HP ProLiant DL980 G7  64 cores Intel Xeon X7560      162,601
      HP ProLiant DL980 G7  80 cores Intel Xeon E7-4870    224,348*  1.38x
      * Not actual results - speculative values based on rPerf (POWER7+) or SPECint_rate2006 (HP)
    Of course, IBM and HP are cordially invited to refute these numbers.  But as of today, I am still waiting for current benchmark publications in this segment.
    So what can we take away from all this?
      There are several indications that disk storage was the limiting factor that prevented the SPARC T5-4 from scaling beyond 2x.
      The myth that SPARC cores deliver no performance is just that - a myth.  How about a TPC-H result for Power7+ in return?
      Cache is not the magic performance switch some people seem to take it for.
      Scaling a system, a CPU architecture and an operating system beyond a certain point is hard.  In the x86 world it seems to be a little harder still.
    What is missing?  Well, the topic of price/performance I gladly leave to the sales people ;-)  And finally: no, I have not transferred to marketing.  But sometimes I just cannot hold back...
    Disclosure Statements: The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org, results as of 6/7/13. Prices are in USD. SPARC T5-4 409,721.8 QphH@3000GB, $3.94/QphH@3000GB, available 9/24/13, 4 processors, 64 cores, 512 threads; SPARC T4-4 205,792.0 QphH@3000GB, $4.10/QphH@3000GB, available 5/31/12, 4 processors, 32 cores, 256 threads; IBM Power 780 192,001.1 QphH@3000GB, $6.37/QphH@3000GB, available 11/30/11, 8 processors, 32 cores, 128 threads; HP ProLiant DL980 G7 162,601.7 QphH@3000GB, $2.68/QphH@3000GB, available 10/13/10, 8 processors, 64 cores, 128 threads. SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 18, 2013 from www.spec.org. HP ProLiant DL980 G7 (2.27 GHz, Intel Xeon X7560): 1580 SPECint_rate2006; HP ProLiant DL980 G7 (2.4 GHz, Intel Xeon E7-4870): 2180 SPECint_rate2006.

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms, root domains.  Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots.  In the LDoms terminology, this feature is called "Direct IO" or DIO.  It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain.  Let's look again at the hardware available to mars in the original configuration:
      root@sun:~# ldm ls-io
      NAME                        TYPE   BUS    DOMAIN   STATUS
      ----                        ----   ---    ------   ------
      pci_0                       BUS    pci_0  primary
      pci_1                       BUS    pci_1  primary
      pci_2                       BUS    pci_2  primary
      pci_3                       BUS    pci_3  primary
      /SYS/MB/PCIE1               PCIE   pci_0  primary  EMP
      /SYS/MB/SASHBA0             PCIE   pci_0  primary  OCC
      /SYS/MB/NET0                PCIE   pci_0  primary  OCC
      /SYS/MB/PCIE5               PCIE   pci_1  primary  EMP
      /SYS/MB/PCIE6               PCIE   pci_1  primary  EMP
      /SYS/MB/PCIE7               PCIE   pci_1  primary  EMP
      /SYS/MB/PCIE2               PCIE   pci_2  primary  EMP
      /SYS/MB/PCIE3               PCIE   pci_2  primary  OCC
      /SYS/MB/PCIE4               PCIE   pci_2  primary  EMP
      /SYS/MB/PCIE8               PCIE   pci_3  primary  EMP
      /SYS/MB/SASHBA1             PCIE   pci_3  primary  OCC
      /SYS/MB/NET2                PCIE   pci_3  primary  OCC
      /SYS/MB/NET0/IOVNET.PF0     PF     pci_0  primary
      /SYS/MB/NET0/IOVNET.PF1     PF     pci_0  primary
      /SYS/MB/NET2/IOVNET.PF0     PF     pci_3  primary
      /SYS/MB/NET2/IOVNET.PF1     PF     pci_3  primary
    All of the "PCIE" type devices are available for DIO, with a few limitations.  If the device is a slot, the card in that slot must support the DIO feature.  The documentation lists all such cards.  Moving a slot to a different domain works just like moving a PCI root complex.  Again, this is not a dynamic process and includes reboots of the affected domains.  The resulting configuration is nicely shown in a diagram in the Admin Guide.  There are several important things to note and consider here:
      The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
      Solaris will create nodes for this hardware under /devices.  This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device.  It is very important to understand that all of this PCIe infrastructure is virtual only!  Only the actual endpoint devices are true physical hardware.
      There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: Only if the root domain is up and running will the guest domain have access to the endpoint device.
      The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
      This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain) it will reset and thus disrupt the operation of the endpoint device owned by the guest domain.  The result in the guest is not predictable.  I recommend configuring the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in the chapter "Configuring Domain Dependencies".
    Please consult the Admin Guide, section "Creating an I/O Domain by Assigning PCIe Endpoint Devices", for all the details!  As you can see, there are several restrictions for this feature.  It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.
    Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining.  I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations).  This was a rather short entry, more for completeness.  I believe that DIO can usually be replaced by SR-IOV, which is much more flexible.  I will cover SR-IOV in the next section of this blog series.
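    For illustration, a hedged sketch of what assigning one of the occupied slots (say /SYS/MB/PCIE3) to mars could look like; as noted above this is not dynamic, so the primary domain gets rebooted and mars must be stopped and unbound first:
      # Remove the slot from the primary domain (delayed reconfiguration + reboot)
      root@sun # ldm start-reconf primary
      root@sun # ldm remove-io /SYS/MB/PCIE3 primary
      root@sun # shutdown -y -g0 -i6
      # After the primary is back up, give the slot to the guest and restart it
      root@sun # ldm add-io /SYS/MB/PCIE3 mars
      root@sun # ldm bind mars
      root@sun # ldm start mars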

    Read the article

  • Secure Deployment of Oracle VM Server for SPARC - updated

    - by Stefan Hinker
    Quite a while ago, I published a paper with recommendations for a secure deployment of LDoms.  Many things have happened in the meantime, and an update to that paper was due.  Besides some minor spelling corrections, many obsolete or changed links were updated.  However, the main reason for the update was the introduction of a second usage model for LDoms.  In a few short words: With the success especially of the T4-4, many deployments make use of the hardware partitioning capabilities of that platform, assigning full PCIe root complexes to domains, mimicking dynamic system domains if you will.  This different way of using the hypervisor needed to be addressed in the paper.  You can find the updated version here: Secure Deployment of Oracle VM Server for SPARC, Second Edition.  I hope it'll be useful!

    Read the article

  • file access slow after deletion of many files

    - by stefan
    I recently accidentally created millions of files in one folder (roughly 5 million) and due to limitations I couldn't process them correctly (maximum argument count exceeded for wc / ls and such). So I deleted them, which took quite a while, but now they're gone. I deleted the files with a regular rm. They weren't any system files. So the files are definitively deleted, but the system is very slow on file operations now. ls, cat and auto-completion by pressing tab freeze the terminal for several seconds. Is this some sort of fragmentation issue? Is it an issue with the files still being somehow present?
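    One thing worth checking, sketched below under the assumption that the folder lives on an ext3/ext4 filesystem, whose directory indexes grow with the number of entries but do not shrink after the files are deleted; the path is a placeholder:
      # A directory inode of many megabytes hints at a bloated directory index
      ls -ld /data/huge
      # Recreating the directory gives it a fresh, small index
      mkdir /data/huge.new
      mv /data/huge/* /data/huge.new/ 2>/dev/null   # move any remaining entries
      rmdir /data/huge
      mv /data/huge.new /data/huge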

    Read the article

  • Bluetooth headset A2DP works, HSP/HFP not (no sound/no mic)

    - by Stefan Armbruster
    My Philips SBH9001 headset pairs fine using Ubuntu 12.04. In the audio settings it's properly detected as an A2DP device and as an HSP/HFP device. Hardware: Thinkpad X230, Ubuntu 12.04 64bit, Kernel 3.6.0-030600rc3-generic (built from the Ubuntu mainline repo), Bluetooth device is USB-Id 0a5c:21e6 from Broadcom, headset is a Philips SBH9001. Note: Kernel 3.6 rc3 is used because of a fix for audio on the docking station that is not in any previous branch. Playing audio in A2DP mode works just fine out of the box, but when switching the headset to HSP/HFP mode there is no sound, nor does the microphone work. When connecting the headset, /var/log/syslog shows:
      Aug 25 21:32:47 x230 bluetoothd[735]: Badly formated or unrecognized command: AT+CSRSF=1,1,1,1,1,7
      Aug 25 21:32:49 x230 rtkit-daemon[1879]: Successfully made thread 17091 of process 14713 (n/a) owned by '1000' RT at priority 5.
      Aug 25 21:32:49 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users.
      Aug 25 21:32:50 x230 kernel: [ 4860.627585] input: 00:1E:7C:01:73:E1 as /devices/virtual/input/input17
    When switching from A2DP (standard profile) to HSP/HFP:
      Aug 25 21:34:36 x230 bluetoothd[735]: /org/bluez/735/hci0/dev_00_1E_7C_01_73_E1/fd3: fd(34) ready
      Aug 25 21:34:36 x230 rtkit-daemon[1879]: Successfully made thread 17309 of process 14713 (n/a) owned by '1000' RT at priority 5.
      Aug 25 21:34:36 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users.
      Aug 25 21:34:41 x230 bluetoothd[735]: Audio connection got disconnected
    Any hints how to get HSP/HFP working here?
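    Not a definite fix, but a small diagnostic sketch: with PulseAudio the profile switch can also be forced from a terminal, which sometimes gives a more useful error than the GUI. The card name below is an assumption derived from the headset's address; check the real name first:
      pactl list cards short                                    # find the bluez_card.* entry
      pacmd set-card-profile bluez_card.00_1E_7C_01_73_E1 hsp   # force the HSP/HFP profile
      pacmd list-sources | grep -A3 bluez                       # the headset mic should now appear as a source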

    Read the article

  • What is the best drink to drink when you have read nonsense questions on programmers?

    - by stefan
    I am having a hard time deciding what drink to drink after I have read a nonsense question on programmers.stackexchange. It's either beer or whisky; the beer is nice since you can down it somewhat relaxed, but sometimes I feel the need for something "stronger" because the question is so utterly nonsensical and stupid. Every time I have read a stupid / nonsense question on programmers.stackexchange.com I've asked myself why I didn't write some code instead. I could've probably written countless lines of code by now, together probably building a new Facebook or Linux. But instead I sacrificed my precious time reading questions that shouldn't have been posted on the internet. It really makes me frustrated; I guess that is why I am so often considering the whisky instead of the beer. Since the beer will maybe not calm me down enough, I then have to take the whisky too, which together is a) slightly more expensive and b) more time consuming. So, what is the best drink?

    Read the article

  • Looking for a simple web interface with subversion support and ticket /issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project and we have a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface to use for it. I have tried Redmine, but that installation process was extremely bothersome and involved. Once I got it to work I found out that it was slow and did not meet my expectations, and it seems a bit complex for our needs. I would prefer to find a solution that supports the lighttpd web server; however, that seems to be very hard to come by - those I have found seem to only have Apache support. Functionality I wish for the website:
      - login to an svn account
      - view svn logs
      - view & create issues, todo lists etc.
      - view svn diffs
    Do you have any open source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I wish to host the website on our own servers.

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in the WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need - however, when you get started there are small shortcomings and you ask yourself:
      - how can I re-use what is available out of the box?
      - I wonder what code I need to use to produce similar functions and include my new requirements?
      - must I write a new taskflow?
    The answer to the above questions is, in many cases, simply that you can do a taskflow customization of the out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process. Just to clarify a few naming confusions that might occur when going through that process:
      Customization Role is a function within JDeveloper that allows you to implement view and flow customizations of existing taskflows.
      WebCenter Portal - Spaces Taskflow Customization Framework: this technology scope does not only refer to WebCenter Spaces, it also includes WebCenter Portal/Framework.
      A taskflow customization does not overwrite or replace any code, it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).
    To sum up this simple procedure, I would also like to help you find your way around the main topic of this post series. This series focuses primarily on content integration with WebCenter Portal, so where can you find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and their page fragments.
    Library Reference - WebCenter Document Library Service View
      Content Presenter
        Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
        Taskflow: contentPresenter.xml - the Content Presenter taskflow
        Taskflow: contentPresenterWizard.xml - the publishing wizard to select content, select a template and preview, including contribution
      Document Manager
        Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
        Taskflow: documentManager.xml - the Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing
    For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • introducing pointers to a large software project

    - by stefan
    I have a fairly large software project written in C++. In it, there is a class Foo which represents a structure (by which I don't mean the programmer's struct) in which Foo-objects can be part of a Foo-object. Here's class Foo in its simplest form:
      class Foo {
      private:
          std::vector<unsigned int> indices;
      public:
          void addFooIndex(unsigned int);
          std::vector<unsigned int> getFooIndices();
      };
    Every Foo-object is currently stored in an object of class Bar.
      class Bar {
      private:
          std::vector<Foo> foos;
      public:
          void addFoo(Foo);
          std::vector<Foo> getFoos();
      };
    So if a Foo-object should represent a structure with an "inner" Foo-object, I currently do
      Foo foo;
      Foo innerFoo;
      foo.addFooIndex(bar.getFoos().size() - 1);
      bar.addFoo(innerFoo);
    And to get it, I obviously use:
      Foo foo;
      for ( unsigned int i = 0; i < foo.getFooIndices().size(); ++i ) {
          Foo inner_foo;
          assert( foo.getFooIndices().at(i) < bar.getFoos().size() );
          inner_foo = bar.getFoos().at(foo.getFooIndices().at(i));
      }
    So this is not a problem. It just works. But it's not the most elegant solution. I now want the inner Foos to be "more connected" with the Foo-object. The obvious change would be to turn class Foo into:
      class Foo {
      private:
          std::vector<Foo*> foo_pointers;
      public:
          void addFooPointer(Foo*);
          std::vector<Foo*> getFooPointers();
      };
    So now, for my question: How do I gently change this basic class without messing up the whole code? Is there a "clean way"?

    Read the article

  • "mcp power or thermal limit exceeded" excessive in messages.log

    - by Stefan Thyberg
    I keep getting "mcp power or thermal limit exceeded" every five seconds while my computer is on, regardless of what it's doing. I googled it and found some Intel patches but I don't really know exactly what they do or how to apply them and I also don't want to patch my kernel with these random bits of code from some newsgroup. Can anyone shed some light on what exactly is going on and what the right fix is in this case? Am I better off just waiting for a kernel patch?

    Read the article

  • Choppy/unresponsive keyboard and mouse pointer until suspend

    - by Stefan Thyberg
    I had a problem where my mouse and keyboard would be choppy, the mouse pointer would work at 1-2 fps and the keyboard would keep missing letters as I was typing them. Since they were both USB I suspected there was a problem there immediately. Whenever I got this problem I would suspend the computer and start it again and the problem would be gone. The problem started appearing when I plugged the mouse and keyboard directly into the computer rather than the USB hub in the screen. I'm using a Logitech UltraX Flat and a Razer Lachesis, but I'm not sure if that matters.

    Read the article

  • What's up with LDoms: An article series on Oracle VM Server for SPARC

    - by Stefan Hinker
    Unter dem Titel "What's up with LDoms" habe ich soeben den ersten Artikel einer ganzen Reihe veroeffentlicht.  Ziel der Artikelreihe ist es, das vollstaendige Feature-Set der LDoms zu betrachten.  Da das ganze recht umfangreich ist, werde ich hier von der Zweisprachigkeit abweichen und die Artikel ausschliesslich auf Englisch verfassen.  Ich bitte hierfuer um Verstaendnis. Den ersten Artikel gibt es hier: What's up with LDoms: Part 1 - Introduction & Basic Concepts Viel Spass beim Lesen!

    Read the article

  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The coming posts over the next two months will form a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a content-driven portal requires the development team to consider. The current plan is to cover the following subjects, each in a separate blog post:
      Introduction - Introduction to the series of posts and what to expect at the end of the series
      Components, part 1 - UCM, Site Studio and a high level introduction to content templates
      Components, part 2 - Page Templates and Navigation model
      Components, part 3 - Applied Customization Framework for Content Presenter Taskflows
      Scenario 1 - Enable a Portal for runtime administration
      Scenario 2 - Enable a Portal for Internationalization
      Scenario 3 - Enable a Portal for Content Workflows
    Background: This post series has been written to help customers, partners and consultants understand the concept of a WebCenter Portal project where the main focus, or a majority of the portal, is content interaction. Today, most of the portal installations Oracle WebCenter Portal is involved in consist largely of content-based pages. Many of these portal projects have run or will run into challenges; to mitigate them, the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios; they will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a content-driven portal. The scenarios were selected based on the most common use cases that we have identified to date.

    Read the article

  • Upgrade from 10.10 to 11.04 to 11.10. X starts, Unity desktop not there

    - by Stefan Lasiewski
    I upgraded my Ubuntu Desktop 10.10 system to 11.04 and then 11.10. Now, when I start Ubuntu, the Unity desktop doesn't start correctly. Here's what I do:
      - Start the computer.
      - I log in via the GUI login screen.
      - The X Window System starts. I can see an arrow icon for the mouse, and can move it around the screen.
      - Nothing else starts. There are no desktop icons, no launcher, no taskbar, etc. Right-clicking on the desktop does nothing.
    I've tried some common keyboard shortcuts ("Alt-Tab", "Ctrl-Alt-T", "Ctrl-Alt-Backspace") but nothing happens. If I go to another virtual console and run the command unity, the Unity desktop will start on my primary X desktop. However, I'm seeing many problems, like windows which are only half-drawn and some applications running without the familiar "Minimize, Maximize, Close" buttons. What's happening here? It seems that the X Window System started, but Unity did not start? How can I debug this?
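    A few things that may help narrow it down; this is a sketch only, and the --reset option is assumed to still exist in the Unity version shipped with 11.10:
      # Look for compiz/unity crash messages from the failed login
      tail -n 80 ~/.xsession-errors
      # From a virtual console, reset Unity/Compiz settings and respawn it on the running X display
      DISPLAY=:0 unity --reset &
      # Make sure the upgrade actually pulled in the whole desktop stack
      sudo apt-get install --reinstall unity ubuntu-desktop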

    Read the article

  • blender: 3D model from guide images

    - by Stefan
    In an effort to learn the Blender interface, which is confusing to say the least, I've chosen to model from reference pictures easily found on the web. The problem is that I can't (and won't) get perfect "right", "front" and "top" pictures. Blender only allows you to see the background pictures when in orthographic mode and only from right|front|top, which doesn't help me. How do I proceed to model from non-perfect guide images?

    Read the article
