Search Results

Search found 4110 results on 165 pages for 'arnauld vm'.


  • Error When Loading Images on Local Host Test Server

    - by ke4ktz
    I have a peculiar problem that I just can't seem to find an explanation for. I'm working on an AngularJS site for our family and am integrating data from various web services. Currently I am working on the photos section, which will integrate photos from our Flickr account. I have a main page which lists the various photo sets and displays each set's primary photo along with the title. (Note: I'm using the Flickr 'extras' parameter to return the primary photo's URL in the API calls.)

        <div data-ng-repeat="p in vm.photoSets">
          <a ng-href="#/photos/{{p.id}}">
            <img ng-src="{{p.primary_photo_extras.url_s}}"></img>
          </a>
          <h4>{{p.title._content}}</h4>
        </div>

    When clicking on the photo, the routing will display a page with a list of all the photos from that set, showing the image and the title.

        <div data-ng-repeat="p in vm.photoSetData.photo">
          <a ng-href="#/photos/{{vm.photoSetId}}/{{p.id}}">
            <img ng-src="{{p.url_s}}"></img>
          </a>
          <h4>{{p.title}}</h4>
        </div>

    Now, here's where the problem is occurring. When I upload the code to my public website on my hosting provider, everything works just fine. Both pages display their respective photos. However, when I attempt to run the site on my local system, either in MAMP or NodeJS (using http-server), the second page gives me an error for each image:

        Error: [$interpolate:interr] Can't interpolate: {{p.url_s}}
        Error: [$sce:insecurl] Blocked loading resource from url not allowed by $sceDelegate policy. URL: https://farm1.staticflickr.com/37/82749767_e82ff60ce3_m.jpg
        http://errors.angularjs.org/1.2.9/$sce/insecurl?p0=https%3A%2F%2Ffarm1.staticflickr.com%2F37%2F82749767_e82ff60ce3_m.jpg
        http://errors.angularjs.org/1.2.9/$interpolate/interr?p0=%7B%7Bp.url_s%7D%7D&p1=Error%3A%20%5B%24sce%3Ainsecurl%5D%20Blocked%20loading%20resource%20from%20url%20not%20allowed%20by%20%24sceDelegate%20policy.%20%20URL%3A%20https%3A%2F%2Ffarm1.staticflickr.com%2F37%2F82749767_e82ff60ce3_m.jpg%0Ahttp%3A%2F%2Ferrors.angularjs.org%2F1.2.9%2F%24sce%2Finsecurl%3Fp0%3Dhttps%253A%252F%252Ffarm1.staticflickr.com%252F37%252F82749767_e82ff60ce3_m.jpg
        minErr/<@http://localhost/scripts/angular.js:78
        $interpolate/fn@http://localhost/scripts/angular.js:8254
        $RootScopeProvider/this.$get</Scope.prototype.$digest@http://localhost/scripts/angular.js:11800
        $RootScopeProvider/this.$get</Scope.prototype.$apply@http://localhost/scripts/angular.js:12061
        done@http://localhost/scripts/angular.js:7843
        completeRequest@http://localhost/scripts/angular.js:8026
        createHttpBackend/</jsonpDone<@http://localhost/scripts/angular.js:7942
        jsonpReq/doneWrapper@http://localhost/scripts/angular.js:8039
        jsonpReq/script.onerror@http://localhost/scripts/angular.js:8053

    The API call to Flickr is successful and returns the correct data. In fact, the image title does display! I've tested it with Firefox, Safari and Chrome...all three browsers fail. I cannot find any explanation as to why it would work remotely but fail locally. Also, the images show up on the first page, but not on the second, even though one of the images on the second page is the same image URL as on the first page. Even going directly to the second page, bypassing the first page, still fails. Any ideas on how to fix this? It would be nice to test locally without having to upload to the server each time I make a change.

    Update: I have shut off the $sce security to see if that was causing the issue. Although it resulted in turning the error off, the files still don't load on the local test server. I have used the developer tools' network monitor and it doesn't even show an attempt to retrieve the files. AngularJS appears to shut down the retrieval, although the correct path shows up in the DOM.
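    A possible avenue for anyone hitting the same error: the [$sce:insecurl] message indicates the Flickr URL is being rejected by the $sceDelegate resource URL policy, which in AngularJS 1.2 defaults to trusting 'self' (same-origin) only. Below is a minimal sketch of whitelisting the Flickr CDN in a config block; the module name 'app' and the farm wildcard pattern are assumptions, not taken from the original post, and since the asker notes the images still fail locally even with $sce disabled, this may only address the first of the two symptoms.

        // app.js - hypothetical config block; adjust the module name to your own
        angular.module('app').config(['$sceDelegateProvider', function ($sceDelegateProvider) {
          $sceDelegateProvider.resourceUrlWhitelist([
            'self',                              // keep same-origin resources allowed
            'https://farm*.staticflickr.com/**'  // trust images served from Flickr's CDN
          ]);
        }]);

    Whether this also explains why the images load on the remote host but not locally would still need to be verified against the Angular version and $sce settings in each environment.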

    Read the article

  • How to add additional disks to a Windows 2008 KVM based Guest?

    - by taazaa
    I have a Win 2008 KVM based guest VM running on an Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as the "D:\" drive on the guest. I first created a raw image using:

        qemu-img create -f raw ~/vmdisk2.img 50G

    Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the XML file of the VM directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can clone it (hopefully) and then attach necessary storage based on the application at hand.

    Update: The XML of the VM before adding the second drive:

        <domain type='kvm'>
          <name>win08e-vm1</name>
          <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/win08e-vm1.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/home/taazaa/iso/Win08ER264.iso'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:7f:a7:ae'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
            <video>
              <model type='vga' vram='9216' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Thanks!
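    One commonly suggested approach for this kind of setup (a sketch, not a verified answer for this exact host; the image path assumes vmdisk2.img was moved to the libvirt images directory) is to add a second <disk> stanza to the domain XML on the free IDE slot hdb, then re-read the definition and fully power-cycle the guest:

        <!-- add inside <devices>, next to the existing disk -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/var/lib/libvirt/images/vmdisk2.img'/>
          <target dev='hdb' bus='ide'/>
        </disk>

        # edit the persistent definition so the change survives restarts,
        # then do a full shutdown/start (IDE disks cannot be hot-plugged)
        virsh edit win08e-vm1
        virsh shutdown win08e-vm1
        virsh start win08e-vm1

    Once the guest sees the new disk, it still has to be brought online, initialized and formatted in Windows Disk Management before it appears as D:\.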

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it.

    Note: These instructions will change slightly as the bits get updated for the eventual Beta release. I will update this post as soon as I get a chance to run this setup on a more recent build.

    I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images:

        DC - Domain Controller
        ExchangeUM - Exchange Server 2010
        CS-SE - Microsoft Communications Server 2010 Standard Edition
        TS - Development machine

    I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server.

    I'm making a couple of assumptions here:

        You have an existing CS 2010 site and cluster configured (we'll look at this in a future post)
        You're starting with a fully patched Windows Server 2008 R2 machine
        The machine is joined to your domain

    This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment.

    Installing the UCMA 3.0 SDK

    Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK.

    Install Communications Server Core Components

    UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer's point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning; all you need in your app.config is your ApplicationId, e.g.: urn:application:MyApplication.

    How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints.

    In this step, we'll run Bootstrapper.exe to install the CS Core components. This checks for the following components and installs them if they are not already present:

        VcRedist
        Sqlexpress
        Sqlnativeclient
        Sqlbackcompat
        Ucmaredist
        OcsCore.msi

    Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command:

        Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache"

    Create a New Trusted Application Pool

    The next step is to create a new trusted application pool for the new server.
    Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command:

        New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name>

    Verify that the new server was added to the CS topology by running the following PowerShell command:

        (Get-CsTopology -AsXml).ToString() > Topology.xml

    This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of.

        <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true">
          <ClusterId SiteId="UcMarketing2" Number="5" />
          <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com">
            <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" />
          </Machine>
        </Cluster>

    Configure CS Management Store Replication

    At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server.

    From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server:

        Enable-CSReplica

    The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology:

        Get-CSManagementStoreReplicationStatus

    You can see in the screenshot below that the UpToDate property of the new server is still False. Run the following PowerShell command to force the replication to run:

        Invoke-CSManagementStoreReplication

    Run Get-CSManagementStoreReplicationStatus again to verify that the new server is now up to date.

    Request and Set a New Certificate

    The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate:

        Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority>

    Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate.
        <?xml version="1.0" encoding="utf-8"?>
        <Action Name="Request-CsCertificate" Time="20100512T212258">
          <Action Name="Request-CsCertificate" Time="20100512T212258">
            <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info>
            <Action Time="20100512T212258">
              <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info>
              <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info>
              <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info>
              <Info Time="20100512T212259">The certificate was issued.</Info>
              <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info>
            <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259">
              <Info Time="20100512T212259">0 updates</Info>
              <Complete Time="20100512T212259" />
            </Action>
            <Info Title="command status" Time="20100512T212259">Command has completed</Info>
          </Action>
        </Action>

    Run the following PowerShell command to set the certificate:

        Set-CsCertificate -Type Default -Thumbprint <Thumbprint>

    Wrapping Up

    You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell. We'll take a look at how to do that in another post.
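    As a rough preview of that provisioning step (a sketch only - the application id, port and SIP address below are hypothetical, and the cmdlet parameters may differ slightly between the beta and later builds):

        # Register a trusted application on the new pool and give it an endpoint
        New-CsTrustedApplication -ApplicationId MyUcmaApp -TrustedApplicationPoolFqdn appsrv.fabrikam.com -Port 6001
        New-CsTrustedApplicationEndpoint -ApplicationId MyUcmaApp -TrustedApplicationPoolFqdn appsrv.fabrikam.com -SipAddress sip:myucmaapp@fabrikam.com
        Enable-CsTopology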

    Read the article

  • Meet the New Windows Azure

    - by Leniel Macaferi
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command Line Tools

    Today's release comes with a new Windows Azure portal that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works across all browsers, and offers a bunch of great new features - including built-in support for VMs (virtual machines), Web Sites, Storage, and monitoring of Cloud Services.

    The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically against this web API. We are also releasing command-line tools today (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering a downloadable set of tools for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the source code for these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines (VMs)

    Windows Azure now supports the ability to deploy and run durable VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or alternatively you can upload and run your own custom VHD images. Virtual machines are durable (meaning that anything you install inside them persists across reboots) and you can run any operating system in them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance, you can easily use Terminal Server or SSH to connect to it and configure and customize the virtual machine however you want (and optionally capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run pretty much any workload on the Windows Azure platform.

    The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also makes it easy to attach multiple disks to VMs (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which will cause Windows Azure to continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers away from your primary data center.

    We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), so you can easily migrate existing workloads you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it up locally - no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds. The new Windows Azure Portal provides built-in administration support for web sites, including the ability to monitor and track resource utilization in real time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing updates to the Visual Studio and WebMatrix tools that enable developers to easily install ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of the site deployment - as well as the ability to incrementally update the database schema with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you deploy using TFS or Git, the deployments tab tracks the deployments you make and lets you select an older (or newer) deployment so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

    Windows Azure now lets you deploy up to 10 web sites into a free, shared multi-tenant hosting environment (where a site you deploy is one of several sites running on a shared set of server resources). This gives you an easy way to get started on projects at no cost. You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer inside a virtual machine, and you can elastically scale the amount of resources your sites use - for example, increasing your reserved instance capacity as your traffic grows. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis - which allows you to scale your resources up and down to meet only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are renaming this capability to "cloud services". We are also enabling a bunch of new features with them.

    Distributed Caching

    One of the really cool new capabilities being enabled with cloud services is a new distributed cache capability that lets you use and configure a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications, and it has no throttling limits. The cache can dynamically and elastically grow and shrink (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, this new cache capability can also now support the Memcached protocol - which allows you to point code written against Memcached at the distributed cache (no code changes required).

    The new distributed cache can be configured to run in one of two ways:

    1) Using a co-located cache approach. With this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored on that role or another one. The big benefit of the "co-located" cache option is that it is free (you don't have to pay anything to enable it) and it lets you use what might otherwise be unused memory within your application's VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.

    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure in particular has a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems - so developers can build Windows Azure applications using any operating system for development.

    Much, Much More

    The summary above is just a short list of some of the improvements shipping in either preview or final form today - there is a lot more included in today's release. Among these improvements are new Virtual Private Networking capabilities, a new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, a significant upgrade of the storage and network hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I'm giving at 1pm PDT (5pm Brasília time) today, June 7th, where I will walk through all of the new features. We will be opening up the new features I referred to above for public use a few hours after the presentation ends. We are really excited to see the great applications you will build with these new features.

    Hope this helps, - Scott

    Text translated from the original post by Leniel Macaferi.
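    As a concrete illustration of the Git publishing workflow described above (a sketch; the remote name "azure" and the branch name are assumptions, and the repository URL is the one shown on the web site's dashboard after selecting "Set up Git publishing"):

        git init
        git add -A
        git commit -m "initial deployment"
        git remote add azure <Git URL from the site's dashboard>
        git push azure master   # pushing triggers an automatic deployment

    After the first push, subsequent pushes show up in the deployments tab, which is what makes the rollback-to-an-earlier-deployment workflow possible.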

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now. But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them. This made me think that writing up a few articles discussing the different features would be a good idea. Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007. Those documents are very much worth reading - especially the Beginners Guide which, although based on LDoms 1.0, is still a good place to begin. However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today. In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).

    So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms. LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware. This virtual hardware is then used to create a number of guest systems which each behave very similar to a system running on bare metal: each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it. Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set as well as the overall CPU architecture to fulfill its function.

    The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS. For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket. To the OS, this looks like 64 CPUs. The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do. The hypervisor only assigns CPUs and then steps aside. It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests.

    Likewise, memory is assigned exclusively to individual guests. Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory. Again, the hypervisor is not involved in the actual memory access; it only maintains isolation between guests.

    During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain. Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources. If you were running the system un-virtualized, this would be what you'd be working with. To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory. This frees up most of the available CPU and memory resources for guest domains.

    IO is a little more complex, but very straightforward. When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services. In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later. For now, let's have a short look at the initial way IO was virtualized in LDoms.

    For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch. You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction. These IO services connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains. For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest. That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device. For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers.

    The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor. It uses so called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services. These LDCs work very similar to high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.

    To see all this in action, let's now look at a first example. I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.

        root@sun # ldm list
        NAME     STATE   FLAGS   CONS   VCPU  MEMORY   UTIL  UPTIME
        primary  active  -n-c--  UART   512   261632M  0.3%  2d 13h 58m
        root@sun # ldm add-vconscon port-range=5000-5100 \
                   primary-console primary
        root@sun # svcadm enable vntsd
        root@sun # svcs vntsd
        STATE    STIME    FMRI
        online   9:53:21  svc:/ldoms/vntsd:default
        root@sun # ldm set-vcpu 16 primary
        root@sun # ldm set-mau 1 primary
        root@sun # ldm start-reconf primary
        root@sun # ldm set-memory 7680m primary
        root@sun # ldm add-config initial
        root@sun # shutdown -y -g0 -i6

    So what have I done:

    I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service. The vntsd will later provide console connections to guest systems, very much like serial NTS's do in the physical world.

    Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems. I also assigned one MAU to this domain. A MAU is a crypto unit in the T3 CPU. These need to be explicitly assigned to domains, just like CPU or memory. (This is no longer the case with T4 systems, where crypto is always available everywhere.)

    Before I reassigned the memory, I started what's called a "delayed reconfiguration" session. That avoids actually doing the change right away, which would take a considerable amount of time in this case. Instead, I'll need to reboot once I'm all done. I've assigned 7680MB of RAM to the primary. That's 8GB less the 512MB which the hypervisor uses for its own private purposes. You can, depending on your needs, work with less. I'll spend a dedicated article on sizing, discussing the pros and cons in detail.

    Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box. (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)

    Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real world storage and network.

        root@sun # ldm add-vds primary-vds
        root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary

    You are free to choose whatever names you like for the virtual disk service and the virtual switch. I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation. For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.

    This already concludes the configuration of the control domain. We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world. The system is now ready to create guests, which I'll describe in the next section (a rough sketch follows after the links below).

    For further reading, here are some recommended links:

        The LDoms 2.2 Admin Guide
        The "Beginners Guide to LDoms"
        The LDoms Information Center on MOS
        LDoms on OTN
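    As an addendum - a rough sketch of the guest-creation step the article defers to its next part, using the vds and vswitch defined above (the guest name, CPU/memory sizes and the backing LUN are assumptions, not taken from the article):

        # Hypothetical guest "mars"
        root@sun # ldm add-domain mars
        root@sun # ldm set-vcpu 16 mars
        root@sun # ldm set-memory 16g mars
        root@sun # ldm add-vnet vnet0 switch-primary mars
        root@sun # ldm add-vdsdev /dev/dsk/c0t60..d0s2 mars-boot@primary-vds
        root@sun # ldm add-vdisk bootdisk mars-boot@primary-vds mars
        root@sun # ldm bind-domain mars
        root@sun # ldm start-domain mars
        root@sun # telnet localhost 5000     # guest console, via the vntsd port range defined above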

    Read the article

  • How I Work: A Cloud Developer's Workstation

    - by BuckWoody
    I've written here a little about how I work during the day, including things like using a stand-up desk (still doing that, by the way). Inspired by a Twitter conversation yesterday, I thought I might explain how I set up my computing environment.

    First, a couple of important points. I work in Cloud Computing, specifically (but not limited to) Windows Azure. Windows Azure has features to run a Virtual Machine (IaaS), run code without having to control a Virtual Machine (PaaS) and use databases, video streaming, Hadoop and more (a kind of SaaS for tech pros). As such, my designs run the gamut of on-premises, VM's in the Cloud, and software that I write for a platform. I focus on data primarily, meaning that I design a lot of systems that use an RDBMS (like SQL Server or Windows Azure Databases) or a NoSQL approach (MongoDB on Azure or large-scale Key-Value Pairs in Table storage) and even Hadoop and R, and also Cloud Numerics in F#. All that being said, those things inform my choices below.

    Hardware

    I have a Lenovo X220 tablet/laptop which I really like a great deal - it's a light, tough, extremely fast system. When I travel, that's the system I take. It has 8GB of RAM, and an SSD drive. I sometimes use that to develop or work at a client's site, on the road, or in the living room when I'm not in my home office.

    My main system is a Gateway DX430017 - I've maxed it out on RAM, and I have two 1TB drives in it. It's not only my workstation for work; I leave it on all the time and it streams our videos, music and books. I have about 3400 e-books, and I've just started using Calibre to stream the library. I run Windows 8 on it so I can set up Hyper-V images, since Windows Azure allows me to move regular Hyper-V disks back and forth to the Cloud. That's where all my "servers" are, when I have to use an IaaS approach.

    The reason I use a desktop-style system rather than a laptop-only approach is that a good part of my job is setting up architectures to solve really big, complex problems. That means I have to simulate entire networks on-premises, along with the Hybrid Cloud approach I use a lot. I need a lot of disk space and memory for that, and I use two huge monitors on my stand-up desk. I could probably use 10 monitors if I had the room for them. Also, since it's our home system as well, I leave it on all the time and it doesn't travel.

    Software

    For the software for my systems, it's important to keep in mind that I not only write code, but I design databases, teach, present, and create Linux and other environments.

    Windows 8 - While the jury is out for me on the new interface, the context-sensitive search, integrated everything, and speed is just hands-down the right choice. I've evaluated a server OS, Linux, even an Apple, but I just am not as efficient on those as I am with Windows 8.

    Visual Studio Ultimate - I develop primarily in .NET (C# and F# mostly) and I use the Team Foundation Server in the cloud, and I'm asked to do everything from UI to Services, so I need everything.

    Windows Azure SDK, Windows Azure Training Kit - I need the first to set up my Azure PaaS coding, and the second has all the info I need for PaaS, IaaS and SaaS. This is primarily how I get paid. :)

    SQL Server Developer Edition - While I might install Oracle, MySQL and Postgres on my VM's, the "outside" environment is SQL Server for an RDBMS. I install the Developer Edition because it has the same features as Enterprise Edition, and comes with all the client tools and documentation.

    Microsoft Office - Even if I didn't work here, this is what I would use. I've just grown too accustomed to doing business this way to change, so my advice is always "use what works", and this does. The parts I use are:

    OneNote (and a Math Add-in) - I do almost everything - and I mean everything - in OneNote. I can code, do high-end math, present, design, collaborate and more. All my notebooks are on my SkyDrive. I can use them from any system, anywhere. If you take the time to learn this program, you'll be hooked.

    Excel with PowerPivot - Don't make that face. Excel is the world's database, and every Data Scientist I know - even the ones where I teach at the University of Washington - knows it, uses it, and loves it.

    Outlook - Primary communications, CRM and contact tool. I have all of my social media hooked up to it, so when I get an e-mail from you, I see everything, see all the history we've had on e-mail, find you on a map and more.

    Lync - I was fine with LiveMeeting, although it has its moments. For me, the Lync client is tres-awesome. I use this throughout my day, present on it, stay in contact with colleagues and the folks on the dev team (who wish I didn't have it) and more.

    PowerPoint - Once again, don't make that face. Whenever I see someone complaining about PowerPoint, I have 100% of the time found they don't know how to use it. If you suck at presenting or creating content, don't blame PowerPoint. Works great on my machine. :)

    ZoomIt - Magnifier - On Windows 7 (and 8 as well) there's a built-in magnifier, but I install ZoomIt out of habit. It enlarges the screen. If you don't use one of these tools (or their equivalent on some other OS) then you're presenting/teaching wrong, and you should stop presenting/teaching until you get them and learn how to show people what you can see on your tiny, tiny monitor. :)

    Cygwin - Unix for Windows. OK, that's not true, but it's mostly that. I grew up on mainframes and Unix (IBM and HP, thank you) and I can't imagine life without sed, awk, grep, vim, and bash. I also tend to take a lot of the "Science" and "Development" and "Database" packages in it as well.

    PuTTY - Speaking of Unix, when I need to connect to my Linux VM's in Windows Azure, I want to do it securely. This is the tool for that.

    Notepad++ - Somewhere between torturing myself in vim and luxuriating in OneNote is Notepad++. Everyone has a favorite text editor; this one is mine. Too many features to name, and it's free.

    Browsers - I install Chrome, Firefox and of course IE. I know it's in vogue to rant on IE, but I tend to think for myself a great deal, and I've had few (none) problems with it. The others I have for the haterz that make sites that won't run in IE.

    Visio - I've used a lot of design packages, but none have the extreme meta-data edit capabilities of Visio. I don't use this all the time - it can be rather heavy, but what it does it does really well. I also present this way when I'm not using PowerPoint. Yup, I just bring up Visio and diagram away as I'm chatting with clients. Depending on what we're covering, this can be the right tool for that.

    Tweetdeck - The AIR one, not that new disaster they came out with. I live on social media, since you, dear readers, are my cube-mates. When I get tired of you all, I close Tweetdeck. When I need help or someone needs help from me, or if I want to see a picture of a cat while I'm coding, I bring it up. It's up most all day and night.

    Windows Media Player - I listen to Trance or Classical when I code, and I find music managers overbearing and extra. I just use what comes in the box, and it works great for me.

    R - F# and Cloud Numerics now allow me to load in R libraries (yay!) and I use this for statistical work on big data loads.

    Microsoft Math - One of the most amazing, free, rich, awesome calculators out there. I get the 64-bit version for quick math conversions, plots and formula-checks.

    Python - I know, right? Who knew that the scientific community loved Python so much. But they do. I use 2.7; not as much runs with 3+. I also use IronPython in Visual Studio, or I edit in Notepad++.

    CamStudio recorder / Windows PSR - In much of my training, and all of my teaching at the UW, I need to show a process on a screen. CamStudio records screen and voice, and it's free. If I need to make static training, I use the Windows PSR tool that's built right in. It's ostensibly for problem duplication, but I use it to record for training.

    OK - your turn. Post a link to your blog entry below, and tell me how you set your system up.

    Read the article

  • Blogging: MacJournal & Windows Live Writer

    - by Jeff Julian
    One thing I have learned about using a Mac is that Apple does not produce very many free applications. The ones they do are typically not full featured, and to get the full feature set you need to buy their upgraded version. For example, when it comes to photo editing and cataloging, iPhoto is not a solution for large files or RAW processing; you need Aperture, which is a couple hundred dollars. I am not complaining, because I like it when an application has a product team who generates revenue with it, because the chance of them being around longer seems to be higher.

    What is my point in all of this? Apple does not produce a product for blogging/journaling like Microsoft does with Windows Live Writer. I love Windows Live Writer. If you are on a Windows box, it is a required tool in your toolbox if you publish to a blog. The cleanness of the interface, integration with most blog APIs and ability to Save Local or Publish as a Draft make capturing your thoughts for publishing now or later a very easy task. My hope is that Microsoft will port it to the Mac, but I don't believe that will ever happen as it is not a revenue generating product and Microsoft doesn't often port to the Mac besides Remote Desktop Connection and MSN Messenger.

    For my configuration I used to use only Boot Camp on the two MacBook Pros I have owned in the past three years, because I'm a PC, but after four different rebuilds (not typically due to Windows, but Boot Camp or Parallels) I decided to move off the Boot Camp platform and to VMware Fusion. This is a completely separate blog post that I should spec out in MacJournal, but I now always boot into the Mac OS and use Fusion for my AJI Software VM or my clients' VMs. It just seems to work better for me, and I have a very nice way to back up my Windows environments with VMware.

    Needless to say, there was a need in my new laptop configuration for a blogging tool that worked natively on a Mac. I don't like to power up my machine to write a document or work on an image and then need to boot up a VM just so I can use Windows. Some would say: why not just use a Windows laptop and put the MBP on eBay? It is just a preference, and right now I like the Mac OS for day to day work.

    So in comes MacJournal, part of the current MacHeist package for $19.95 (MacJournal is normally $39.95). This product is definitely not WLW, but WLW is missing some features I like in MacJournal. I hope the price point comes down on MacJournal, because I could see paying $19.95 for it, but it is always hard for me to buy a piece of software for $39.95 when I can use something else. But I am a cheapskate when it comes to software packages. I suggest that if you are using a Mac you drop what you are doing and pick up the MacHeist bundle today before it is over, but if you are reading this later, then download the trial and see if MacJournal is a solution for you. If you have any other suggestions that are as nice or cheaper, please comment.

    Product Links:
        MacJournal by Mariner Software - $39.95 (part of the MacHeist bundle for $19.95 with only one day left)
        Windows Live Writer by Microsoft

    This post was created using MacJournal.

    [Update: The joys of formatting. Make sure if you are a Geekswithblogs.net member that you use this configuration to set up the Metablog formatting of paragraphs correctly]

    Read the article

  • Systems Solutions at COLLABORATE12

    - by ferhat
    Want to connect with fellow Oracle users and learn more about how to maximize your Oracle software environments with Oracle Systems? Pack your bags for Las Vegas! COLLABORATE 12 is right around the corner! The COLLABORATE 12 conference will be held at the Mandalay Bay in Las Vegas, NV, 22-26 April, 2012. This is an event designed and delivered by users just like you, with sessions, interactive panel discussions and hands-on learning opportunities packed with first-hand experiences, case studies and practical "how-to" content.

    This year's event includes a number of educational sessions and demos for users interested in learning from the experts how to use Oracle Optimized Solutions to get the most out of their Oracle Technology and Application software. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.

    Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Please come by our Exhibition Booth #1273 to see the demos and meet 1-1 with the experts behind a number of Oracle Optimized Solutions, including those for JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft HCM, Oracle WebCenter, and Oracle Database.

    Exhibitor Showcase Booth #1273
        Monday, April 23, 6:00 pm - 8:00 pm - Welcome Reception in the Exhibitor Showcase
        Tuesday, April 24, 10:15 am - 4:00 pm - Exhibitor Showcase Open
        Tuesday, April 24, 1:00 pm - 2:00 pm - Dedicated Exhibitor Showcase Time
        Tuesday, April 24, 5:30 pm - 7:00 pm - Exhibitor Showcase Happy Hour
        Wednesday, April 25, 10:30 am - 3:00 pm - Exhibitor Showcase Open
        Wednesday, April 25, 2:15 pm - 3:00 pm - Afternoon Break in Exhibitor Showcase

    There are also a number of deep-dive, educational sessions covering deployment best practices using Oracle's engineered systems and best-in-class hardware, operating system and virtualization technologies.

    Education Sessions
        Monday, April 23, 9:45 am - 10:45 am - Architecting and Implementing Backup and Recovery Solutions - Surf E
        Tuesday, April 24, 2:00 pm - 3:00 pm - Oracle's High Performance Systems for JD Edwards EnterpriseOne - Mandalay Bay GH
        Tuesday, April 24, 4:30 pm - 5:30 pm - Virtualization Boot Camp: What's New with Oracle VM Server for x86 - Mandalay Bay C
        Tuesday, April 24, 9:30 am - 10:30 am - Oracle on Oracle VM - Expert Panel - Mandalay Bay L
        Wednesday, April 25, 9:30 am - 10:30 am - Cloud Computing Directions: Part II - Understanding Oracle's Cloud Directions - South Seas E

    And don't forget the keynotes and software roadmap sessions!

    Keynotes and Roadmap Sessions
        Sunday, April 22, 3:20 pm - 4:20 pm - Oracle's Cloud Computing Strategy - Breakers B
        Monday, April 23, 11:00 am - 12:00 pm - JD Edwards - Vision, Promises and Execution: IT'S THE WAY WE ROLL and Why it Matters! - Mandalay Bay A
        Monday, April 23, 11:00 am - 12:00 pm - PeopleSoft Executive Update and Roadmap - Mandalay Bay J
        Monday, April 23, 1:15 pm - 2:15 pm - Oracle Database - Engineered for Innovation - Mandalay Bay L
        Monday, April 23, 2:30 pm - 3:30 pm - Oracle E-Business Suite Applications Strategy and General Manager Update - Mandalay Bay D
        Tuesday, April 24, 9:15 am - 10:15 am - IT at Oracle: The Art of IT Transformation to Enable Business Growth - Mandalay Bay Ballroom H

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible.

    Use Case

    I have a set of DB abstraction layer classes that dynamically compile SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code, you simply change a setting that swaps one DAL class out for another...and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution.

    Issues

    The strategy that I had used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a different and more elegant way to load the correct DAL class.

    Update to clarify the issue

    The problem basically boils down to inconsistencies between the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders: MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself: because of the inconsistencies between the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes because the tests and docs rely on autoloading. Just loading the appropriate DBObject into memory using a factory method won't work because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a "class is already defined" error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky, as there may be future instances where something similar would need to be done.

    Solutions?

    Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged.

    The preferred constraints for the solution are:

        The DAL class can be autoloaded.
        The DAL classes don't share the exact same namespace, but do share the same class name. As an example, I would prefer to use classes named DbObject in namespaces like Vm\Db\MySql and Vm\Db\Oracle.
        Table classes don't have to be rewritten with a change in DB type.
        The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types.
        Ideally, the setting check should occur only once per page load, but I'm flexible on that.
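    One pattern that appears to satisfy these constraints (a sketch, not an accepted answer; the file paths and the Products table class are hypothetical) is to keep one autoloadable DbObject per driver namespace and bind the chosen driver to a neutral alias with class_alias() at bootstrap, so the table classes always extend the same name:

        <?php
        // src/Vm/Db/MySql/DbObject.php - one real, autoloadable class per driver namespace
        namespace Vm\Db\MySql;

        class DbObject
        {
            // MySQL-specific SQL compilation lives here
        }

        <?php
        // bootstrap.php (hypothetical) - runs once per page load, driven by the single setting
        $driver = 'MySql'; // the one setting: 'MySql', 'Oracle', ...
        class_alias('Vm\\Db\\' . $driver . '\\DbObject', 'Vm\\Db\\DbObject');

        <?php
        // src/Vm/Db/Tables/Products.php - table classes never change between DB types
        namespace Vm\Db\Tables;

        class Products extends \Vm\Db\DbObject
        {
            // table-specific methods
        }

    Because each driver class keeps a distinct real name, unit tests and doc generators can autoload Vm\Db\MySql\DbObject and Vm\Db\Oracle\DbObject side by side without a "class already defined" error; the only per-request restriction is that the alias can be bound to one driver at a time.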

    Read the article

  • "domain crashed" when creating new Xen instance

    - by user47650
    I have downloaded a Xen virtual machine image from the AppScale project, and I am trying to start it up. However, once I run the command

        xm create -c -f xen.conf

    the instance immediately crashes and provides no console output. It does produce the logs that I have posted below, but this is the error:

        [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10.

    I have managed to mount the root.img file locally and verify that it is actually an ext3 file system. I am running Xen 3.0.3, which is a stock RPM from the CentOS 5 repos:

        # rpm -qa | grep -i xen
        xen-libs-3.0.3-105.el5_5.5
        xen-3.0.3-105.el5_5.5
        xen-libs-3.0.3-105.el5_5.5
        kernel-xen-2.6.18-194.32.1.el5

    Any suggestions on how to proceed with troubleshooting? (I am a newbie to Xen.) So far I have enabled console logging, but the log file is empty.

        ==> domain-builder-ng.log <==
        xc_dom_allocate: cmdline=" ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console", features=""
        xc_dom_kernel_file: filename="/boot/vmlinuz-2.6.27-7-server"
        xc_dom_malloc_filemap : 2284 kB
        xc_dom_ramdisk_file: filename="/boot/initrd.img-2.6.27-7-server"
        xc_dom_malloc_filemap : 9005 kB
        xc_dom_boot_xen_init: ver 3.1, caps xen-3.0-x86_64 xen-3.0-x86_32p
        xc_dom_parse_image: called
        xc_dom_find_loader: trying ELF-generic loader ... failed
        xc_dom_find_loader: trying Linux bzImage loader ...
        xc_dom_malloc : 9875 kB
        xc_dom_do_gunzip: unzip ok, 0x234bb2 -> 0x9a4de0 OK
        elf_parse_binary: phdr: paddr=0x200000 memsz=0x447000
        elf_parse_binary: phdr: paddr=0x647000 memsz=0xab888
        elf_parse_binary: phdr: paddr=0x6f3000 memsz=0x908
        elf_parse_binary: phdr: paddr=0x6f4000 memsz=0x1c2f9c
        elf_parse_binary: memory: 0x200000 -> 0x8b6f9c
        elf_xen_parse_note: GUEST_OS = "linux"
        elf_xen_parse_note: GUEST_VERSION = "2.6"
        elf_xen_parse_note: XEN_VERSION = "xen-3.0"
        elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
        elf_xen_parse_note: ENTRY = 0xffffffff8071e200
        elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff80209000
        elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb"
        elf_xen_parse_note: PAE_MODE = "yes"
        elf_xen_parse_note: LOADER = "generic"
        elf_xen_parse_note: unknown xen elf note (0xd)
        elf_xen_parse_note: SUSPEND_CANCEL = 0x1
        elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
        elf_xen_parse_note: PADDR_OFFSET = 0x0
        elf_xen_addr_calc_check: addresses:
            virt_base = 0xffffffff80000000
            elf_paddr_offset = 0x0
            virt_offset = 0xffffffff80000000
            virt_kstart = 0xffffffff80200000
            virt_kend = 0xffffffff808b6f9c
            virt_entry = 0xffffffff8071e200
        xc_dom_parse_elf_kernel: xen-3.0-x86_64: 0xffffffff80200000 -> 0xffffffff808b6f9c
        xc_dom_mem_init: mem 1024 MB, pages 0x40000 pages, 4k each
        xc_dom_mem_init: 0x40000 pages
        xc_dom_boot_mem_init: called
        x86_compat: guest xen-3.0-x86_64, address size 64
        xc_dom_malloc : 2048 kB

        ==> xend.log <==
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping...
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping...
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping...
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping...
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping...
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping...
        [2011-03-01 12:34:01 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping...
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] INFO (XendDomainInfo:957) Dev 0 still active, looping...
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 9
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ...
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:207) XendDomainInfo.create(['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]])
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:329) parseConfig: config is ['domain', ['domid', 9], ['uuid', 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0'], ['vcpus', 1], ['vcpu_avail', 1], ['cpu_cap', 0], ['cpu_weight', 256], ['memory', 1024], ['shadow_memory', 0], ['maxmem', 1024], ['features', ''], ['name', 'appscale-1.4b'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']]], ['cpus', []], ['device', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]], ['device', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']]], ['state', '----c-'], ['shutdown_reason', 'crash'], ['cpu_time', 0.000339131], ['online_vcpus', 1], ['up_time', '0.952092885971'], ['start_time', '1299011639.92'], ['store_mfn', 1169289], ['console_mfn', 1169288]]
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:446) parseConfig: result is {'features': '', 'image': ['linux', ['kernel', '/boot/vmlinuz-2.6.27-7-server'], ['ramdisk', '/boot/initrd.img-2.6.27-7-server'], ['ip', ':1.2.3.4::::eth0:dhcp'], ['root', '/dev/sda1 ro'], ['args', 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console']], 'cpus': [], 'vcpu_avail': 1, 'backend': [], 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'cpu_weight': 256.0, 'memory': 1024, 'cpu_cap': 0, 'localtime': None, 'timer_mode': None, 'start_time': 1299011639.9200001, 'on_poweroff': 'destroy', 'on_crash': 'restart', 'device': [('vif', ['vif', ['backend', 0], ['script', 'vif-bridge'], ['mac', '00:16:3B:72:10:E4']]), ('vbd', ['vbd', ['backend', 0], ['dev', 'sda1:disk'], ['uname', 'file:/local/xen/domains/appscale1.4/root.img'], ['mode', 'w']])], 'bootloader': None, 'maxmem': 1024, 'shadow_memory': 0, 'name': 'appscale-1.4b', 'bootloader_args': None, 'vcpus': 1, 'cpu': None}
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1784) XendDomainInfo.construct: None
        [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034420 KiB free; need 4096; done.
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1953) XendDomainInfo.initDomain: 10 256.0
        [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1994) _initDomain:shadow_memory=0x0, maxmem=0x400, memory=0x400.
        [2011-03-01 12:34:02 xend 3580] DEBUG (balloon:145) Balloon: 3034412 KiB free; need 1048576; done.
        [2011-03-01 12:34:02 xend 3580] INFO (image:139) buildDomain os=linux dom=10 vcpus=1
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:208) domid = 10
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:209) memsize = 1024
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:210) image = /boot/vmlinuz-2.6.27-7-server
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:211) store_evtchn = 1
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:212) console_evtchn = 2
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:213) cmdline = ip=:1.2.3.4::::eth0:dhcp root=/dev/sda1 ro xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:214) ramdisk = /boot/initrd.img-2.6.27-7-server
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:215) vcpus = 1
        [2011-03-01 12:34:02 xend 3580] DEBUG (image:216) features =

        ==> domain-builder-ng.log <==
        xc_dom_build_image: called
        xc_dom_alloc_segment: kernel : 0xffffffff80200000 -> 0xffffffff808b7000 (pfn 0x200 + 0x6b7 pages)
        xc_dom_pfn_to_ptr: domU mapping: pfn 0x200+0x6b7 at 0x2aaaab5f6000
        elf_load_binary: phdr 0 at 0x0x2aaaab5f6000 -> 0x0x2aaaaba3d000
        elf_load_binary: phdr 1 at 0x0x2aaaaba3d000 -> 0x0x2aaaabae8888
        elf_load_binary: phdr 2 at 0x0x2aaaabae9000 -> 0x0x2aaaabae9908
        elf_load_binary: phdr 3 at 0x0x2aaaabaea000 -> 0x0x2aaaabb9a004
        xc_dom_alloc_segment: ramdisk : 0xffffffff808b7000 -> 0xffffffff82382000 (pfn 0x8b7 + 0x1acb pages)
        xc_dom_malloc : 160 kB
        xc_dom_pfn_to_ptr: domU mapping: pfn 0x8b7+0x1acb at 0x2aaab0000000
        xc_dom_do_gunzip: unzip ok, 0x8cb5e7 -> 0x1aca210
        xc_dom_alloc_segment: phys2mach : 0xffffffff82382000 -> 0xffffffff82582000 (pfn 0x2382 + 0x200 pages)
        xc_dom_pfn_to_ptr: domU mapping: pfn 0x2382+0x200 at 0x2aaab1acb000
        xc_dom_alloc_page : start info : 0xffffffff82582000 (pfn 0x2582)
        xc_dom_alloc_page : xenstore : 0xffffffff82583000 (pfn 0x2583)
        xc_dom_alloc_page : console : 0xffffffff82584000 (pfn 0x2584)
        nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
        nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
        nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
        nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffff827fffff, 20 table(s)
        xc_dom_alloc_segment: page tables : 0xffffffff82585000 -> 0xffffffff8259c000 (pfn 0x2585 + 0x17 pages)
        xc_dom_pfn_to_ptr: domU mapping: pfn 0x2585+0x17 at 0x2aaab1ccb000
        xc_dom_alloc_page : boot stack : 0xffffffff8259c000 (pfn 0x259c)
        xc_dom_build_image : virt_alloc_end : 0xffffffff8259d000
        xc_dom_build_image : virt_pgtab_end : 0xffffffff82800000
        xc_dom_boot_image: called
        arch_setup_bootearly: doing nothing
        xc_dom_compat_check: supported guest type: xen-3.0-x86_64 <=
matches xc_dom_compat_check: supported guest type: xen-3.0-x86_32p xc_dom_update_guest_p2m: dst 64bit, pages 0x40000 clear_page: pfn 0x2584, mfn 0x11d788 clear_page: pfn 0x2583, mfn 0x11d789 xc_dom_pfn_to_ptr: domU mapping: pfn 0x2582+0x1 at 0x2aaab1ce2000 start_info_x86_64: called setup_hypercall_page: vaddr=0xffffffff80209000 pfn=0x209 domain builder memory footprint allocated malloc : 12139 kB anon mmap : 0 bytes mapped file mmap : 11289 kB domU mmap : 35 MB arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xd6fe1 shared_info_x86_64: called vcpu_x86_64: called vcpu_x86_64: cr3: pfn 0x2585 mfn 0x11d787 launch_vm: called, ctxt=0x97b21f8 xc_dom_release: called ==> xend.log <== [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'mac': '00:16:3B:72:10:E4', 'handle': '0', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/10/0'} to /local/domain/10/device/vif/0. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'handle': '0', 'script': '/etc/xen/scripts/vif-bridge', 'state': '1', 'frontend': '/local/domain/10/device/vif/0', 'mac': '00:16:3B:72:10:E4', 'online': '1', 'frontend-id': '10'} to /local/domain/0/backend/vif/10/0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:634) Checking for duplicate for uname: /local/xen/domains/appscale1.4/root.img [file:/local/xen/domains/appscale1.4/root.img], dev: sda1:disk, mode: w [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1:disk: [Errno 2] No such file or directory: '/dev/sda1:disk' [2011-03-01 12:34:02 xend 3580] DEBUG (blkif:27) exception looking up device number for sda1: [Errno 2] No such file or directory: '/dev/sda1' [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:114) DevController: writing {'virtual-device': '2049', 'device-type': 'disk', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vbd/10/2049'} to /local/domain/10/device/vbd/2049. [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:116) DevController: writing {'domain': 'appscale-1.4b', 'frontend': '/local/domain/10/device/vbd/2049', 'format': 'raw', 'dev': 'sda1', 'state': '1', 'params': '/local/xen/domains/appscale1.4/root.img', 'mode': 'w', 'online': '1', 'frontend-id': '10', 'type': 'file'} to /local/domain/0/backend/vbd/10/2049. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:993) Storing VM details: {'shadow_memory': '0', 'uuid': 'd5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'on_reboot': 'restart', 'start_time': '1299011642.74', 'on_poweroff': 'destroy', 'name': 'appscale-1.4b', 'xend/restart_count': '0', 'vcpus': '1', 'vcpu_avail': '1', 'memory': '1024', 'on_crash': 'restart', 'image': "(linux (kernel /boot/vmlinuz-2.6.27-7-server) (ramdisk /boot/initrd.img-2.6.27-7-server) (ip :1.2.3.4::::eth0:dhcp) (root '/dev/sda1 ro') (args 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'))", 'maxmem': '1024'} [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1028) Storing domain details: {'console/ring-ref': '1169288', 'console/port': '2', 'name': 'appscale-1.4b', 'console/limit': '1048576', 'vm': '/vm/d5f22dd4-8dc2-f51f-84e9-eea7d71ea1d0', 'domid': '10', 'cpu/0/availability': 'online', 'memory/target': '1048576', 'store/ring-ref': '1169289', 'store/port': '1'} [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:158) Waiting for devices vif. 
[2011-03-01 12:34:02 xend 3580] DEBUG (DevController:164) Waiting for 0. [2011-03-01 12:34:02 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:1250) XendDomainInfo.handleShutdownWatch [2011-03-01 12:34:02 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vif/10/0/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices usb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:164) Waiting for 2049. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:509) hotplugStatusCallback /local/domain/0/backend/vbd/10/2049/hotplug-status. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:523) hotplugStatusCallback 1. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices irq. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vkbd. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vfb. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices pci. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices ioports. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices tap. [2011-03-01 12:34:03 xend 3580] DEBUG (DevController:158) Waiting for devices vtpm. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] WARNING (XendDomainInfo:1178) Domain has crashed: name=appscale-1.4b id=10. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] ERROR (XendDomainInfo:2654) VM appscale-1.4b restarting too fast (2.275545 seconds since the last restart). Refusing to restart to avoid loops. [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2189) XendDomainInfo.destroy: domid=10 ==> xen-hotplug.log <== Nothing to flush. ==> xend.log <== [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] INFO (XendDomainInfo:2330) Dev 2049 still active, looping... [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2114) UUID Created: True [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2115) Devices to release: [], domid = 10 [2011-03-01 12:34:03 xend.XendDomainInfo 3580] DEBUG (XendDomainInfo:2127) Releasing PVFB backend devices ... 
And this is the xen.conf file that I am using;
# cat xen.conf
# Configuration file for the Xen instance AppScale, created
# bn VMBuilder
kernel = '/boot/vmlinuz-2.6.27-7-server'
ramdisk = '/boot/initrd.img-2.6.27-7-server'
memory = 1024
vcpus = 1
root = '/dev/sda1 ro'
disk = [ 'file:/local/xen/domains/appscale1.4/root.img,sda1,w', ]
name = 'appscale-1.4b'
dhcp = 'dhcp'
vif = [ 'mac=00:16:3B:72:10:E4' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
extra = 'xencons=tty console=tty1 console=hvc0 debugger=y debug=y sync_console'
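A minimal sketch of commands that may help gather more data on a crash like this, assuming the stock CentOS 5 Xen log locations and the file names from the config above (the exact output will vary):

# Check the hypervisor's own message buffer right after the crash;
# paravirtual boot failures often show up here rather than on the guest console
xm dmesg | tail -50

# Watch xend's log while re-creating the domain with a console attached
tail -f /var/log/xen/xend.log &
xm create -c -f xen.conf

# Confirm the guest kernel really is the 64-bit image the domain builder log reports
file /boot/vmlinuz-2.6.27-7-server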

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .Net 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VM's on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense. The fix is sort of a hack, where I don't setup the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack, because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. 
In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod was either failing silently, and running again later, or it was getting to that code after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious because it renders the mobile/alternate view functionality very much broken.

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full kernel source, it is good enough for tracing it, and even for tweaking the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\ And the next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither of those applies to our case. So what should I do? Let's dig deeper into the coreos\nk folder; there are a couple of subfolders: CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:
1. Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\
2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
3. Make whatever modification you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\
4. build -c in %_WINCEROOT%\private\winceos\COREOS\nk\kernel
If everything went well so far, you should get a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\ Basically, you have just rebuilt your kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1" then "sysgen -p common nk nkprof" and "makeimg" if you can't wait the extra minutes for "blddemo clean -q". That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates SOURCES files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So I am going to introduce a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again, the steps:
1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
2. Use "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
3. Rename the SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c
6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths to your SOURCES file.
7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg" — voila! (The full command sequence is sketched below.)
Here are the macros you need to add for x86:
CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE
# Machine independent defines
CDEFINES=$(CDEFINES) -DDBGSUPPORT
_COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc
!IFDEF DP_SETTINGS
CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
!ENDIF
ASM_SAFESEH=1
CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE
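To put the SYSGEN_CAPTURE method together end to end, here is a rough sketch of the command sequence as run from a CE6 build window. The paths and the vm.c example simply follow the steps above, and I am assuming the sample makefile under NKNORMAL is literally named makefile; adjust both to the files you actually intend to change:

cd /d %_TARGETPLATROOT%\SRC
mkdir Kernel
cd Kernel
SYSGEN_CAPTURE -p common nk
ren SOURCES.KERN SOURCES
copy %_WINCEROOT%\private\winceos\COREOS\nk\kernel\NKNORMAL\makefile .
copy %_WINCEROOT%\private\winceos\COREOS\nk\kernel\vm.c .

Then edit SOURCES by hand (SOURCES=vm.c plus the macros listed above) and build:

set WINCEREL=1
build -c
makeimg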

    Read the article

  • Two small issues with Windows Phone 7 ApplicationBar buttons (and workaround)

    - by Laurent Bugnion
    When you work with the ApplicationBar in Windows Phone 7, you notice very fast that it is not quite a component like the others. For example, the ApplicationBarIconButton element is not a dependency object, which causes issues because it is not possible to add attached properties to it. Here are two other issues I stumbled upon, and what workaround I used to make it work anyway. Finding a button by name returns null Since the ApplicationBar is not in the tree of the Silverlight page, finding an element by name fails. For example consider the following code: <phoneNavigation:PhoneApplicationPage.ApplicationBar> <shell:ApplicationBar> <shell:ApplicationBar.Buttons> <shell:ApplicationBarIconButton IconUri="/Resources/edit.png" Click="EditButtonClick" x:Name="EditButton"/> <shell:ApplicationBarIconButton IconUri="/Resources/cancel.png" Click="CancelButtonClick" x:Name="CancelButton"/> </shell:ApplicationBar.Buttons> </shell:ApplicationBar> </phoneNavigation:PhoneApplicationPage.ApplicationBar> with private void EditButtonClick( object sender, EventArgs e) { CancelButton.IsEnabled = false; // Fails, CancelButton is always null } The CancelButton, even though it is named through an x:Name attribute, and even though it appears in Intellisense in the code behind, is null when it is needed. To solve the issue, I use the following code: public enum IconButton { Edit = 0, Cancel = 1 } public ApplicationBarIconButton GetButton( IconButton which) { return ApplicationBar.Buttons[(int) which] as ApplicationBarIconButton; } private void EditButtonClick( object sender, EventArgs e) { GetButton(IconButton.Cancel).IsEnabled = false; } Updating a Binding when the icon button is clicked In Silverlight, a Binding on a TextBox’s Text property can only be updated in two circumstances: When the TextBox loses the focus. Explicitly by placing a call in code. In WPF, there is a third option, updating the Binding every time that the Text property changes (i.e. every time that the user types a character). Unfortunately this option is not available in Silverlight). To select option 1, 2 (and in WPF, 3), you use the Mode property of the Binding class. The issue here is that pressing a button on the ApplicationBar does not remove the focus from the TextBox where the user is currently typing. If the button is a Save button, this is super annoying: The Binding does not get updated on the data object, the object is saved anyway with the old state, and noone understands what just happened. In order to solve this, you can make sure that the Binding is updated explicitly when the button is pressed, with the following code: private void SaveButtonClick(object sender, EventArgs e) { // Force update binding first var binding = MessageTextBox.GetBindingExpression( TextBox.TextProperty); binding.UpdateSource(); // Property was updated for sure, now we can save var vm = DataContext as MainViewModel; vm.Save(); } Obviously this is less maintainable than the usual way to do things in Silverlight. So be careful when using the ApplicationBar and remember that it is not a Silverlight element like the others!! Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • PENGUIN IS GETTING READY FOR ORACLE OPENWORLD 2012

    - by Zeynep Koch
    Are you looking for reasons to attend Oracle Openworld, how about below Oracle Linux sessions and hands-on-labs.  1. General Session: Oracle Linux Strategy and Roadmap  In this session, Oracle executives will discuss Linux strategy; the roadmap; contributions to the Linux mainline kernel; and what's in store for upcoming releases of Oracle Linux and the Unbreakable Enterprise Kernel. Don’t miss this session. 2. New Features in Oracle Linux- A Technical Deep Dive Collaborating with the Linux community, Oracle engineers contribute to advancing Linux for mission-critical deployments. In this technical session, attendees will learn about the recent developments in Oracle Linux and the Unbreakable Enterprise Kernel 3. Why Switch to Oracle Linux?  Oracle is the only company that provides a complete Linux solution from applications to disk, fully optimized for Oracle hardware and software, with one-stop support. In this session you will hear from two customers that have successfully implemented Oracle Linux and saved 50 to 90 percent on Linux support costs as well as the reasons to switch to Oracle Linux. 4. Debugging and Configuration Best Practices for Oracle Linux This is one of our best attended sessions and most informative. In this best practices session, learn how to save time and money while preventing headaches and hassles. Discover expert secrets to get your Linux systems up and running (and keep them running), avoid common pitfalls, prevent problems, and circumvent known issues. 5. Top Technical Tips for Automatic and Secure Oracle Linux Deployments In this session, attendees will learn about how to easily deploy and install Oracle Linux systems using various technologies like Kickstart, Oracle Enterprise Manager OpsCenter, and Oracle VM Templates for applications on Linux. Additionally, the session will share useful Linux security tips and introduce utilities to help with hardening and securely operating an Oracle Linux system. We also have a great session in Oracle Develop track: 6. DTrace for Oracle Linux Initially announced at last year's Oracle Openworld, DTrace for Oracle Linux is now available for the Unbreakable Enterprise Kernel R.2. In this session held by one of the engineers working on the DTrace for Linux port, you will learn how you can use this powerful and flexible framework in your development environment. If you prefer to really have practical experience, don’t miss our two Hands-on-Labs where we will cover: HOL-1 : Oracle Linux Package Management: Configuring and Enabling Services In this session you will be Installing and configuring Oracle VM VirtualBox, importing the Oracle Linux virtual appliance. You will then use the package management on Oracle Linux using RPM and yum. You will also be able to review Ksplice, zero downtime kernel updates that enable you to apply security updates, patches and critical bug fixes without rebooting. HOL-2: Oracle Linux Storage Management with LVM and Device Mapper In this session you will learn about storage management with LVM2, the Linux Logical Volume Manager, Btrfs, preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices, configuring encrypted block devices. You will also see Oracle Linux and Kpslice in the three demopods we will feature at Exhibition demogrounds. One in MySQL Connect and two in Oracle Openworld. What more do you need to come to San Francisco? 
Oh, I forgot to mention we also have great weather in the fall. Check out the Content Catalog and register to attend the Oracle Linux sessions.

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager’s extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with Plug-ins and Connectors.  Third-party developers continue to take advantage of Oracle Enterprise Manager’s Extensibility Development Kit (EDK) to build plug-ins to Enterprise Manager 12c, such as F5’s BIG IP Plug-in and Entuity’s Eye of the Storm Network Management Plug-In.  Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented.   Two very recent examples of partners which have beta versions of their plug-ins are Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.  VMware vSphere Plug-in by Blue Medora Blue Medora, an Oracle Partner Network (OPN) “Gold” member, which just announced that it is now signing up customers to try a beta version of their new VMware vSphere plug-in for Enterprise Manager 12c.  According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, Memory, Disk, Network, etc) at the Host, VM, Cluster and Resource Pool levels.  It has minimal performance impact via an “agentless” approach that requires no installation directly on VMware servers.  It has discovery capabilities for VMware Datacenters, ESX Hosts, Clusters, Virtual Machines, and Datastores.  It offers integration of native VMware Events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics.  It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host and VM target types.The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5 and 5.0.  Platforms supported include Linux 64-bit, Windows, AIX and Solaris SPARC and x86.  Information about the plug-in, including how to sign up for the beta, is available at their web site at http://bluemedora.com after selecting the "Products" tab. NetApp Storage Plug-in NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems with Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies.  So, NetApp built a plug-in and reports that it has comprehensive availability and performance information for NetApp storage systems.  Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond to faults and IO performance bottlenecks quickly. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured as per established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta. Learn More about Enterprise Manager Extensibility More plug-ins from other partners are soon to come, which I'll be reporting on them here.  
To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area.  The site also lists the plug-ins available with information on how to obtain them.  More info about the Oracle Validated Integration program can be found at the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • Ubuntu 12.04 LTS initramfs-tools dependency issue

    - by Mike
    I know this has been asked several times, but each issue and resolution seems different. I've tried almost everything I could think of, but I can't fix this. I have a VM (VMware I think) running 12.04.03 LTS which has stuck dependencies. The VM is on a rented host, running a live system so I don't want to break it (further). uname -a Linux support 3.5.0-36-generic #57~precise1-Ubuntu SMP Thu Jun 20 18:21:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Some more: sudo apt-get update [sudo] password for tracker: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. initramfs-tools : Depends: initramfs-tools-bin (< 0.99ubuntu13.1.1~) but 0.99ubuntu13.3 is installed E: Unmet dependencies. Try using -f. sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: initramfs-tools The following packages will be upgraded: initramfs-tools 1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. Need to get 0 B/50.3 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.1.1~); however: Version of initramfs-tools-bin on system is 0.99ubuntu13.3. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. dpkg: dependency problems prevent configuration of apparmor: apparmor depends on initramfs-tools; however: Package initramfs-tools is not configured yet. dpkg: error processing apparmor (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. Errors were encountered while processing: initramfs-tools apparmor E: Sub-process /usr/bin/dpkg returned an error code (1) If I look at the policy behind initramfs-tools / bin I get: apt-cache policy initramfs-tools initramfs-tools: Installed: 0.99ubuntu13.1 Candidate: 0.99ubuntu13.3 Version table: 0.99ubuntu13.3 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages *** 0.99ubuntu13.1 0 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages apt-cache policy initramfs-tools-bin initramfs-tools-bin: Installed: 0.99ubuntu13.3 Candidate: 0.99ubuntu13.3 Version table: *** 0.99ubuntu13.3 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages So the issue seems to be I have 0.99ubuntu13.3 for initramfs-tools-bin yet 0.99ubuntu13.1 for initramfs-tools, and can't upgrade to 0.99ubuntu13.3. I've performed apt-get clean/autoclean/install -f/upgrade -f many times but they won't resolve. I can think of only 2 other 'solutions': Edit the dpkg dependency list to trick it into doing the installation with a broken dependency. This seems very dodgy and it would be a last resort Downgrade both initramfs-tools and initramfs-tools-bin to 0.99ubuntu13 from the precise/main sources and hope that would get them in step. 
However, I'm not sure if this will be possible, or whether it would introduce more issues. I'm also not sure how this situation arose in the first place. /boot was 96% full; it's now 56% full (it's tiny - 64MB ... this is what I got from the hosting company). Can anyone offer advice please?
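A possible way to attack a version mismatch like this — untested here, so treat it as a sketch — is to ask apt for explicitly matching versions of both packages, using the version strings from the apt-cache policy output above, and then let dpkg finish the configuration:

# bring initramfs-tools up to the same version as initramfs-tools-bin
sudo apt-get install initramfs-tools=0.99ubuntu13.3 initramfs-tools-bin=0.99ubuntu13.3
sudo dpkg --configure -a
sudo apt-get -f install

If that still trips over the dependency, the same package=version syntax can be used to downgrade both packages to 0.99ubuntu13 from precise/main (option 2 above) and then upgrade them together afterwards.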

    Read the article

  • Windows Azure SDK 1.3 addresses early adopter feedback

    - by Eric Nelson
    At the end of November 2010 we released a new version of the Windows Azure SDK which contains many new features driven by the great feedback of early adopters plus a shiny new portal. New Portal implemented in Silverlight: The new portal is implemented using Silverlight and replaces the (IMHO rather clunky) original HTML + JavaScript portal. It is 100% better although does still have a few bugs. Enjoy! P.S. You can if you wish still use the old portal:   New runtime functionality: The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio and the new Windows Azure Management Portal: Elevated Privileges and Full IIS. You can now run a portion or all of your code in Web and Worker roles with elevated administrator privileges. The Web role now provides Full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules. Remote Desktop functionality enables you to connect to a running instance of your application or service in order to monitor activity and troubleshoot common problems. Windows Server 2008 R2 Roles: Windows Azure now supports Windows Server 2008 R2 in its Web, worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0. New runtime functionality – in beta: Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. You can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes. Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost. Windows Azure Connect: (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Windows Azure Virtual Network feature that we’re making available as a CTP. You can sign up for any of the betas via the Windows Azure Management Portal. Improved processes and simplified operations New portal! (see above) Access to new diagnostic information including the ability to click on a role to see role type, deployment time and last reboot time A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure. New scenario based Windows Azure Platform forums to help answer questions and share knowledge more efficiently. Multiple Service Administrators: Windows Azure now supports multiple Windows Live IDs to have administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.   Related Links Please also let us know through Microsoft Platform Ready if and when you intend to build an application using the Windows Azure Platform. Or indeed if you already have (Well done). You will get access to some great benefits if you do (more on that in a future post). It also really helps us better understand the demand out there which directly impacts how we will plan the next six months of activities around the Windows Azure Platform. 
Visit Microsoft Platform Ready to tell us about your plans for your applications
UK based? Interested in the Windows Azure Platform? Join http://ukazure.ning.com
Get started with the Windows Azure Platform: http://bit.ly/startazure

    Read the article

  • ArchBeat Link-o-Rama Top 10 for August 19-26, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook page for the week of August 19-26, 2012. Now Available: Oracle SQL Developer 3.2 (3.2.09.23) The latest release of Oracle SQl Developer includes UI enhancements, 12c database support, and bug fixes. ADF Tutorial Chapter 3: Creating a Master-Detail taskflow | Yannick Ongena Oracle ACE Yannick Ongena continues his ADF tutorial with a chapter devoted to view layer and using the data control to build pages that allow user to update reference data. GlassFish Community Event at JavaOne 2012 Don't miss out on this exclusive GlassFish Community Event on Sunday, September 30th from 11:00 a.m. – 1:00 p.m. in Moscone South. Register Now! Part of JavaOne 2012. Oracle BI 11g Book Authors – Podcast #9 | Art of Business Intelligence In this home-grown podcast, authors Christian Screen, Haroun Khan, and Adrian Ward talk about their new book, "Oracle Business Intelligence Enterprise Edition 11g: A Hands-On Tutorial," about their sessions at Oracle OpenWorld, and about their ORACLENERD t-shirts. Oracle Service Bus duplicate message check using Coherence | Jan van Zoggel "Giving the fact that every message on our ESB has an unique messageID element in the SOAP header we could store this on disk, database or in memory,"says Jan van Zoggel. "With the help of Oracle Coherence this last option, in memory, is relatively simple." Even simpler with Jan's detailed instructions. Oracle Technology Network Architect Day - Boston - Sept 12 There are easier ways to increase your IT brainpower. Skip the electrodes and register for Oracle Technology Network Architect Day in Boston, September 12, 2012. This free event includes 8 technical sessions, panel Q&A, roundtable discussions—and a free lunch. 8:00 a.m. – 5:00 p.m. at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803. Oracle BPM enable BAM | Peter Paul van de Beek "BAM enables you to make decisions based on real-time information gathered from your running processes," says Peter Paul van de Beek. "With BPMN processes you can use the standard Business Indicators that the BPM Suite offers you and use them to with BAM without much extra effort." Sample Application for Switching Application Module Data Sources | Andrejus Baranovskis A sample application and how-to guide from Oracle ACE Director and ADF expert Andrejus Baranovskis. ORCLville: Some Basic BI Thoughts "If we'd stop to consider what business intelligence really is, many of us might grow a different perspective about how we implement enterprise apps," says Oracle ACE Director Floyd Teter. "What if we implemented with an eye to what kind of information we'd like to get from our enterprise apps?" Oracle VM VirtualBox 4.1.20 released |Oracle's Virtualization Blog Oracle VM VirtualBox 4.1.20 was just released at the community and Oracle download sites, reports the Fat Bloke. This is a maintenance release containing bug fixes and stability improvements. Thought for the Day "The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures." — Frederick P. Brooks Source: SoftwareQuotes

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains) Introduction SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain, and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments. Why split cores can matter Split core allocations can silenty reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache, and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O bound or low-CPU utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance then this can have an important impact: This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down. What to do about it Split core situations are easily avoided. In most cases the logical domain constraint manager will avoid it without any administrative action, but it can be entirely prevented by doing one of the several actions: Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary. Use the whole core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole core allocation is used an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan. Don't obsess: - if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. 
If you have low utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests. Summary Split core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.
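Pulling the constraint commands from this note into one place — a short sketch using the same mydomain placeholder as the examples above:

# Allocate virtual CPUs in multiples of 8 threads so each domain lands on core boundaries
ldm set-vcpu 8 mydomain
ldm add-vcpu 24 mydomain

# Or use the whole-core constraint and think in cores instead of threads
ldm set-core 1 mydomain
ldm add-core 3 mydomain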

    Read the article

  • JavaOne 2012: Nashorn Edition

    - by $utils.escapeXML($entry.author)
    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Beside OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title, or abstract:CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AMThere are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle’s Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invoke dynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PMThe JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PMCome to this session to meet the Oracle JavaScript (Project Nashorn) language teamBOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PMWith Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you’ll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You’ll also hear about all of Node.jar’s mapping, caching, querying, performance, and scaling features.CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PMIn this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle’s Java HotSpot VM. You’ll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you’ll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.CON5251 - Putting the Metaobject Protocol to Work: Nashorn’s Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PMProject Nashorn is Oracle’s new JavaScript runtime in Java 8. 
Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration, it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects’ methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.

    Read the article

  • Java Mission Control for SE Embedded 8

    - by kshimizu-Oracle
    Java Mission Control, the production-time monitoring and diagnostics tool for the HotSpot JVM, can also be used with Java SE 8 Embedded. Once connected to a running JVM it presents runtime information, such as CPU usage, in a graphical UI. The outline below sticks to the essentials; see the reference links at the end for the full story.
1. How Java Mission Control connects to the target Java application:
- JMX remote connection (MBean server): available with the Compact 3 profile and the Full JRE of Java SE Embedded 8 (not with the Minimal VM).
- Flight Recorder: available with the Full JRE of Java SE Embedded 8 (not with the Minimal VM).
2. JVM-side setup on the target:
2.1. Enable the JMX remote connection (MBeans):
>java -Dcom.sun.management.jmxremote=true
      -Dcom.sun.management.jmxremote.port=7091               # port used for remote connections
      -Dcom.sun.management.jmxremote.authenticate=false      # authentication disabled
      -Dcom.sun.management.jmxremote.ssl=false               # SSL disabled
      -jar application.jar
Depending on the target's network configuration you may also need to tell the JVM which address to advertise to remote clients:
"-Djava.rmi.server.hostname=192.168.0.20"                    # the target's IP address / hostname
See question 5 of the monitoring and management FAQ (http://docs.oracle.com/javase/7/docs/technotes/guides/management/faq.html) for the details.
2.2. Enable Flight Recorder by adding the following JVM options:
"-XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
3. Start Java Mission Control: launch jmc, which ships with the JDK:
>"JDK_HOME"/bin/jmc
4. Connect Java Mission Control to the JVM: create a new connection using the target's IP address / hostname and the port configured above, then browse the MBeans or start a flight recording against that connection.
Reference URLs:
http://www.oracle.com/technetwork/jp/java/javaseproducts/mission-control/index.html
http://www.oracle.com/technetwork/jp/java/javaseproducts-old/mission-control/java-mission-control-wp-2008279-ja.pdf
http://www.oracle.com/technetwork/java/embedded/resources/tech/java-flight-rec-on-java-se-emb-8-2158734.html

    Read the article

  • Juggling with JDKs on Apple OS X

    - by Blueberry Coder
    I recently got a shiny new MacBook Pro to help me support our ADF Mobile customers. It is really a wonderful piece of hardware, although I am still adjusting to Apple's peculiar keyboard layout. Did you know, for example, that the « delete » key actually performs a « backspace »? But I disgress... As you may know, ADF Mobile development still requires JDeveloper 11gR2, which in turn runs on Java 6. On the other hand, JDeveloper 12c needs JDK 7. I wanted to install both versions, and wasn't sure how to do it.   If you remember, I explained in a previous blog entry how to install JDeveloper 11gR2 on Apple's OS X. The trick was to use the /usr/libexec/java_home command in order to invoke the proper JDK. In this case, I could have done the same thing; the two JDKs can coexist without any problems, since they install in completely different locations. But I wanted more than just installing JDeveloper. I wanted to be able to select my JDK when using the command line as well. On Windows, this is easy, since I keep all my JDKs in a central location. I simply have to move to the appropriate folder or type the folder name in the command I want to execute. Problem is, on OS X, the paths to the JDKs are... let's say convoluted.  Here is the one for Java 6. /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home The Java 7 path is not better, just different. /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home Intuitive, isn't it? Clearly, I needed something better... On OS X, the default command shell is bash. It is possible to configure the shell environment by creating a file named « .profile » in a user's home folder. Thus, I created such a file and put the following inside: export JAVA_7_HOME=$(/usr/libexec/java_home -v1.7) export JAVA_6_HOME=$(/usr/libexec/java_home -v1.6) export JAVA_HOME=$JAVA_7_HOME alias java6='export JAVA_HOME=$JAVA_6_HOME' alias java7='export JAVA_HOME=$JAVA_7_HOME'  The first two lines retrieve the current paths for Java 7 and Java 6 and store them in two environment variables. The third line marks Java 7 as the default. The last two lines create command aliases. Thus, when I type java6, the value for JAVA_HOME is set to JAVA_6_HOME, for example.  I now have an environment which works even better than the one I have on Windows, since I can change my active JDK on a whim. Here a sample, fresh from my terminal window. fdesbien-mac:~ fdesbien$ java6 fdesbien-mac:~ fdesbien$ java -version java version "1.6.0_65" Java(TM) SE Runtime Environment (build 1.6.0_65-b14-462-11M4609) Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-462, mixed mode) fdesbien-mac:~ fdesbien$ fdesbien-mac:~ fdesbien$ java7 fdesbien-mac:~ fdesbien$ java -version java version "1.7.0_45" Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) fdesbien-mac:~ fdesbien$ Et voilà! Maximum flexibility without downsides, just I like it. 
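If you prefer not to hard-code one alias per version, the same /usr/libexec/java_home trick can be wrapped in a small function in .profile — just a sketch of a variant of the setup above, and the function name is mine:

# usage: setjdk 1.6   or   setjdk 1.7
setjdk() {
  export JAVA_HOME=$(/usr/libexec/java_home -v"$1")
  java -version
}

Since the OS X java wrapper honors JAVA_HOME (as the terminal session above shows), java -version immediately reflects the switch.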

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!) About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience. My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem, and a CD-ROM, built-in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high-resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years. The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft and WiFi in Hawaii, and has presented many a deck. It has traveled with me tens of thousands of miles. Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%. The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but sometimes, on a long haul across a large airport, you know you're carrying it. Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway. So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:
    Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)
    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.
    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.
    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.
    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations.
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:
    % pgrep vmtasks
    8
    % prstat -p 8
      PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
        8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32
    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl(). Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future. Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:
    ISM
    system ncpu before after speedup
    ------ ---- ------ ----- -------
    x4600    32   1386   245      6X
    X7560    64   1016   153      7X
    M9000   512   1196   206      6X
    T5240   128   2506   234     11X
    T4-2    128   1197   107     11X
    DISM
    system ncpu before after speedup
    ------ ---- ------ ----- -------
    x4600    32   1582   265      6X
    X7560    64   1116   158      7X
    M9000   512   1165   152      8X
    T5240   128   2796   198     14X
    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:
    ISM, T4-2
             before after
             ------ -----
    create      702    64
    destroy     495    43
    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
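    As a rough way to watch the helper threads in action — a sketch, not part of the measurements above — you can monitor the vmtasks process with per-thread microstate accounting while a large segment is being created, for example during database startup:
    % prstat -mLp `pgrep -x vmtasks` 1
    The -m and -L flags show per-LWP microstate data, so during a big ISM or DISM creation you should briefly see a number of vmtasks LWPs accumulating CPU time; the trailing 1 simply refreshes the display every second.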

    Read the article
