Search Results

Search found 1054 results on 43 pages for 'reserved'.


  • On Redirect - Failed to generate a user instance of SQL Server...

    - by Craig Russell
    Hello (this is a long post, sorry). I am writing an application in ASP.NET MVC 2 and I have reached a point where I receive this error when I connect remotely to my server:

        Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed.

    I thought I had worked around this problem locally, as I was getting this error in debug when the site was redirected to a baseUrl if a subdomain was invalid, using this code:

        protected override void Initialize(RequestContext requestContext)
        {
            string[] host = requestContext.HttpContext.Request.Headers["Host"].Split(':');
            _siteProvider.Initialise(host, LiveMeet.Properties.Settings.Default["baseUrl"].ToString());
            base.Initialize(requestContext);
        }

        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            if (Site == null)
            {
                string[] host = filterContext.HttpContext.Request.Headers["Host"].Split(':');
                string newUrl;
                if (host.Length == 2)
                    newUrl = "http://sample.local:" + host[1];
                else
                    newUrl = "http://sample.local";
                Response.Redirect(newUrl, true);
            }
            ViewData["Site"] = Site;
            base.OnActionExecuting(filterContext);
        }

        public Site Site
        {
            get { return _siteProvider.GetCurrentSite(); }
        }

    The Site object is returned from a provider named _siteProvider. This does two checks: first against a database containing a list of all available subdomains; then, if that fails to find a valid subdomain or valid domain name, it searches a memory cache of reserved domains; if that doesn't hit either, it returns a baseUrl to which all invalid domains are redirected. Locally this worked when I added the true argument to Response.Redirect, assuming a halt of the current execution and a restart of execution on the browser redirect. What I have found in the stack trace is that the error is thrown on the second attempt to access the database.

        #region ISiteProvider Members

        public void Initialise(string[] host, string basehost)
        {
            if (host[0].Contains(basehost))
                host = host[0].Split('.');
            Site getSite = GetSites().WithDomain(host[0]);
            if (getSite == null)
            {
                sites.TryGetValue(host[0], out getSite);
            }
            _site = getSite;
        }

        public Site GetCurrentSite()
        {
            return _site;
        }

        public IQueryable<Site> GetSites()
        {
            return from p in _repository.groupDomains
                   select new Site
                   {
                       Host = p.domainName,
                       GroupGuid = (Guid)p.groupGuid,
                       IsSubDomain = p.isSubdomain
                   };
        }

        #endregion

    The LINQ query above is hit first, with a WithDomain filter; the error isn't thrown until the WithDomain filter is attempted. In summary: the error is hit after the page is redirected, so the first iteration executes as expected (so permissions on the database, user profiles etc. are correct); shortly after the redirect, when it filters the database query for the possible domain/subdomain of the current redirected page, it errors out.

    Read the article

  • Adding x11vnc as a Solaris SMF service

    - by rojanu
    I am trying to add x11vnc as an SMF service but cannot get the service to start. I tried googling but couldn't find anything that could help me. Here is the startup script:

        #!/sbin/sh
        #
        # Copyright (c) 1995, 1997-1999 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        #ident "@(#)x11vnc 1.14 06/11/17 SMI"

        case "$1" in
        'start')
            #/usr/local/bin/x11vnc -geometry 1280x1024 -noshm -display :0 -ncache 10 -noshm -shared -forever -o /tmp/vnc_remote.log -bg
            /usr/local/bin/x11vnc -unixpw -ncache 10 -display :0 -noshm -shared -forever -o /tmp/vnc_remote.log
            ;;
        'stop')
            /usr/bin/pkill -x -u 0 x11vnc
            ;;
        *)
            echo "Usage: $0 { start | stop }"
            ;;
        esac
        exit 0

    and here is the manifest file:

        <?xml version='1.0'?>
        <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
        <service_bundle type='manifest' name='vnc'>
          <service name='application/x11vnc' type='service' version='0'>
            <create_default_instance enabled='true'/>
            <single_instance/>
            <dependency name='docusp' grouping='require_all' restart_on='none' type='service'>
              <service_fmri value='svc:/milestone/multi-user-server:default'/>
            </dependency>
            <exec_method name='start' type='method' exec='/lib/svc/method/x11vnc' timeout_seconds='0'>
              <method_context/>
            </exec_method>
            <exec_method name='stop' type='method' exec=':true' timeout_seconds='10'>
              <method_context/>
            </exec_method>
            <stability value='Evolving' />
            <property_group name='startd' type='framework'>
              <propval name='ignore_error' type='astring' value='core,signal'/>
            </property_group>
          </service>
        </service_bundle>

    and the log file:

        Usage: /lib/svc/method/x11vnc { start | stop }
        [ Nov 16 19:35:52 Method "start" exited with status 0 ]
        [ Nov 16 19:35:52 Stopping because all processes in service exited. ]
        [ Nov 16 19:35:52 Executing stop method (:kill) ]
        [ Nov 16 19:35:52 Executing start method ("/lib/svc/method/x11vnc") ]
        Usage: /lib/svc/method/x11vnc { start | stop }
        [ Nov 16 19:35:52 Method "start" exited with status 0 ]
        [ Nov 16 19:35:52 Stopping because all processes in service exited. ]
        [ Nov 16 19:35:52 Executing stop method (:kill) ]
        [ Nov 16 19:35:52 Executing start method ("/lib/svc/method/x11vnc") ]
        Usage: /lib/svc/method/x11vnc { start | stop }
        [ Nov 16 19:35:52 Method "start" exited with status 0 ]
        [ Nov 16 19:35:52 Stopping because all processes in service exited. ]
        [ Nov 16 19:35:52 Executing stop method (:kill) ]
        [ Nov 16 19:35:52 Restarting too quickly, changing state to maintenance ]

    Any ideas?

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First, a little background. In what follows, I use C, C++ and Java for coding (general) algorithms: not GUIs and fancy programs with interfaces, but simple command line algorithms and libraries. I started out learning about programming in Java. I got pretty good with Java, and I learned to use the Java containers a lot, as they tend to reduce the complexity of bookkeeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java, and it felt cumbersome. I did not know C++ well enough to work in it without having to look up every single function, and so I quickly reverted to sticking to Java as much as possible. I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrating too much attention on a much too high-level language and needed more experience with how a CPU interacts with memory and what's really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obvious reasons, I could not use assembly language to code on a daily basis; it was mostly reserved for fun diversions. After learning more about the computer through this experience, I realized that C++ is much closer to the "level of 1's and 0's" than Java is, but I still felt it to be incredibly obtuse, like a Swiss Army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagement" to not abstract away what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the STL vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks almost entirely like C with containers from C++ thrown in, the only feature I use from C++. I'd like to know if it's considered okay in practice to use just one feature of C++ and ignore the rest in favor of C-style code.
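
    To make the style concrete, here is a minimal illustrative sketch (not from the original post; the Graph structure and function names are invented for illustration) of C-flavored code that borrows only std::vector from C++:

        #include <cstdio>
        #include <vector>

        // C-style struct; std::vector stands in for a hand-rolled growable array.
        struct Graph {
            int n;                              // number of vertices
            std::vector<std::vector<int>> adj;  // adjacency lists
        };

        void graph_init(Graph* g, int n) {
            g->n = n;
            g->adj.assign(n, std::vector<int>());
        }

        void graph_add_edge(Graph* g, int u, int v) {
            g->adj[u].push_back(v);  // grows automatically; no realloc bookkeeping
            g->adj[v].push_back(u);
        }

        int main() {
            Graph g;
            graph_init(&g, 4);
            graph_add_edge(&g, 0, 1);
            graph_add_edge(&g, 1, 2);
            std::printf("degree of vertex 1: %zu\n", g.adj[1].size());
            return 0;
        }

    The point of the sketch is that the container handles growth, copying and cleanup, while everything else stays in the free-function, pass-a-pointer style the question describes.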

    Read the article

  • synchronized in java - Proper use

    - by ZoharYosef
    I'm building a simple program for use by multiple threads. My question is more about understanding: when do I have to use the reserved word synchronized? Do I need to use it on every method that touches the member variables? I know I can put it on any method that is not static, but I want to understand more. Thank you! Here is the code:

        public class Container {
            // *** data members ***
            public static final int INIT_SIZE = 10; // the first (init) size of the set.
            public static final int RESCALE = 10;   // the re-scale factor of this set.
            private int _sp = 0;
            public Object[] _data;

            /************ Constructors ************/
            public Container() {
                _sp = 0;
                _data = new Object[INIT_SIZE];
            }

            public Container(Container other) { // copy constructor
                this();
                for (int i = 0; i < other.size(); i++) this.add(other.at(i));
            }

            /** return true if this collection is empty, else return false. */
            public synchronized boolean isEmpty() { return _sp == 0; }

            /** add an Object to this set */
            public synchronized void add(Object p) {
                if (_sp == _data.length) rescale(RESCALE);
                _data[_sp] = p; // shallow copy semantics.
                _sp++;
            }

            /** returns the actual amount of Objects contained in this collection */
            public synchronized int size() { return _sp; }

            /** returns true if this container contains an element which is equal to ob */
            public synchronized boolean isMember(Object ob) {
                return get(ob) != -1;
            }

            /** return the index of the first object which equals ob; if none, returns -1 */
            public synchronized int get(Object ob) {
                int ans = -1;
                for (int i = 0; i < size(); i = i + 1)
                    if (at(i).equals(ob)) return i;
                return ans;
            }

            /** returns the element located at the ind place in this container (null if out of range) */
            public synchronized Object at(int p) {
                if (p >= 0 && p < size()) return _data[p];
                else return null;
            }
        }
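
    For illustration (not part of the original question, and it assumes the rescale(int) helper referenced above is implemented), here is a minimal sketch of why those synchronized modifiers matter: two threads share one Container, and without synchronization the unprotected read-increment-write of _sp could interleave and lose updates. The rule of thumb is that every method reading or writing the shared state, not only the writers, must synchronize on the same lock.

        // Minimal demo: two threads adding to one Container.
        // With synchronized add()/size(), the final size is always 2000;
        // without synchronized, lost updates to _sp could make it smaller.
        public class ContainerDemo {
            public static void main(String[] args) throws InterruptedException {
                Container c = new Container();
                Runnable task = () -> {
                    for (int i = 0; i < 1000; i++) c.add(Integer.valueOf(i));
                };
                Thread t1 = new Thread(task);
                Thread t2 = new Thread(task);
                t1.start(); t2.start();
                t1.join(); t2.join();
                System.out.println("size = " + c.size()); // expect 2000
            }
        }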

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported. For Oracle database customers we ship three distinct database users:

    - Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only.

    - Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    - Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    You may notice the words "by default" in the list above. The values supplied with the installer are the defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema. To manage the permissions for the users, a utility is provided with the installation (oragensec for Oracle, db2gensec for DB2 or msqlgensec for SQL Server) that generates the security definitions for the above users. It can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions.
    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the userids appropriately. This means:

    - Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for the read-only operations. The role is treated as a tag that indicates to the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.

    - Running oragensec for the read/write user and the read-only user against the appropriate administration user (I will abbreviate this to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for the associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it.

    - Remembering to rerun oragensec against the users if I run upgrades, service packs or database-based single fixes. This ensures that the users stay in synchronization with the ADM user.

    As a side note, for those who do not understand the difference between DML, DCL and DDL:

    - DDL (Data Definition Language) - SQL statements that define the database schema and the structures within it. SQL statements such as CREATE and DROP are examples of DDL.

    - DCL (Data Control Language) - SQL statements that define the database-level permissions to DDL-maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL.

    - DML (Data Manipulation Language) - SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML.

    Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER, which allows the database to still use the administration user for the main processing but makes the database session more traceable.
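
    As an illustrative sketch only (the FW4USER/FW4READ names follow the author's example convention, the password placeholders are hypothetical, and oragensec remains the supported way to generate the full object-level grants), the role-as-tag arrangement in Oracle boils down to DCL like this:

        -- Create a read/write and a read-only product user, then tag each
        -- with the role that oragensec uses to derive its permissions.
        CREATE USER FW4USER IDENTIFIED BY "a_strong_password";
        GRANT CONNECT, CISUSER TO FW4USER;   -- CISUSER = read/write role

        CREATE USER FW4READ IDENTIFIED BY "another_password";
        GRANT CONNECT, CISREAD TO FW4READ;   -- CISREAD = read-only role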

    Read the article

  • Announcing Windows Azure Mobile Services

    - by Leniel Macaferi
    I'm excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services. Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications. It lets you easily store structured data in the cloud that can span devices and users, and to integrate that data with user authentication. You can also send updates to clients via push notifications. Today's release lets you add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly turn your ideas into apps. We will also be adding support to enable these same scenarios for Windows Phone, iOS and Android devices soon. Read this getting-started tutorial, which walks through how you can build (in less than 5 minutes) a simple cloud-enabled Windows 8 "Todo List" app using Windows Azure Mobile Services, or watch this video, where I show how to build it step by step.

    Getting started. If you don't already have a Windows Azure account, you can sign up for a no-obligation free trial. Once signed up, click the "preview features" section under the "account" tab of the www.windowsazure.com website and enable your account for access to the "Mobile Services" preview. Instructions on how to enable these new features can be found here. Once you have Mobile Services enabled, log into the Windows Azure Portal, click the "New" button and choose the new "Mobile Services" icon to create your first mobile backend. Once created, you'll see a quick-start page like the one below, with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app to the mobile backend. Read this getting-started tutorial for step-by-step instructions on how to build (in less than 5 minutes) a simple Windows 8 "Todo List" app that stores data in Windows Azure.

    Storing data in the cloud. Storing data in the cloud with Windows Azure Mobile Services is incredibly easy. When you create a Windows Azure Mobile Service, we automatically associate it with a SQL database inside Windows Azure. The Windows Azure Mobile Services backend then provides built-in support that enables remote apps to securely store and retrieve data through it (using secure REST endpoints with a JSON-based OData format), without you having to write or deploy any custom server code. Built-in management support is provided within the Windows Azure Portal for creating new tables, browsing data, creating indexes, and controlling access permissions. This makes it incredibly easy to connect client applications to the cloud, and enables client-focused developers who don't know much about server-side code to be productive from the start. They can focus on building the client app experience, and take advantage of Windows Azure Mobile Services to provide the cloud backend services they require. Below is an example of the sort of Windows 8 client-side C#/XAML code that could be used to query data from a Windows Azure Mobile Service. Client developers using C# can write queries like this using LINQ and strongly typed POCO objects, which are later translated into HTTP REST queries that run against a Windows Azure Mobile Service. Developers don't need to write or deploy any custom server-side code to enable client-side code like this to execute asynchronously and populate the client UI. Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their solution with additional server-side functionality and more advanced business logic if they want to. This provides maximum flexibility, and allows developers to grow their solutions to meet any need.

    User authentication and push notifications. Windows Azure Mobile Services also makes it incredibly easy to integrate user authentication/authorization and push notifications into your apps. You can use these capabilities to enable authentication and to control access permissions on the data you store in the cloud at a granular level. You can also send push notifications to users/devices when data changes. Windows Azure Mobile Services supports the concept of "server scripts" (small pieces of script that run on the server in response to actions), which make enabling these scenarios very easy. Here are links to some step-by-step tutorials for common authentication/authorization/push scenarios you can use with Windows Azure Mobile Services and Windows 8 apps: Enabling User Authentication; Authorizing Users; Getting Started with Push Notifications; Push Notifications to multiple Users.

    Manage and monitor your Mobile Service. Just like every other service in Windows Azure, you can monitor usage and metrics of your Mobile Service backend using the "Dashboard" tab within the Windows Azure Portal. The Dashboard tab provides a monitoring view that shows the API calls, bandwidth, and server CPU cycles consumed by your Windows Azure Mobile Service. You can also use the "Logs" tab within the portal to review error messages. This makes it easy to monitor and track how your application is doing.

    Scale up as your business grows. Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services for free, in a shared, multi-tenant hosting environment (where your Mobile Service backend is one of several apps running on a shared set of server resources). This provides an easy way to get started on projects at no cost (note: each free Windows Azure account also includes a 1GB SQL database that you can use with any number of apps or Windows Azure Mobile Services). If your client app becomes popular, you can click the "Scale" tab of your Mobile Service and switch from "Shared" to "Reserved" mode. Doing so lets you isolate your apps so that you are the only customer within a virtual machine, and allows you to elastically scale the amount of resources your apps consume, letting you scale capacity up (or down) as your traffic changes. With Windows Azure you pay for compute capacity on a per-hour basis, which allows you to scale resources up and down to meet only what you need. This enables a super flexible model that is ideal for new mobile app scenarios, as well as for new companies that are just getting started.

    Summary. I've only scratched the surface of what you can do with Windows Azure Mobile Services; there are a lot more capabilities to explore. With Windows Azure Mobile Services, you'll be able to build mobile app scenarios faster than ever, enabling even better user experiences by connecting your client apps to the cloud. Visit the Windows Azure Mobile Services dev center to learn more, and build your first Windows 8 app connected to Windows Azure today. And read this getting-started tutorial, with step-by-step instructions showing how you can build (in less than 5 minutes) a simple cloud-enabled Windows 8 "Todo List" app using Windows Azure Mobile Services.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I also use Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu

    (Translated from the original post by Leniel Macaferi.)

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 3

    - by SQLOS Team
    In parts 1 and 2 of this series we looked at the basics of Hyper-V Dynamic Memory and SQL Server memory management. In this part Serdar looks at configuration guidelines for SQL Server memory management.

    Part 3: Configuration Guidelines for Hyper-V Dynamic Memory and SQL Server

    Now that we understand SQL Server memory management and Hyper-V Dynamic Memory basics, let's take a look at general configuration guidelines for getting the benefits of Hyper-V Dynamic Memory in your SQL Server VMs.

    Requirements

    Host operating system requirements: the Hyper-V Dynamic Memory feature was introduced with Windows Server 2008 R2 SP1. Therefore, in order to use Dynamic Memory for your virtual machines, you need Windows Server 2008 R2 SP1 or Microsoft Hyper-V Server 2008 R2 SP1 on your Hyper-V host.

    Guest operating system requirements: in addition, Dynamic Memory is only supported in the Standard, Web, Enterprise and Datacenter editions of Windows running inside VMs. Make sure that your VM is running one of these editions. For additional requirements for each operating system, see "Dynamic Memory Configuration Guidelines" here.

    SQL Server requirements: all versions of SQL Server support Hyper-V Dynamic Memory. However, only certain editions of SQL Server are aware of dynamically changing system memory. To have a truly dynamic environment for your SQL Server VMs, make sure that you are running one of the SQL Server editions listed below:

    - SQL Server 2005 Enterprise
    - SQL Server 2008 Enterprise / Datacenter editions
    - SQL Server 2008 R2 Enterprise / Datacenter editions

    Configuration guidelines for other versions of SQL Server are covered below in the FAQ section.

    Guidelines for Configuring Dynamic Memory Parameters

    Here is how to configure Dynamic Memory for your SQL VMs in a nutshell:

        Hyper-V Dynamic Memory Parameter | Recommendation
        Startup RAM                      | 1 GB + SQL Min Server Memory
        Maximum RAM                      | > SQL Max Server Memory
        Memory Buffer %                  | 5
        Memory Weight                    | Based on performance needs

    Startup RAM: In order to ensure that your SQL Server VMs can start correctly, ensure that Startup RAM is higher than the configured SQL Min Server Memory for your VMs. Otherwise the SQL Server service will need to page in order to start, since it will not be able to see enough memory during startup. Also note that Startup Memory is always reserved for your VMs. This guarantees a certain level of performance for your SQL Servers; however, setting it too high will limit the consolidation benefits you get out of your virtualization environment.

    Maximum RAM: This one is obvious. If you've configured SQL Max Server Memory for your SQL Server, make sure that the Dynamic Memory Maximum RAM setting is higher than this value. Otherwise your SQL Server will not grow to memory values higher than the value configured for Dynamic Memory.

    Memory Buffer %: The memory buffer setting is used to provision file cache to virtual machines in order to improve performance. Because SQL Server manages its own buffer pool, the Memory Buffer setting should be configured to the lowest value possible, 5%. Configuring a higher memory buffer will prevent low-resource notifications from the Windows Memory Manager and will prevent reclaiming memory from SQL Server VMs.

    Memory Weight: The memory weight setting defines the importance of memory to a VM. Configure higher values for the VMs that have higher performance requirements.
    VMs with higher memory weight will have more memory under high memory pressure conditions on your host.

    Questions and Answers

    Q1 - Which SQL Server memory model is best for Dynamic Memory? The best SQL Server memory model for Dynamic Memory is the "Locked Page Memory Model". This memory model ensures that SQL Server memory is never paged out, and it is also adaptive to dynamically changing memory in the system. This is extremely useful when Dynamic Memory is attempting to remove memory from SQL Server VMs, ensuring no SQL Server memory is paged out. You can find instructions on configuring the "Locked Page Memory Model" for your SQL Servers here.

    Q2 - What about other SQL Server editions; how should I configure Dynamic Memory for them? Other editions of SQL Server do not adapt to dynamically changing environments. They determine how much memory to allocate during startup and don't change this value afterwards. Therefore, make sure that you configure a higher startup memory for your VM, because that will be all the memory that SQL Server utilizes. Tune Maximum Memory and Memory Buffer based on the other workloads running on the system. If there are no other workloads, consider using Static Memory for these editions.

    Q3 - What if I have multiple SQL Server instances in a VM? Having multiple SQL Server instances in a VM is not a general recommendation for predictable performance, manageability and isolation. In order to achieve predictable behavior, make sure that you configure SQL Min Server Memory and SQL Max Server Memory for each instance in the VM, and make sure that:

    - Dynamic Memory Startup Memory is greater than the sum of the SQL Min Server Memory values for the instances in the VM
    - Dynamic Memory Maximum Memory is greater than the sum of the SQL Max Server Memory values for the instances in the VM

    Q4 - I'm using the Large Page Memory Model for my SQL Server. Can I still use Dynamic Memory? The short answer is no. SQL Server does not dynamically change its memory size when configured with the Large Page Memory Model. In virtualized environments Hyper-V provides large page support by default, and most of the time the Large Page Memory Model doesn't bring any benefits to a SQL Server running in a virtualized environment.

    Q5 - How do I monitor SQL performance when I'm trying Dynamic Memory on my VMs? Use the performance counters below to monitor memory performance for SQL Server:

    - Process - Working Set: This counter is available in the VM via process performance counters. It represents the actual amount of physical memory being used by the SQL Server process in the VM.
    - SQL Server - Buffer Cache Hit Ratio: This counter is available in the VM via SQL Server counters. It represents the paging being done by SQL Server. A rate of 90% or higher is desirable.

    Conclusion

    These blog posts are a quick start to a story that will be developing more in the near future. We're continuing our testing and investigations to provide more detailed configuration guidelines, with example performance numbers, in a white paper in the upcoming months. Now it's time to give SQL Server and Hyper-V Dynamic Memory a try. Use these guidelines (a configuration sketch follows at the end of this entry) to kick-start your environment. See what you think about it and let us know your experiences.

    - Serdar Sutay

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
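
    As a rough sketch of how the table of recommendations above might be applied (hypothetical VM name and memory sizes; note that the Set-VMMemory cmdlet ships with the Hyper-V PowerShell module in Windows Server 2012 and later, while the 2008 R2 SP1 hosts discussed in the article were configured through Hyper-V Manager or WMI):

        # Hypothetical example: SQL Min Server Memory = 4 GB, SQL Max Server Memory = 12 GB.
        # Startup = 1 GB + SQL Min; Maximum > SQL Max; Buffer = 5%; Weight per performance needs.
        Set-VMMemory -VMName "SQLVM01" `
            -DynamicMemoryEnabled $true `
            -StartupBytes 5GB `
            -MaximumBytes 16GB `
            -Buffer 5 `
            -Priority 80   # memory weight (0-100); higher = more important under pressure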

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog post I will share a technique I used to intercept TFS web service calls. This technique is a very invasive one and requires you to overwrite the default TFS web service behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider the TFS Subscribers functionality before taking this approach; check Martin's post about subscribers). So let's get started.

    The technique consists of versioning the TFS web services' .asmx service classes. If you look into TFS's ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let's take the Work Item management service. Check the following .asmx file, located at:

        %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx

    The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3:

        <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%>
        <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %>

    Note the naming convention used for service versioning in the inheritance hierarchy for this service class (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic than the validations/constraints provided by the process template.

    Important: back up the original .asmx file and create one of your own.
    1. Create a Visual Studio Web App project and include a new ASMX web service in the project.

    2. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\):

        Microsoft.TeamFoundation.Framework.Server.dll
        Microsoft.TeamFoundation.Server.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll
        Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll
        Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll

    3. Replace the default service implementation with something similar to the following code:

        /// <summary>
        /// Inherit from ClientService3 to overwrite the default implementation
        /// </summary>
        [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03",
                    Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")]
        public class CustomTfsClientService : ClientService3
        {
            [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)]
            public override bool BulkUpdate(
                XmlElement package,
                out XmlElement result,
                MetadataTableHaveEntry[] metadataHave,
                out string dbStamp,
                out Payload metadata)
            {
                var xe = XElement.Parse(package.OuterXml);

                // We only intercept WorkItem updates (this sample can easily be extended to capture any operation).
                var wit = xe.Element("UpdateWorkItem");
                if (wit != null)
                {
                    if (wit.Attribute("WorkItemID") != null)
                    {
                        int witId = (int)wit.Attribute("WorkItemID");
                        // With this Id I can query TFS for more detailed information,
                        // using the TFS Client API (assuming the WIT already exists).
                        var stateChanged =
                            wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State");
                        if (stateChanged != null)
                        {
                            var newStateName = stateChanged.Element("Value").Value;
                            if (newStateName == "Resolved")
                            {
                                throw new Exception("Cannot change state to Resolved!");
                            }
                        }
                    }
                }

                // Finally, we call the base method implementation
                return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata);
            }
        }

    4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don't forget to back it up first).

    5. Copy your project's .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin

    6. Try saving a WorkItem into the Resolved state.

    Enjoy!

    Read the article

  • Invitation to an Oracle event on Data Center Transformation

    - by Eloy M. Rodríguez
    Now that the year is coming to an end and the final pushes on the items that need closing are being left behind, it is a good moment to pause along the way, attend this event that Oracle is organizing, and reflect on the innovative approaches being proposed, since the current situation demands different actions and, at times, the trees hide the forest. Below is the official invitation, with the agenda and access to automatic registration.

    Data Center Transformation: Accelerating Effective Cloud Adoption

    Join us at the Data Center Transformation event and discover how to implement a data center designed to promote innovation, offering greater performance and reliability, simplifying management and significantly reducing costs. Come and learn about the latest technological developments applicable to your business that Oracle has just announced at Oracle OpenWorld, its flagship worldwide conference, such as the SuperCluster, the new T4 processor and the Pillar storage solutions. Only Oracle designs hardware and software to work together, from applications down to disk, which reduces complexity, boosts productivity throughout the enterprise and accelerates business innovation. Join us to discover how to transform your data center to maximize efficiency and re-establish IT as a competitive advantage for your business. Share ideas and experiences with the best experts and executives and discover how to:

    - Accelerate data center transformation through technology that provides dramatic performance and greater efficiency
    - Reduce costs, and speed up and simplify the deployment and consolidation of databases and applications
    - Optimize performance by using Oracle products with built-in virtualization technology at no additional cost
    - Minimize risk during enterprise cloud deployments with the support of the market-leading security products
    - Increase productivity and respond quickly to market changes with Oracle's optimized solutions

    Transform your data center to optimize performance, increase your business agility and maximize your IT investments. Don't miss this opportunity: register today for this event, which will take place on December 14 in Madrid. For more information, contact [email protected].

    December 14, 2011, 09:00 - 16:00
    CÍRCULO DE BELLAS ARTES DE MADRID, C/ Alcalá, 42, 28014 Madrid (entrance via C/ Marqués de Casa Riera)

    Agenda:

        09:00  Registration
        09:30  Welcome and introduction - João Taron, Vice-President & Hardware Leader, Oracle Iberia
        09:45  Oracle strategy - Gerhard Schlabschi, Business Development Director, Oracle Systems EMEA
        10:20  How to transform your data center effectively - Manuel Vidal, Director Systems Presales, Oracle Iberia
        10:45  Customer success story
        11:15  Coffee break
        11:45  Consolidation in a private cloud:
               Extreme performance with Oracle Exalogic Elastic Cloud & Exadata - Lisa Martinez, Business Development Manager, Oracle
               Accelerating enterprise applications with SPARC SuperClusters and T4 enterprise servers - Carlos Soler Ibanez, Principal Sales Consultant, Oracle
        13:15  Lunch
        14:15  Data center optimization:
               How to maximize the potential of your infrastructure with Oracle virtualized systems - Javier Cerrada, Senior Sales Consultant, Oracle
               Optimizing storage resources with data tiering - Miguel Angel Borrega, Storage Architect, Oracle
        15:00  Data center management:
               Oracle Solaris 11 - Javier Cerrada, Senior Sales Consultant, Oracle
               Enterprise Manager 12c - Jesus Robles, Master Principal Sales Consultant, Oracle
        15:45  Questions & answers
        16:00  Conversations with your Oracle contacts & iPad raffle

    Read the article

  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have a problem as described below. I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start the SQL Server Browser service from SQL Server Configuration Manager; it takes a long time to fail, then fails with:

        The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details.

    I checked the event logs and found these errors, in this order (all within the same 1-second time frame):

        The SQL Server Browser service port is unavailable for listening, or invalid.
        The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
        The SQL Server Browser is enabling SQL instance and connectivity discovery support.
        The SQL Server Browser service was unable to establish Analysis Services discovery.
        The SQL Server Browser service has started.
        The SQL Server Browser service has shutdown.

    I checked firewall rules, and both port 1433 (TCP) and 1434 (UDP) are wide open; likewise, the program and service binary have been allowed through Windows Firewall. I started the Analysis Services service by hand and it works fine. Browser still won't start.

    Some history:

    - Installed SQL 2008 R2 Express Advanced
    - Installed SQL 2012 Express Advanced
    - Uninstalled SQL 2008 R2 Express Advanced
    - Installed 2012 SSDT and lots of features with the Express install
    - Installed a unique instance of SQL 2012 Enterprise with all features
    - Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
    - Uninstalled SQL 2012 Express
    - Uninstalled SQL 2012 Enterprise
    - Removed anything with "SQL" in the name from Control Panel "Programs and Features"
    - Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service was failing to start, even during the install)
    - Added the Analysis Services feature (and everything else) via the installer (Browser continued to fail to start during the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32>cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

        C:\Program Files (x86)\Microsoft SQL Server\90\Shared>sqlbrowser.exe -c
        SQLBrowser: starting up in console mode
        SQLBrowser: starting up SSRP redirection service
        SQLBrowser: failed starting SSRP redirection services -- shutting down.
        SQLBrowser: starting up OLAP redirection service
        SQLBrowser: Stopping the OLAP redirector

        C:\Program Files (x86)\Microsoft SQL Server\90\Shared>

    As I try to repair the install, it errors out saying:

        The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    Clicking Retry fails every time. When clicking Cancel I get:

        The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete.
        For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    When I go to uninstall the SQL Browser from "Programs and Features", it complains:

        Error opening installation log file. Verify that the specified log file location exists and is writable.

    Is there any way I can fix this short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how do I do that?

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 by Dov Trietsch. All rights reserved.

    Finding a person in the forest, or: limiting the AD results in the SharePoint People Picker.

    There are times when we need to limit the SharePoint audience of certain farms, servers or site collections to a particular group of users. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us, your humble servant included, are not Active Directory experts, but we must be able to handle the "audience restrictions" as required. So here is how it's done, in a nutshell.

    Important note: not all of this can be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure the People Picker. The stsadm command is:

        stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther

    Note the long hyphenated property name. Now to filling in the ADQuery.

    LDAP queries in a nutshell: syntax

    LDAP is no older than SQL, and an LDAP query is actually a query against the LDAP database. LDAP attributes are the equivalent of database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators, including:

        =         Equal
        <=        Lower than or equal
        >=        Greater than or equal
        memberOf  A group membership
        !         Not
        *         Wildcard

    Equal and memberOf are the most commonly used. Checking for absence uses the ! (not) and the * (wildcard).

        Example: (SN=Grant) - all whose last name (SurName) is Grant
        Example: (!(SN=Grant)) - all except Grant
        Example: (!(SN=*)) - all where there is no SurName, i.e. SurName is absent (probably rappers)
        Example: (CN=MyGroup) - Common Name is MyGroup
        Example: (GN=J*) - all the Given Names that start with J (JJ, Jane, Jon, John, etc.)

    The cryptic SN, CN, GN, etc. are attributes; more about them later. All queries are enclosed in parentheses: (Query). Complex queries are comprised of sets in AND or OR conditions. AND is denoted by the ampersand (&) and OR is denoted by the vertical pipe (|). The general syntax is that of prefix Polish notation, where the operator precedes the variables; e.g. +ab is the sum of a and b. In an LDAP query, (&(A)(B)) will garner the objects for which both A and B are true, and (&(A)(B)(C)) the objects for which A, B and C are all true; there is no limit to the number of conditions. Likewise, (|(A)(B)) garners the objects for which either A or B is true, and (|(A)(B)(C)) those for which at least one of A, B and C is true, again with no limit on the number of conditions. More complex queries have both types of conditions, and the parentheses determine the order of operations.

    Attributes

    Now let's get into the SN, CN, GN and other attributes of the query:

        SN  - the SurName (last name)
        GN  - the Given Name (first name)
        CN  - the Common Name, usually GN followed by SN
        OU  - an Organizational Unit, such as a division, department, etc.
        DC  - a Domain Component in the AD forest
        l   - lower-case 'L', stands for location. Jerusalem, anybody? Or Katmandu.
        UPN - the User Principal Name, usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved; some limit the total to 8 characters. If we have many 'jsmith's we have to somehow distinguish them from each other.
        DN  - the Distinguished Name, a name unique to the AD forest in which it lives.
Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have stricter requirement. Each group has to have a unique name - its CN and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different object like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person). Putting it altogether Let’s get a list of all the Johns in the SPAdmin group of the Jerusalem that local domain. (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above. |(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky) Here I added the or pipeline at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: You may see a dryer but more complete list of attributes rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes) And the third is –pn peoplepicker-serviceaccountdirectorypaths –pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monogram – peoplepicker-searchadcustomquery - is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why? C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (!Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully. 
    And now the real end. The main property discussed in this sprawling and seemingly endless monograph, peoplepicker-searchadcustomquery, is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn't. Can you see why?

        C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (Title=David)
        Operation completed successfully.

        C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (!Title=David)
        Operation completed successfully.

        C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC)
        Command line error. Too many OUs

        C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName)
        Operation completed successfully.

        C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (DC=TopDC,DC=MidDC,DC=BottomDC)
        Operation completed successfully.

        C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC)
        Operation completed successfully.

    That's all, folks!

    Read the article

  • Windows Azure VMs - New "Stopped" VM Options Provide Cost-effective Flexibility for On-Demand Workloads

    - by KeithMayer
    Originally posted on: http://geekswithblogs.net/KeithMayer/archive/2013/06/22/windows-azure-vms---new-stopped-vm-options-provide-cost-effective.aspx

    Didn't make it to TechEd this year? Don't worry! This month, we'll be releasing a new article series that highlights the best of the TechEd announcements and technical information for IT pros. Today's article focuses on a new, much-heralded enhancement to Windows Azure Infrastructure Services that makes it more cost-effective to spin VMs up and down on demand on the Windows Azure cloud platform.

    NEW! VMs that are shut down from the Windows Azure Management Portal no longer continue to accumulate compute charges while stopped! Before this enhancement was available, the Azure platform maintained fabric resource reservations for VMs, even in a shutdown state, to ensure consistent resource availability when starting those VMs in the future. This meant that VMs had to be exported and completely deprovisioned when not in use to avoid compute charges. In this article, I'll provide more details on the scenarios that this enhancement best fits, and I'll also review the new options and considerations that we now have for performing safe shutdowns of Windows Azure VMs.

    Which scenarios does the new enhancement best fit?

    Being able to easily shut down VMs from the Windows Azure Management Portal without continued compute charges is a great enhancement for certain cloud use cases, such as:

    - On-demand dev/test/lab environments - freely start and stop lab VMs so that they only accumulate compute charges when being actively used.
    - "Bursting" load-balanced web applications - provision a number of load-balanced VMs, but keep only the minimum number of VMs running to support "normal" loads. Easily start the remaining VMs only when needed to support peak loads.
    - Disaster recovery - start up "cold" VMs when needed to recover from disaster scenarios.

    BUT... there is a consideration to keep in mind when using the Windows Azure Management Portal to shut down VMs: although performing a VM shutdown via the portal causes that VM to no longer accumulate compute charges, it also deallocates the VM from the fabric resources to which it was previously assigned. These fabric resources include compute resources, such as virtual CPU cores and memory, as well as network resources, such as IP addresses. This means that when the VM is later started after being shut down from the portal, it could be assigned a different IP address or placed on a different compute node within the fabric.

    In some cases, you may want to shut down VMs using the old approach, where fabric resource assignments are maintained while the VM is in a shutdown state. Specifically, you may wish to do this when temporarily shutting down or restarting a "7x24" VM as part of a maintenance activity. Good news: you can still revert to the old VM shutdown behavior when necessary by using the alternate VM shutdown approaches listed below. Let's walk through each approach for performing a VM shutdown on Windows Azure so that we can understand the benefits and considerations of each...

    How many ways can I shut down a VM?
In Windows Azure Infrastructure Services, there are three general ways to safely shut down VMs:

1. Shutdown VM via the Windows Azure Management Portal
2. Shutdown Guest Operating System inside the VM
3. Stop VM via Windows PowerShell using the Windows Azure PowerShell Module

Although each of these options performs a safe shutdown of the guest operating system and the VM itself, each option handles the VM shutdown end state differently.

Shutdown VM via Windows Azure Management Portal

When clicking the Shutdown button at the bottom of the Virtual Machines page in the Windows Azure Management Portal, the VM is safely shut down and "deallocated" from fabric resources. [Figure: Shutdown button on the Virtual Machines page in the Windows Azure Management Portal]

When the shutdown process completes, the VM will be shown on the Virtual Machines page with a "Stopped ( Deallocated )" status. [Figure: Virtual Machine in a "Stopped (Deallocated)" status]

"Deallocated" means that the VM configuration is no longer actively associated with fabric resources, such as virtual CPUs, memory and networks. In this state, the VM will not continue to accumulate compute charges, but since fabric resources are deallocated, the VM could receive a different internal IP address ( called "Dynamic IPs" or "DIPs" in Windows Azure ) the next time it is started.

TIP: If you are leveraging this shutdown option and consistency of DIPs is important to applications running inside your VMs, you should consider using virtual networks with your VMs. Virtual networks permit you to assign a specific IP address space for use with VMs that are assigned to that virtual network. As long as you start VMs in the same order in which they were originally provisioned, each VM should be reassigned the same DIP that it was previously using.

What about consistency of External IP Addresses?

Great question! External IP addresses ( called "Virtual IPs" or "VIPs" in Windows Azure ) are associated with the cloud service in which one or more Windows Azure VMs are running. As long as at least 1 VM inside a cloud service remains in a "Running" state, the VIP assigned to a cloud service will be preserved. If all VMs inside a cloud service are in a "Stopped ( Deallocated )" status, then the cloud service may receive a different VIP when VMs are next restarted.

TIP: If consistency of VIPs is important for the cloud services in which you are running VMs, consider keeping one VM inside each cloud service in the alternate VM shutdown state listed below to preserve the VIP associated with the cloud service.

Shutdown Guest Operating System inside the VM

When performing a Guest OS shutdown or restart ( i.e., a shutdown or restart operation initiated from the Guest OS running inside the VM ), the VM configuration will not be deallocated from fabric resources. In this case the VM is shown with a "Stopped" VM status rather than the "Stopped ( Deallocated )" VM status described above. Note that it may take a few minutes for the Windows Azure Management Portal to reflect that the VM is in a "Stopped" state in this scenario, because we are performing an OS shutdown inside the VM rather than going through an Azure management endpoint. [Figure: Virtual Machine in a "Stopped" status]

VMs shown in a "Stopped" status will continue to accumulate compute charges, because fabric resources are still being reserved for these VMs.
However, this also means that DIPs and VIPs are preserved for VMs in this state, so you don't have to worry about VMs and cloud services getting different IP addresses when they are started in the future.

Stop VM via Windows PowerShell

In the latest version of the Windows Azure PowerShell Module, a new -StayProvisioned parameter has been added to the Stop-AzureVM cmdlet. This new parameter provides the flexibility to choose the VM configuration end result when stopping VMs using PowerShell:

- When running the Stop-AzureVM cmdlet without the -StayProvisioned parameter specified, the VM will be safely stopped and deallocated; that is, the VM will be left in a "Stopped ( Deallocated )" status, just like the end result when a VM Shutdown operation is performed via the Windows Azure Management Portal.
- When running the Stop-AzureVM cmdlet with the -StayProvisioned parameter specified, the VM will be safely stopped but fabric resource reservations will be preserved; that is, the VM will be left in a "Stopped" status, just like the end result when performing a Guest OS shutdown operation.

So, with PowerShell, you can choose how Windows Azure should handle VM configuration and fabric resource reservations when stopping VMs on a case-by-case basis (a short sketch follows at the end of this article).

TIP: It's important to note that the -StayProvisioned parameter is only available in the latest version of the Windows Azure PowerShell Module. So, if you've previously downloaded this module, be sure to download and install the latest version to get this new functionality.

Want to Learn More about Windows Azure Infrastructure Services?

To learn more about Windows Azure Infrastructure Services, be sure to check out these additional FREE resources:

- Become our next "Early Expert"! Complete the Early Experts "Cloud Quest" and build a multi-VM lab network in the cloud for FREE!
- Build some cool scenarios! Check out our list of over 20+ Step-by-Step Lab Guides based on key scenarios that IT Pros are implementing on Windows Azure Infrastructure Services TODAY!

Looking forward to seeing you in the Cloud! - Keith

Build Your Lab! Download Windows Server 2012. Don’t Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines. Want to Get Certified? Join our Windows Server 2012 "Early Experts" Study Group.
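As a quick reference, here is a minimal PowerShell sketch of the two Stop-AzureVM behaviors described above. The cloud service and VM names are placeholders, and it assumes the latest Windows Azure PowerShell Module is installed and your subscription is already configured:

```powershell
# Stop AND deallocate (same end state as the portal's Shutdown button):
# compute charges stop, but the VM may get a different DIP on restart.
Stop-AzureVM -ServiceName "myCloudService" -Name "myVM"

# Stop but keep the fabric reservation (same end state as a Guest OS shutdown):
# DIPs/VIPs are preserved, and compute charges continue to accrue.
Stop-AzureVM -ServiceName "myCloudService" -Name "myVM" -StayProvisioned
```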

    Read the article

  • Oracle Day - November 15, 2012

    - by TUFEKCIOGLU,FATIH
Event details for Oracle Day, to be held on November 15 at the Istanbul Congress Center in Harbiye: click here for the Oracle Day event details.

Make Use of the Latest Technology: Leave Time for Innovation and Competition
November 15, 2012

Cloud Computing, Mobility, Social Media and Big Data are redefining the world as we know it. Be one of the first to bring these technologies into your organization; strengthen your competitive position by seizing the opportunity to develop new products and services faster, improve the customer experience, and bring new, innovative business models to life. Oracle and its partners help you adapt these technological innovations to your organization, so that you gain an advantage from market changes before your competitors do. Join us at Oracle Day to learn how Oracle simplifies information technology by using the latest technologies in software and hardware designed to work together.

At Oracle Day, don't miss the opportunity to:
- Learn about Oracle's Cloud Computing, Big Data, Social Media and business application solutions
- Hear customer success stories about successful business transformations
- Meet industry peers who are experiencing the same challenges you are
- Meet Oracle experts and partners and watch new product introductions
- Get the latest product news from Oracle OpenWorld

Regards,
Oracle Turkey

Register now!
Platinum Sponsor
Istanbul Congress Center, Taskisla Caddesi, Harbiye 34367 Istanbul / Turkey
Thursday, November 15, 2012, 08:30 - 18:30
RSVP: [email protected]
oracle.com/oracleday
Follow us: #oracleday

Session legend: Oracle | Oracle Partner | Customer Success Story | TROUG | Session is in English

The Day's Agenda

08:45-09:30 Registration
09:30-10:00 Welcome - Filiz Dogan, General Manager, Oracle Turkey
10:00-10:30 Navigating Complexity by Simplifying I.T. - Andrew Mendelsohn, Senior Vice President, Oracle Database Server Technologies, Oracle
10:30-11:00 A Transformational Cloud Journey - Ilker Kuruöz, CIO, Turkcell
11:00-11:20 The Must-Haves of Data Centers in the New Era - Yalim Eristiren, Deputy General Manager, Intel
11:20-11:30 Slimfit - Feyza Narli, Business Solutions Director, Innova
11:30-12:00 Innovation with Java - Cuma Yigit, Technical Architect, Etiya; Yusuf Tok, Java Group Manager, OBSS; Ersun Engel, Sales Manager, Oracle
12:00-12:10 Coffee Break

Afternoon breakout tracks (Rooms 1-10):
1. Customer Experience: Empower Your Employees. Strengthen Your Brand.
2. Transformation in Business Processes
3. More Data, Faster Results: Strengthen Your Business with Analytics Solutions - But How?
4. A Solution Roadmap for Cloud Builders
5. Use the Power of Innovation in Your Business Applications
6. Unleash the Power of IT with the Next-Generation Data Center
7. Oracle & Partner Solutions - Success Stories I
8. Oracle & Partner Solutions - Success Stories II
9. Oracle Financial Services - Core Banking and Analytical Solutions
10. Oracle User Group (TROUG)

12:10-12:40 - Customer Experience: Empower Your Employees. Strengthen Your Brand. (Oracle); Cost Management and Production Tracking at the Tekfen Ceyhan Steel Plant (Innova); Moving Faster with More Data: Strengthening Creativity with Action (Oracle); Architect Your Cloud: A Blueprint for Cloud Builders (Oracle); Technology Strategies that Drive Business Excellence: Get Social. Be Mobile. Run Cloud. (Oracle); Unleash the Power of IT with the Next-Generation Data Center (Oracle); Innovate with Oracle - Virtual Banking and Self Service Channels (Oracle); What's Next for Oracle Database? (Oracle)

12:40-13:40 Lunch

13:40-14:10 - Your Applications Are Now in the Cloud (Oracle); Finance in the 21st Century: Use the Potential - Reach the Results! (Oracle); Oracle Big Data and Business Analytics Solutions with Intel Processors, "At the Speed of Thought" (Intel); From Sea Level to the Clouds I (Oracle); Socialize Your CRM: Telaura Social CRM (Etiya); The Trend in the Next-Generation Data Center: Simplicity (Oracle); Subscriber Information Management System (ABYS) (Inspirit); High Oracle Database Performance - When Oracle Database Is Integrated with Oracle Storage (Oracle); Financial Transformation and Integrated Data Warehouse Solutions in Insurance (Oraturk); SQL/PLSQL New Features (TROUG)

14:10-14:20 Coffee Break

14:20-14:50 - Social Channels in Sales and Marketing (Oracle); Customer Success Stories Panel: The Transformation Journey and Its Results (Akbank, Teknosa, Dogus Holding, Ceynak); Tukas's Analytics Journey (Gtech); From Sea Level to the Clouds II (Oracle); Loupe: Proactive Monitoring for IP-Based Services (Netas); Eliminate the Risks in Your Data Center (Oracle); Oracle iAS to WebLogic Migration (OBSS); Oracle ZFS Storage Appliance - Turkcell Experiences (Turkcell & Gantek); Analytical Transformation - Risk and Finance Together to Address the Regulatory Changes Today and Tomorrow (Oracle); Data Mining Is Done in the Database: Oracle R Enterprise and the Oracle Data Mining Option, with Applications (TROUG)

14:50-15:00 Coffee Break

15:00-15:30 - Integrated Solutions in Talent Management: Recruiting Is Now Easier with Taleo (Oracle); Customer Success Stories Panel: The Transformation Journey and Its Results (Akbank, Teknosa, Dogus Holding, Ceynak); Big Data & Exalytics - Exadata's Future Roadmap (Oracle); Free Platinum Services on Engineered Systems (Oracle); How Aksigorta Tests Its Applications with Oracle ATS (Application Testing Suite) (Aksigorta); Raise Your Service Levels with Superior Performance and Flexibility (Oracle); Process Management with Oracle BPM in Smart Municipality Applications (Sampas); End-to-End Backup Solutions with Tape (Remivac); Connecting with Customers to Enhance Revenue Generation: Unleashing the Power of an Enterprise Revenue Management and Billing Solution (Oracle); Today's Application Architecture Problems and Suggested Solutions (TROUG)

15:30-15:40 Coffee Break

15:40-16:10 - Sales Network Management with the New JD Edwards Release (TupperWare & Akademi Danismanlik); Centralized Business Rules Management in Turkish with Oracle Policy Automation (Oracle); Corporate Scorecard Solutions with Oracle BI (Oracle); With Turkcell Süperbulut, Software Is Now at Your Service (Turkcell); Engineered Systems Now Make a Difference for SAP Customers Too (Oracle); Building the Next-Generation Data Center: Tried and Proven Methods (Oracle); Türk Telekom SOA Project Success Story (Türk Telekom); How Does Yapi Kredi Sigorta Monitor End-User Experience in Its Applications? (Yapi Kredi Sigorta); NFC Trends for 2013 (Smartsoft); Karagöz and Hacivat in the Database (TROUG)

16:10-16:20 Coffee Break

16:20-16:50 - The Journey to Consistent, Unified Data: Oracle Master Data Management; Asset Management at Turkcell/Superonline; Easy Analysis of Every Kind of Data with Endeca; How Do You Secure Cloud Computing?; Winning the Competition: With GoldenGate, Make the Right Decisions Before Your Competitors; Cloud Infrastructure Strategies for Your Data Center; The Oracle Orchestra: Lend an Ear to Polyphonic Management; Logistics Intelligence / Horoz Lojistik Success Story; High-Performance Java Applications with Engineered Systems; International Growth - Helping the Banks Standardize Overseas Operations; Oracle Big Data. (Presenters, in the order listed in the source: Oracle, Turkcell, Oracle, Oracle, Oracle, Oracle, Kora & Horoz Lojistik, Oracle, Oracle, TROUG)

16:50-18:30 Cocktail Reception

If you are an employee or official of a public institution or organization, please click here for information on the important ethics rules that apply to this event.

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Contact Us | Legal Notices | Privacy Statement

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it, and a few folks on Twitter pointed me in the right direction.

The question I ran into is: How do I create a type instance of a generic type when I have dynamically acquired the type at runtime?

Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today, I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete.

Generic Types at Runtime

One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this:

if (arrayType.GetInterface("IDictionary") != null)
{
    if (arrayType.IsGenericType)
    {
        var keyType = arrayType.GetGenericArguments()[0];
        var valueType = arrayType.GetGenericArguments()[1];
        …
    }
}

The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>, so I know what the parent container class type is. Based on the container type, it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (ie. string, CustomerEntity). That's the easy part.

Creating a Generic Type and Providing Generic Parameters at RunTime

The next problem is: how do I get a concrete type instance for the generic type? I know what the type name is and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes, I tried them unsuccessfully first):

Type elementType = typeof(keyvalue<keyType, valueType>);
Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>);

The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of:

Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType);

The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however and produces a non-generic type reference. You can see the difference between the full generic type and the non-typed (?) generic type in the debugger: the nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType), it's then easy to create an instance.
In my case the parser parses through the JSON, and when it completes parsing the value/object it creates a new keyvalue<K,V> instance. Now that I know the element type, that's pretty trivial with:

// Objects start out null until we find the opening tag
resultObject = Activator.CreateInstance(elementType);

Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items.

Parsing through a Generic type when you only have Runtime Type Information

When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem is again that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them, to avoid all the dynamic casting madness:

else if (IsIDictionary)
{
    IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary;
    foreach (dynamic item in items)
    {
        dict.Add(item.key, item.value);
    }
    return dict;
}

This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here).

Static <- -> Dynamic

Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works, but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code. Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string to object parser after all :-)

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp.
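To see the whole pattern end to end, here's a small self-contained sketch of the technique. This is my own illustration, not Rick's parser code: it uses the BCL's KeyValuePair<,> in place of the article's custom keyvalue<K,V> type, and the dictionary type stands in for whatever type a parser might discover at runtime:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Pretend this type was discovered at runtime by the parser.
        Type arrayType = typeof(Dictionary<string, decimal>);

        if (arrayType.GetInterface("IDictionary") != null && arrayType.IsGenericType)
        {
            // Generic arguments come back in declaration order: key, then value.
            Type keyType = arrayType.GetGenericArguments()[0];    // string
            Type valueType = arrayType.GetGenericArguments()[1];  // decimal

            // Close the open generic type with the runtime-supplied arguments...
            Type elementType = typeof(KeyValuePair<,>).MakeGenericType(keyType, valueType);

            // ...and instantiate it.
            object resultObject = Activator.CreateInstance(elementType);

            Console.WriteLine(elementType.Name);        // KeyValuePair`2
            Console.WriteLine(resultObject.GetType());  // KeyValuePair`2[System.String,System.Decimal]
        }
    }
}
```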

    Read the article

  • Announcing Windows Azure Mobile Services

    - by ScottGu
I’m excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services.

Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications. It allows you to easily store structured data in the cloud that can span both devices and users, integrate it with user authentication, and send out updates to clients via push notifications.

Today’s release enables you to add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly build out your app ideas. We’ll also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon.

Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services. Or watch this video of me showing how to do it step by step.

Getting Started

If you don’t already have a Windows Azure account, you can sign up for a no-obligation Free Trial. Once you are signed up, click the “preview features” section under the “account” tab of the www.windowsazure.com website and enable your account to support the “Mobile Services” preview. Instructions on how to enable this can be found here.

Once you have the Mobile Services preview enabled, log into the Windows Azure Portal, click the “New” button and choose the new “Mobile Services” icon to create your first mobile backend. Once created, you’ll see a quick-start page with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app with it.

Storing Data in the Cloud

Storing data in the cloud with Windows Azure Mobile Services is incredibly easy. When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure. The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based ODATA format) – without you having to write or deploy any custom server code. Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions.

This makes it incredibly easy to connect client applications to the cloud, and enables client developers who don’t have a server-code background to be productive from the very beginning. They can instead focus on building the client app experience, and leverage Windows Azure Mobile Services to provide the cloud backend services they require.

Below is an example of client-side Windows 8 C#/XAML code that could be used to query data from a Windows Azure Mobile Service. Client-side C# developers can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that run against a Windows Azure Mobile Service.
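The query code appears only as a screenshot in the original post, so it is missing from this excerpt. The sketch below is a hedged reconstruction along the same lines (not Scott's exact code), using the Mobile Services managed client SDK; the service URL, application key, and TodoItem class are placeholders from the quick-start tutorial:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

public class TodoListPage
{
    // The endpoint and key come from the mobile service's quick-start page in the portal.
    private MobileServiceClient client =
        new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

    // The LINQ query below is translated into a JSON/OData REST call
    // against the mobile service - no custom server code required.
    public async Task<List<TodoItem>> GetIncompleteItemsAsync()
    {
        return await client.GetTable<TodoItem>()
                           .Where(item => item.Complete == false)
                           .ToListAsync();
    }
}
```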
Developers don’t have to write or deploy any custom server-side code to enable client-side code like the query above to execute and asynchronously populate their client UI.

Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their initial solution and add custom server functionality and more advanced logic if they want. This provides maximum flexibility, and enables developers to grow and extend their solutions to meet any needs.

User Authentication and Push Notifications

Windows Azure Mobile Services also makes it incredibly easy to integrate user authentication/authorization and push notifications within your applications. You can use these capabilities to enable authentication and fine-grained access control permissions on the data you store in the cloud, as well as to trigger push notifications to users/devices when the data changes. Windows Azure Mobile Services supports the concept of “server scripts” (small chunks of server-side script that execute in response to actions) that make it really easy to enable these scenarios.

Below are some tutorials that walk through common authentication/authorization/push scenarios you can do with Windows Azure Mobile Services and Windows 8 apps:

- Enabling User Authentication
- Authorizing Users
- Get Started with Push Notifications
- Push Notifications to multiple Users

Manage and Monitor your Mobile Service

Just like with every other service in Windows Azure, you can monitor usage and metrics of your mobile service backend using the “Dashboard” tab within the Windows Azure Portal. The Dashboard tab provides a built-in monitoring view of the API calls, bandwidth, and server CPU cycles of your Windows Azure Mobile Service. You can also use the “Logs” tab within the portal to review error messages. This makes it easy to monitor and track how your application is doing.

Scale Up as Your Business Grows

Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services in a free, shared/multi-tenant hosting environment (where your mobile backend will be one of multiple apps running on a shared set of server resources). This provides an easy way to get started on projects at no cost beyond the database you connect your Windows Azure Mobile Service to (note: each Windows Azure free trial account also includes a 1GB SQL Database that you can use with any number of apps or Windows Azure Mobile Services).

If your client application becomes popular, you can click the “Scale” tab of your Mobile Service and switch from “Shared” to “Reserved” mode. Doing so allows you to isolate your apps so that you are the only customer within a virtual machine. This allows you to elastically scale the amount of resources your apps use – allowing you to scale up (or scale down) your capacity as your traffic grows.

With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need. This enables a super flexible model that is ideal for new mobile app scenarios, as well as for startups who are just getting going.

Summary

I’ve only scratched the surface of what you can do with Windows Azure Mobile Services – there are a lot more features to explore. With Windows Azure Mobile Services you’ll be able to build mobile app experiences faster than ever, and enable even better user experiences – by connecting your client apps to the cloud.
Visit the Windows Azure Mobile Services development center to learn more, and build your first Windows 8 app connected with Windows Azure today. And read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • iptables DNS resolution

    - by Favolas
I have a virtual machine with Fedora 19 acting as a router. This machine has an interface (p8p1) with the IP 172.16.1.254 that is connected to another machine (IP 172.16.1.1) that's simulating the external network.

I've installed snort 2.9.2.2, applied the snortsam-2.9.2.2.diff.gz patch, and installed snortsam 2.70 on the router machine.

In snort.conf, besides altering some RULE_PATH entries, I believe I've only added the following line to the file:

output alert_fwsam: 127.0.0.1:898/password

After doing this, I ran these two commands:

ifconfig p8p1 promisc
/usr/local/snort/bin/snort -v -i p8p1

If I ping from the external network to the router IP, I can see the info about the pings. One of the rules that I have is icmp-info.rules, which has this single line:

alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP-INFO Echo Reply"; icode:0; itype:0; classtype:misc-activity; sid:408; rev:6;fwsam: src, 5 minutes;)

snortsam.conf has this data:

defaultkey password
accept localhost
keyinterval 30 minutes
dontblock 192.168.1.1 # local network
rollbackhosts 50
rollbackthreshold 20 / 30 secs
rollbacksleeptime 1 minute
logfile /var/log/snort/snortsam.log
loglevel 3
daemon
nothreads
# important line for generating the blocks via iptables
iptables p8p1 LOG
bindip 127.0.0.1

Now I run this command:

/usr/local/snort/bin/snort -u snort -i p8p1 -c /etc/snort/snort.conf -l /var/log/snort -Dq

The terminal gives this message:

Spawning daemon child...
My daemon child 2080 lives...
Daemon parent exiting (0)

and when I run snortsam in the terminal I get this:

SnortSam, v 2.70. Copyright (c) 2001-2009 Frank Knobbe. All rights reserved.
Plugin 'fwsam': v 2.5, by Frank Knobbe
Plugin 'fwexec': v 2.7, by Frank Knobbe
Plugin 'pix': v 2.9, by Frank Knobbe
Plugin 'ciscoacl': v 2.12, by Ali Basel <[email protected]>
Plugin 'cisconullroute': v 2.5, by Frank Knobbe
Plugin 'cisconullroute2': v 2.2, by Wouter de Jong <[email protected]>
Plugin 'netscreen': v 2.10, by Frank Knobbe
Plugin 'ipchains': v 2.8, by Hector A. Paterno <[email protected]>
Plugin 'iptables': v 2.9, by Fabrizio Tivano <[email protected]>, Luis Marichal <[email protected]>
Plugin 'ebtables': v 2.4, by Bruno Scatolin <[email protected]>
Plugin 'watchguard': v 2.7, by Thomas Maier <[email protected]>
Plugin 'email': v 2.12, by Frank Knobbe
Plugin 'email-blocks-only': v 2.12, by Frank Knobbe
Plugin 'snmpinterfacedown': v 2.3, by Ali BASEL <[email protected]>
Plugin 'forward': v 2.8, by Frank Knobbe
Parsing config file /etc/snortsam.conf...
Linking plugin 'iptables'...
Checking for existing state file "/var/db/snortsam.state". Found. Reading state file.
Starting to listen for Snort alerts.

and snortsam.log has an entry like this:

2013/10/25, 10:15:17, -, 1, snortsam, Starting to listen for Snort alerts.

Now, from the external machine I do ping 172.16.1.254 and it starts showing the info, and an alert file is created in /var/log/snort/ that has the info about the PINGs. Something like:

[**] [1:408:6] ICMP-INFO Echo Reply [**]
[Classification: Misc activity] [Priority: 3]
10/25-10:35:16.061319 172.16.1.254 -> 172.16.1.1
ICMP TTL:64 TOS:0x0 ID:38720 IpLen:20 DgmLen:84
Type:0 Code:0 ID:1389 Seq:1 ECHO REPLY

Also, if I instead run /usr/local/snort/bin/snort snort -v -i p8p1, I get this message:

Running in packet dump mode

--== Initializing Snort ==--
Initializing Output Plugins!
Snort BPF option: snort
pcap DAQ configured to passive.
The DAQ version does not support reload.
Acquiring network traffic from "p8p1".
ERROR: Can't set DAQ BPF filter to 'snort' (pcap_daq_set_filter: pcap_compile: syntax error)!
Fatal Error, Quitting..

So, these are my questions: Shouldn't snortsam block the PING? Is that DAQ error causing the problem? If so, how can I solve it?

    Read the article

  • How to create a simple adf dashboard application with EJB 3.0

    - by Rodrigues, Raphael
In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application that supports persistent user personalization. You can read the entire article by clicking here. The idea in that article is to extend the dashboard application. My idea here is to create a similar dashboard application, but with EJB 3.0 instead of the ADF BC model layer. There is just one small trick here, and I'll show you. I'm using the usual Oracle HR schema (Departments and Employees only). The steps are:

1. Create an ADF Fusion Application with EJB as the model layer.

2. Generate the entities from tables (I'm using Departments and Employees only).

3. Create a new Session Bean. I called it HRSessionEJB.

4. Create a new method like this:

public List getAllDepartmentsHavingEmployees(){
    JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate();
    Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees");
    JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class);
    return query.getResultList();
}

5. In the Departments entity, create a new native query annotation:

@Entity
@NamedQueries({ @NamedQuery(name = "Departments.findAll", query = "select o from Departments o") })
@NamedNativeQueries({ @NamedNativeQuery(name = "Departments.allDepartmentsHavingEmployees",
    query = "select e.department_id, d.department_name, sum(e.salary), avg(e.salary), max(e.salary), min(e.salary) from departments d, employees e where d.department_id = e.department_id group by e.department_id, d.department_name") })
public class Departments implements Serializable {...}

6. Create a new POJO called AggregatedDepartment:

package oramag.sample.dashboard.model;

import java.io.Serializable;
import java.math.BigDecimal;

public class AggregatedDepartment implements Serializable {
    @SuppressWarnings("compatibility:5167698678781240729")
    private static final long serialVersionUID = 1L;
    private BigDecimal departmentId;
    private String departmentName;
    private BigDecimal sum;
    private BigDecimal avg;
    private BigDecimal max;
    private BigDecimal min;

    public AggregatedDepartment() {
        super();
    }

    public AggregatedDepartment(BigDecimal departmentId, String departmentName, BigDecimal sum,
                                BigDecimal avg, BigDecimal max, BigDecimal min) {
        super();
        this.departmentId = departmentId;
        this.departmentName = departmentName;
        this.sum = sum;
        this.avg = avg;
        this.max = max;
        this.min = min;
    }

    public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
    public BigDecimal getDepartmentId() { return departmentId; }
    public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
    public String getDepartmentName() { return departmentName; }
    public void setSum(BigDecimal sum) { this.sum = sum; }
    public BigDecimal getSum() { return sum; }
    public void setAvg(BigDecimal avg) { this.avg = avg; }
    public BigDecimal getAvg() { return avg; }
    public void setMax(BigDecimal max) { this.max = max; }
    public BigDecimal getMax() { return max; }
    public void setMin(BigDecimal min) { this.min = min; }
    public BigDecimal getMin() { return min; }
}

7. Create the utility class called JavaBeanResult. The purpose of this class is to configure a native SQL query to return POJOs in a single line of code. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html

package oramag.sample.dashboard.model.util;

/*******************************************************************************
 * Copyright (c) 2010 Oracle.
All rights reserved. * This program and the accompanying materials are made available under the * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0 * which accompanies this distribution. * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html * and the Eclipse Distribution License is available at * http://www.eclipse.org/org/documents/edl-v10.php. * * @author shsmith ******************************************************************************/ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.util.ArrayList; import java.util.List; import javax.persistence.Query; import org.eclipse.persistence.exceptions.ConversionException; import org.eclipse.persistence.internal.helper.ConversionManager; import org.eclipse.persistence.internal.sessions.AbstractRecord; import org.eclipse.persistence.internal.sessions.AbstractSession; import org.eclipse.persistence.jpa.JpaHelper; import org.eclipse.persistence.queries.DatabaseQuery; import org.eclipse.persistence.queries.QueryRedirector; import org.eclipse.persistence.sessions.Record; import org.eclipse.persistence.sessions.Session; /*** * This class is a simple query redirector that intercepts the result of a * native query and builds an instance of the specified JavaBean class from each * result row. The order of the selected columns musts match the JavaBean class * constructor arguments order. * * To configure a JavaBeanResult on a native SQL query use: * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class); * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery. * * @author shsmith * */ public final class JavaBeanResult implements QueryRedirector { private static final long serialVersionUID = 3025874987115503731L; protected Class resultClass; public static void setQueryResultClass(Query query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query); databaseQuery.setRedirector(javaBeanResult); } public static void setQueryResultClass(DatabaseQuery query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); query.setRedirector(javaBeanResult); } protected JavaBeanResult(Class resultClass) { this.resultClass = resultClass; } @SuppressWarnings("unchecked") public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) { List results = new ArrayList(); try { Constructor[] constructors = resultClass.getDeclaredConstructors(); Constructor javaBeanClassConstructor = null; // (Constructor) resultClass.getDeclaredConstructors()[0]; Class[] constructorParameterTypes = null; // javaBeanClassConstructor.getParameterTypes(); List rows = (List) query.execute( (AbstractSession) session, (AbstractRecord) arguments); for (Object[] columns : rows) { boolean found = false; for (Constructor constructor : constructors) { javaBeanClassConstructor = constructor; constructorParameterTypes = javaBeanClassConstructor.getParameterTypes(); if (columns.length == constructorParameterTypes.length) { found = true; break; } // if (columns.length != constructorParameterTypes.length) { // throw new ColumnParameterNumberMismatchException( // resultClass); // } } if (!found) throw new ColumnParameterNumberMismatchException( resultClass); Object[] constructorArgs = new Object[constructorParameterTypes.length]; for (int j = 0; j < columns.length; j++) { Object columnValue = columns[j]; Class 
parameterType = constructorParameterTypes[j]; // convert the column value to the correct type--if possible constructorArgs[j] = ConversionManager.getDefaultManager() .convertObject(columnValue, parameterType); } results.add(javaBeanClassConstructor.newInstance(constructorArgs)); } } catch (ConversionException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalArgumentException e) { throw new ColumnParameterMismatchException(e); } catch (InstantiationException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalAccessException e) { throw new ColumnParameterMismatchException(e); } catch (InvocationTargetException e) { throw new ColumnParameterMismatchException(e); } return results; } public final class ColumnParameterMismatchException extends RuntimeException { private static final long serialVersionUID = 4752000720859502868L; public ColumnParameterMismatchException(Throwable t) { super( "Exception while processing query results-ensure column order matches constructor parameter order", t); } } public final class ColumnParameterNumberMismatchException extends RuntimeException { private static final long serialVersionUID = 1776794744797667755L; public ColumnParameterNumberMismatchException(Class clazz) { super( "Number of selected columns does not match number of constructor arguments for: " + clazz.getName()); } } } 8. Create the DataControl and a jsf or jspx page 9. Drag allDepartmentsHavingEmployees from DataControl and drop in your page 10. Choose Graph > Type: Bar (Normal) > any layout 11. In the wizard screen, Bars label, adds: sum, avg, max, min. In the X Axis label, adds: departmentName, and click in OK button 12. Run the page, the result is showed below: You can download the workspace here . It was using the latest jdeveloper version 11.1.2.2.

    Read the article

  • UppercuT &ndash; Custom Extensions Now With PowerShell and Ruby

    - by Robz / Fervent Coder
Arguably, one of the most powerful features of UppercuT (UC) is the ability to extend any step of the build process with a pre, post, or replace hook. This customization is done in a separate location from the build, so you can upgrade without wondering whether you broke the build. There is a hook before each step of the build runs. There is a hook after. And, back to power again, there is a replacement hook: if you don't like what a step is doing and/or you want to replace its entire functionality, you just drop in a custom replacement extension and UppercuT will perform the custom step instead.

Up until recently, all custom hooks had to be written in NAnt. Now they are a little sweeter, because you no longer need to use NAnt to extend UC if you don't want to. You can use PowerShell. Or Ruby.

Let that sink in for a moment. You don't have to interact with NAnt at all now.

Extension Points

On the wiki, all of the extension points are shown. The basic idea is that you put whatever customization you are doing in a separate folder named build.custom. Let's take a look at all we can customize:

The start point is default.build. It calls build.custom/default.pre.build if it exists, then it runs build/default.build (normal tasks) OR build.custom/default.replace.build if it exists, and finally build.custom/default.post.build if it exists. Every step below runs with the same extension points but changes the file name it is looking for. NOTE: If you include default.replace.build, nothing else will run because everything is called from default.build.

* policyChecks.step
* versionBuilder.step
  NOTE: If you include build.custom/versionBuilder.replace.step, the items below will not run.
  - svn.step, tfs.step, or git.step (the custom tasks for these need to go in build.custom/versioners)
* generateBuildInfo.step
* compile.step
* environmentBuilder.step
* analyze.step
  NOTE: If you include build.custom/analyze.replace.step, the items below will not run.
  - test.step (the custom tasks for this need to go in build.custom/analyzers)
    NOTE: If you include build.custom/analyzers/test.replace.step, the items below will not run.
    + mbunit2.step, gallio.step, or nunit.step (the custom tasks for these need to go in build.custom/analyzers)
  - ncover.step (the custom tasks for this need to go in build.custom/analyzers)
  - ndepend.step (the custom tasks for this need to go in build.custom/analyzers)
  - moma.step (the custom tasks for this need to go in build.custom/analyzers)
* package.step
  NOTE: If you include build.custom/package.replace.step, the items below will not run.
  - deploymentBuilder.step

Customize UppercuT Builds With PowerShell

UppercuT can now be extended with PowerShell (PS). To customize any extension point with PS, just add .ps1 to the end of the file name and write your custom tasks in PowerShell. If you are not signing your scripts, you will need to change a setting in the UppercuT.config file. This does impose a security risk, because it allows PS to run any PS script, and that setting stays that way on ANY machine that runs the build until manually changed by someone. I'm not responsible if you mess up your machine or anyone else's by doing this. You've been warned. Now that you are fully aware of any security holes you may open and are okay with that, let's move on.

Let's create a file called default.replace.build.ps1 in the build.custom folder.
Open that file in notepad and let's add this to it:

write-host "hello - I'm a custom task written in Powershell!"

Now, let's run build.bat. You could get some PSake action going here. I won't dive into that in this post though.

Customize UppercuT Builds With Ruby

If you want to customize any extension point with Ruby, just add .rb to the end of the file name and write your custom tasks in Ruby. Let's write a custom Ruby task for UC. If you were thinking it would be the same as the one we just wrote for PS, you'd be right! In the build.custom folder, let's create a file called default.replace.build.rb. Open that file in notepad and let's put this in there:

puts "I'm a custom ruby task!"

Now, let's run build.bat again. That's chunky bacon.

UppercuT and Albacore.NET

Just for fun, I wanted to see if I could replace the compile.step with a Rake task. Not just any rake task: Albacore's msbuild task. Albacore is a suite of rake tasks brought about by Derick Bailey to make building .NET with Rake easier. It has quite a bit of support from developers that are using Rake to build code. In my build.custom folder, I drop a compile.replace.step.rb. I also put in a separate file that will contain my Albacore rake task, and I call that compile.rb.

What are the contents of compile.replace.step.rb?

rake = 'rake'
arguments = '-f ' + Dir.pwd + '/../build.custom/compile.rb'
#puts "Calling #{rake} " + arguments
system("#{rake} " + arguments)

Since the custom extensions call Ruby, we have to shell back out and call rake. That's what we are doing here. We also realize that Ruby is called from the build folder, so we need to back out and dive into the build.custom folder to find the file that is technically next to us.

What are the contents of compile.rb?

require 'rubygems'
require 'fileutils'
require 'albacore'

task :default => [:compile]

puts "Using Ruby to compile UppercuT with Albacore Tasks"

desc 'Compile the source'
msbuild :compile do |msb|
  msb.properties = { :configuration => :Release, :outputpath => '../../build_output/UppercuT' }
  msb.targets [:clean, :build]
  msb.verbosity = "quiet"
  msb.path_to_command = 'c:/Windows/Microsoft.NET/Framework/v3.5/MSBuild.exe'
  msb.solution = '../uppercut.sln'
end

We are using the msbuild task here. We change the output path to the build_output/UppercuT folder. The output path has "../../" because it is evaluated relative to each project. We could grab the current directory and then point the task specifically at a folder if we have projects at different levels. We want the verbosity to be quiet, so we set that as well.

So what kind of output do you get for this? Let's run build.bat:

custom_tasks_replace:
     [echo] Running custom tasks instead of normal tasks if C:\code\uppercut\build\..\build.custom\compile.replace.step exists.
     [exec] (in C:/code/uppercut/build)
     [exec] Using Ruby to compile UppercuT with Albacore Tasks
     [exec] Microsoft (R) Build Engine Version 3.5.30729.4926
     [exec] [Microsoft .NET Framework, Version 2.0.50727.4927]
     [exec] Copyright (C) Microsoft Corporation 2007. All rights reserved.

If you think this is awesome, you'd be right!

With this knowledge you shall build.

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark concurrently with a batch workload.

- The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server, even though the IBM result did not include a batch component.
- The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server, as measured by the online component.
- The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component.
- The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server, as measured by the online component.
- The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time, running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute.

The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time.

JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server, leaving processing capacity for additional growth.

A number of advanced Oracle technologies and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java HotSpot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, and the SPARC T3 and SPARC64 VII+ based servers.

This is the first published result running both an online and a batch workload concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0. When comparing 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life Benchmark, Online with Batch Workload

This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine.
System | Rack Units | Online Users | Resp Time (sec) | Batch Concur (# of UBEs) | Batch Rate (UBEs/m) | Version
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10, plus M3000, 1 x SPARC64 VII+ (2.86 GHz), Solaris 10 | 4 | 5000 | 0.88 | 19 | 10 | 9.0.1

Resp Time (sec) - Response time of online jobs, reported in seconds
Batch Concur (# of UBEs) - Batch concurrency, presented as the number of UBEs
Batch Rate (UBEs/m) - Batch transaction rate in UBEs/minute

JD Edwards EnterpriseOne Day in the Life Benchmark, Online Workload Only

These results are for the Day in the Life benchmark, run without any batch workload.

System | Rack Units | Online Users | Response Time (sec) | Version
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10, plus M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10 | 4 | 5000 | 0.52 | 9.0.1
IBM Power 750, 1 x POWER7 (3.55 GHz), IBM i7.1 | 4 | 4000 | 0.61 | 9.0
IBM x3650 M2, 2 x Intel X5570 (2.93 GHz), OVM | 2 | 1000 | 0.29 | 9.0

IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

Configuration Summary

Hardware Configuration:

1 x SPARC T3-1 server:
- 1 x 1.65 GHz SPARC T3
- 128 GB memory
- 16 x 300 GB 10000 RPM SAS
- 1 x Sun Flash Accelerator F20 PCIe Card, 92 GB
- 1 x 10 GbE NIC

1 x SPARC Enterprise M3000 server:
- 1 x 2.86 GHz SPARC64 VII+
- 64 GB memory
- 1 x 10 GbE NIC
- 2 x StorageTek 2540 + 2501

Software Configuration:

- JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
- Oracle Database 11g Release 2
- Oracle WebLogic Server 11g Release 1, version 10.3.2
- Oracle Web Tier Utilities 11g
- Oracle Solaris 10 9/10
- Mercury LoadRunner 9.10 with the Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1
- Oracle's Universal Batch Engine - Short UBEs and Long UBEs

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and a UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times, and reports the key metric of Combined Weighted Average Transaction Response time.

The UBE workload runs from the JD Enterprise Application server. Oracle's UBE processes come in three flavors:

- Short UBEs (< 1 minute) engage in Business Report and Summary Analysis
- Mid UBEs (> 1 minute) create a large report of Account, Balance, and Full Address
- Long UBEs (> 2 minutes) simulate Payroll, Sales Order, and night-only jobs

The UBE workload generates large numbers of PDF reports and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 long UBEs, while another Container ran 15 short UBEs concurrently. The mixed-size UBEs ran concurrently on the SPARC T3-1 server with the 5000 online users driven by LoadRunner. Oracle's UBE process performance metric is the number of maximum concurrent UBE processes at a given transaction rate, in UBEs/minute.
Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Server 11g R1 instances, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server, were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers.

See Also

- SPARC T3-1 (oracle.com)
- SPARC Enterprise M3000 (oracle.com)
- Oracle Solaris (oracle.com)
- JD Edwards EnterpriseOne (oracle.com)
- Oracle Database 11g Release 2 Enterprise Edition (oracle.com)

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

  • Varnish POST problem "9 FetchError c backend write error: 11" for application/x-www-form-urlencoded content

    - by ompap
Cutting a longish story short, we have managed to get a more precise error out of varnishlog. Varnishlog tells us that we are sending a

31 TxRequest - POST
31 TxHeader - Content-Type: application/x-www-form-urlencoded

but we are getting

9 FetchError c backend write error: 11
31 BackendClose - [backend name]
9 VCL_call c error
9 VCL_return c deliver
9 Length c 488
9 VCL_call c deliver
9 VCL_return c deliver
9 TxProtocol c HTTP/1.1
9 TxStatus c 503

We still do not know what this is exactly, but apparently Content-Type: application/x-www-form-urlencoded is not getting through as it should. Help still needed, please!

Original message below. The title was "Varnish not letting Joomla users log in - 503 guru meditation error", but I changed it to draw more attention to the problem and not to the symptoms.

Hello,

We have a production site for a local newspaper which is currently behind an Apache reverse proxy, basically with the site on one server and the other reserved as a reverse proxy only (well, there is more, but that has no relevance here). Apache as a reverse proxy works, but could be faster. We want to change the reverse proxy to use Varnish instead of Apache on an Ubuntu 10.04 Server. Varnish is version 2.10, installed directly from the Ubuntu repos. Ubuntu 10.04 uses PHP 5.3.2.

For anonymous surfers the site works wonderfully with Varnish. So far we can get very good speed out of Varnish; we just have a few problems with logging in or out. The big one is that users cannot log in: they get a Varnish 503 error page every time. The logs do not reveal the cause. It feels as if the request never leaves Varnish. So we are merely guessing - not a strong starting point.

We have gone through what has been suggested in various places on the web. We have increased the timeouts to

backend xxx {
    .host = "xxx.xx";
    .port = "http";
    .connect_timeout = 60s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

but we seem to get the 503 guru error page much faster than that, as in approx. 5 seconds.

We have increased the Varnish header size to 128 in the daemon settings. In vcl_recv we have

if (req.http.Authenticate || req.http.Authorization) {
    return(pass);
}

and in vcl_fetch

## authentication handling
if (req.http.Authenticate || req.http.Authorization) {
    return(pass);
}

We do not strip cookies. We have tried to make sure that error pages are not cached. As said above, we cannot see anything in the backend Apache logs; apparently it never gets asked for Joomla user authentication.

Varnish does not seem to get much mention in connection with Joomla. (We cannot dump Joomla; that selection has been made and we just have to live with what we have been given.)

Does anyone have a working Varnish - Joomla combination?

Thanks for reading. Please help. We need some hints - desperately. Any suggestions?

ompap
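For readers comparing notes on the same symptom: one commonly suggested first check, offered here as a hypothetical sketch in Varnish 2.x VCL syntax and not something from the original question, is to pass POST requests (such as Joomla login form submissions) straight to the backend so they are never handled by the cache:

```
sub vcl_recv {
    # Never try to serve POSTs (e.g. the Joomla login form) from cache;
    # hand them straight to the backend.
    if (req.request == "POST") {
        return(pass);
    }
}
```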

    Read the article

  • SQL SERVER – SSMS: Disk Usage Report

    - by Pinal Dave
Let us start with humor! I think with this series on various reports, we have come to a logical point. We have covered all the reports at the server level, meaning the reports we saw were targeted at activities related to instance-level operations. These are mostly like how a doctor diagnoses a patient. At this point I am reminded of a dialog which I read somewhere:

Patient: Doc, it hurts when I touch my head.
Doc: Ok, go on. What else have you experienced?
Patient: It hurts even when I touch my eye, it hurts when I touch my arms, it even hurts when I touch my feet, etc.
Doc: Hmmm ...
Patient: I feel it hurts when I touch anywhere in my body.
Doc: Ahh ... now I get it. You need a plaster on your finger, John.

Sometimes the server level gives an indicator of what is happening in the system, but we need to get to the root cause for a specific database. So this is the first blog in the series where we start discussing database-level reports. To launch database-level reports, expand the selected server in Object Explorer, expand the Databases folder, and then right-click any database for which we want to look at reports. From the menu, select Reports, then Standard Reports, and then any of the database-level reports. In this blog, we will talk about four "disk" reports because they are similar:

Disk Usage
Disk Usage by Top Tables
Disk Usage by Table
Disk Usage by Partition

Disk Usage

This report shows multiple pieces of information about the database. Let us discuss them one by one. We have divided the output into 5 different sections.

Section 1 shows the high-level summary of the database. It shows the space used by the database files (mdf and ldf). Under the hood, the report uses various DMVs and DBCC commands; here it is using sys.data_spaces and DBCC SHOWFILESTATS.

Sections 2 and 3 are pie charts, one for data file allocation and another for the transaction log file. The pie chart for "Data Files Space Usage (%)" shows space consumed by data, by indexes, and unallocated space, which is allocated to the SQL Server database but not yet filled with anything. "Transaction Log Space Usage (%)" uses DBCC SQLPERF (LOGSPACE) and shows how much empty space we have in the physical transaction log file.

Section 4 shows the data from the Default Trace and looks at Event IDs 92, 93, 94 and 95, which are "Data File Auto Grow", "Log File Auto Grow", "Data File Auto Shrink" and "Log File Auto Shrink" respectively. If the default trace is not enabled, this section is replaced by the message "Trace Log is disabled".

Section 5 of the report uses DBCC SHOWFILESTATS to get its information. This shows the physical layout of the files.

In case you have In-Memory objects in the database (from SQL Server 2014), the report shows information about those as well; I have taken a screenshot for a different database which has an In-Memory table and highlighted the items only shown for an In-Memory-enabled database. The new sections use sys.dm_db_xtp_checkpoint_files, sys.database_files and sys.data_spaces. The new type for In-Memory OLTP is 'FX' in sys.data_spaces.

The next set of reports is targeted at information about a table and its storage. These reports can answer questions like:

Which is the biggest table in the database?
How many rows do we have in a table?
Is there any table which has a lot of reserved space but is unused?
Which partition of the table holds the most data?

Disk Usage by Top Tables

This report provides detailed data on the utilization of disk space by the top 1000 tables within the database. The report does not provide data for memory-optimized tables.

Disk Usage by Table

This report is the same as the previous report with a few differences:

The first report shows only the top 1000 rows.
The first report orders by values in the DMV sys.dm_db_partition_stats, whereas this one orders by the name of the table.

Both of the reports have an interactive sort facility: we can click on any column header and change the sorting order of the data.

Disk Usage by Partition

This report shows the distribution of the data in a table based on the partitions in the table. It is similar to the previous output, but with the partition details added. Here is the query taken from Profiler:

SELECT row_number() OVER (ORDER BY a1.used_page_count DESC, a1.index_id) AS row_number,
       (dense_rank() OVER (ORDER BY a5.name, a2.name)) % 2 AS l1,
       a1.OBJECT_ID,
       a5.name AS [schema],
       a2.name,
       a1.index_id,
       a3.name AS index_name,
       a3.type_desc,
       a1.partition_number,
       a1.used_page_count * 8 AS total_used_pages,
       a1.reserved_page_count * 8 AS total_reserved_pages,
       a1.row_count
FROM sys.dm_db_partition_stats a1
INNER JOIN sys.all_objects a2 ON (a1.OBJECT_ID = a2.OBJECT_ID)
   AND a1.OBJECT_ID NOT IN (SELECT OBJECT_ID FROM sys.tables WHERE is_memory_optimized = 1)
INNER JOIN sys.schemas a5 ON (a5.schema_id = a2.schema_id)
LEFT OUTER JOIN sys.indexes a3 ON ((a1.OBJECT_ID = a3.OBJECT_ID) AND (a1.index_id = a3.index_id))
WHERE (SELECT MAX(DISTINCT partition_number)
       FROM sys.dm_db_partition_stats a4
       WHERE (a4.OBJECT_ID = a1.OBJECT_ID)) >= 1
  AND a2.TYPE <> N'S' AND a2.TYPE <> N'IT'
ORDER BY a5.name ASC, a2.name ASC, a1.index_id, a1.used_page_count DESC, a1.partition_number

Using all of the above reports, you should be able to get the usage of the database files and also the space used by tables. I think this is already a lot of disk information for a single blog, and I hope you have used these reports in the past. Do let me know if you have found anything interesting using these reports in your environments.

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL
Tagged: SQL Reports
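If you want roughly the same numbers without opening SSMS, a simplified sketch of what "Disk Usage by Top Tables" reports can be queried directly. This is an approximation built only on documented DMVs, not the report's actual query:

-- Approximate "Disk Usage by Top Tables": space per user table,
-- largest reserved footprint first.
SELECT TOP (10)
       s.name AS [schema],
       o.name AS [table],
       SUM(CASE WHEN ps.index_id < 2 THEN ps.row_count ELSE 0 END) AS [rows], -- count heap/clustered rows once
       SUM(ps.reserved_page_count) * 8 AS reserved_kb,                        -- includes all indexes
       SUM(ps.used_page_count) * 8 AS used_kb
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.objects AS o ON o.object_id = ps.object_id
INNER JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type = 'U'                                                            -- user tables only
GROUP BY s.name, o.name
ORDER BY reserved_kb DESC;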

    Read the article

  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain. We saw how resources were put aside for guest systems and what infrastructure we need for them. With that, we are now ready to create a first, very simple guest domain. In this first example, we'll keep things very simple. Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO, as well as security.

For now, let's start with this very simple guest. It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port. CPU and RAM are easy. The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain. This is very much like plugging a cable into a computer system on one end and a network switch on the other. For the boot disk, we'll need two things: a physical piece of storage to hold the data - this is called the backend device in LDoms speak - and then a mapping between that storage and the guest domain, giving it access to that virtual disk. For this example, we'll use a ZFS volume for the backend. We'll discuss what other options there are for this, and how to choose the right one, in a later article. Here we go:

root@sun # ldm create mars
root@sun # ldm set-vcpu 8 mars
root@sun # ldm set-mau 1 mars
root@sun # ldm set-memory 8g mars
root@sun # zfs create rpool/guests
root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \
mars.root@primary-vds
root@sun # ldm add-vdisk root mars.root@primary-vds mars
root@sun # ldm add-vnet net0 switch-primary mars

That's all, mars is now ready to power on. There are just three commands between us and the OK prompt of mars: we have to "bind" the domain, start it and connect to its console. Binding is the process where the hypervisor actually puts all the pieces that we've configured together. If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else). Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP. By default, the domain will then try to boot right away. If we don't want that, we can set "auto-boot?" to false. Finally, we'll use telnet to connect to the console of our newly created guest. The output of "ldm list" shows us what port has been assigned to mars. By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here.

root@sun # ldm set-variable auto-boot\?=false mars
root@sun # ldm bind mars
root@sun # ldm start mars
root@sun # ldm list
NAME      STATE    FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
primary   active   -n-cv-  UART  8     7680M   0.5%  1d 4h 30m
mars      active   -t----  5000  8     8G      12%   1s
root@sun # telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
~Connecting to console "mars" in group "mars" ....
Press ~? for control options ..
{0} ok banner
SPARC T3-4, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.
{0} ok

We're done, mars is ready to install Solaris, preferably using AI, of course ;-) But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here:

{0} ok printenv auto-boot?
auto-boot? = false
{0} ok printenv boot-device
boot-device = disk net
{0} ok devalias
root             /virtual-devices@100/channel-devices@200/disk@0
net0             /virtual-devices@100/channel-devices@200/network@0
net              /virtual-devices@100/channel-devices@200/network@0
disk             /virtual-devices@100/channel-devices@200/disk@0
virtual-console  /virtual-devices/console@1
name             aliases

We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked. Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started. The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image. The actual devices these aliases point to are shown with the command "devalias". Here, we have one line for both "disk" and "net". The device paths speak for themselves. Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device. These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk". Remember this, as it is very useful once you have several dozen disk devices...

To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity. This should be enough to get you going. I will cover all the more advanced features and a little more theoretical background in several follow-on articles. For some background reading, I'd recommend the following links:

LDoms 2.2 Admin Guide: Setting up Guest Domains
Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session.
OpenBoot 4.x command reference - All the things you can do at the ok prompt
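As a hedged aside, reusing the mars guest and the "root" alias from the example above (untested here, but using only documented ldm and OBP commands): once Solaris is on the disk, you would typically point OBP at the boot disk alias and re-enable auto-boot from the control domain,

root@sun # ldm set-variable boot-device=root mars
root@sun # ldm set-variable auto-boot\?=true mars

or, interactively at the guest's console,

{0} ok setenv boot-device root
{0} ok boot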

    Read the article

  • Pfsense 2.1 OpenVPN can't reach servers on the LAN

    - by Lucas Kauffman
I have a small network set up like this: I have a pfSense box for connecting my servers to the WAN; they are using NAT from the LAN to the WAN. I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers. They connect through the WAN IP to the OVPN interface. The LAN interface also serves as the gateway for the servers to reach the internet and has an IP of 10.25.255.254. The OVPN interface and the LAN interface are bridged in BR0. Server A has an IP of 10.25.255.1 and is able to reach the internet. Client A connects through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within 10.25.0.0/16 for VPN clients). The firewall currently allows any-to-any connections from OVPN towards LAN and vice versa. Currently when I connect, all routes seem fine on the client side:

Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
300.300.300.300  0.0.0.0          255.255.255.0    U      0       0    0    eth0
10.25.0.0        10.25.255.254    255.255.0.0      UG     0       0    0    tap0
10.25.0.0        0.0.0.0          255.255.0.0      U      0       0    0    tap0
0.0.0.0          300.300.300.300  0.0.0.0          UG     0       0    0    eth0

I can ping the LAN interface:

root@server:# ping 10.25.255.254
PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data.
64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms
64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms
64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms
64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms
64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms
64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms

But I can't ping past the LAN interface:

root@server:# ping 10.25.255.1
PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data.
From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1)
From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1)

I ran a tcpdump on my em1 interface (the LAN interface, which has the IP 10.25.255.254):

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes
08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64
08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64
08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64
08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28
08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64

So traffic is reaching the LAN interface, but it is not getting past it: there is no answer from the 10.25.255.1 host, and the ARP requests for it go unanswered. I'm not sure what I'm missing.
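A hedged place to start (an assumption, since the full pfSense config isn't shown): the unanswered ARP requests suggest the bridged frames never reach Server A, which on pfSense/FreeBSD often comes down to bridge membership or the bridge packet-filter sysctls. Something along these lines, run on the pfSense box, would narrow it down:

# Confirm the OpenVPN tap interface actually joined the bridge
ifconfig bridge0

# FreeBSD filters bridged traffic according to these sysctls; with
# pfil_member=1 every member interface needs its own pass rules,
# with pfil_bridge=1 rules are applied on the bridge interface itself.
sysctl net.link.bridge.pfil_member net.link.bridge.pfil_bridge

# Watch whether the ARP requests are being forwarded out the LAN member
tcpdump -ni em1 arp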

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing subsecond response times while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server and a SPARC T4-2 server running Oracle WebLogic Server in the web tier.

The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

Performance Landscape

Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows a significant improvement in the payroll batch processing time with little impact on the self-service component response time.

PeopleSoft HRMS Self-Service and Payroll Benchmark
Systems                                               Users   Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)   18,000  0.988             0.539           32.4              128
SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)   18,000  0.944             0.503           43.3              64

The following results are for the PeopleSoft HRMS Self-Service benchmark that was run previously. The results are not directly comparable with the combined results because they do not include the payroll component.

PeopleSoft HRMS Self-Service 9.1 Benchmark
Systems                                                  Users   Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)   18,000  1.048             0.742           N/A               N/A

The following results are for the PeopleSoft Payroll benchmark that was run previously. The results are not directly comparable with the combined results because they do not include the self-service component.

PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)
Systems           Users  Avg Search (sec)  Avg Save (sec)  Batch Time (min)  Streams
SPARC T4-4 (db)   N/A    N/A               N/A             30.84             96

Configuration Summary

Application Configuration:
1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 512 GB memory
Oracle Solaris 11 11/11
PeopleTools 8.52
PeopleSoft HCM 9.1
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
Java Platform, Standard Edition Development Kit 6 Update 32

Database Configuration:
1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 256 GB memory
Oracle Solaris 11 11/11
Oracle Database 11g Release 2
PeopleTools 8.52
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
Micro Focus Server Express (COBOL v 5.1.00)

Web Tier Configuration:
1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz, 256 GB memory
Oracle Solaris 11 11/11
PeopleTools 8.52
Oracle WebLogic Server 10.3.4
Java Platform, Standard Edition Development Kit 6 Update 32

Storage Configuration:
1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz, 128 GB memory)
1 x Sun Storage F5100 Flash Array (80 flash modules)
1 x Sun Storage F5100 Flash Array (40 flash modules)
1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot Raid controller)

Benchmark Description

This benchmark combines the PeopleSoft HCM 9.1 HR Self Service online workload and the PeopleSoft Payroll batch workload, run against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark where the database SQLs are moderately complex. The results are certified by Oracle and a white paper is published.

PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, updating an employee profile and other typical HCM application transactions. All these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response search/save time for all the transactions.

The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

The benchmark metrics are taken for each respective benchmark while both run simultaneously on the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.
Key Points and Best Practices

Two PeopleSoft domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total); a configuration sketch along these lines follows at the end of this item. The default set (1 core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors), offloading I/O interrupt handling to default-set threads, freeing up CPU resources for application server threads, and balancing the application workload across 240 threads. A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

See Also
Oracle PeopleSoft Benchmark White Papers oracle.com
SPARC T4-2 Server oracle.com OTN
SPARC T4-4 Server oracle.com OTN
PeopleSoft Enterprise Human Capital Management oracle.com OTN
PeopleSoft Enterprise Human Capital Management (Payroll) oracle.com OTN
Oracle Solaris oracle.com OTN
Oracle Database 11g Release 2 oracle.com OTN

Disclosure Statement
Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.
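For readers who want to reproduce the zone/processor-set binding described under Key Points above, here is a minimal sketch using Solaris resource pools. The pool, pset and zone names are hypothetical, and the thread counts merely illustrate the approach; none of this is taken from the benchmark's actual configuration:

pooladm -e                                      # enable the resource pools facility
pooladm -s                                      # save the running config to /etc/pooladm.conf
poolcfg -c 'create pset appset (uint pset.min = 120; uint pset.max = 120)'
poolcfg -c 'create pool apppool'
poolcfg -c 'associate pool apppool (pset appset)'
pooladm -c                                      # activate the edited configuration
zonecfg -z appzone1 'set pool=apppool'          # bind the zone to the pool
zoneadm -z appzone1 reboot                      # reboot the zone to pick up the binding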

    Read the article

  • Creating an SMF service for mercurial web server

    - by Chris W Beal
I'm working on a project at the moment which has a number of contributors. We're managing the project gate (which is stand-alone) with Mercurial. We want an easy way of seeing the changelog, so we can show management what is going on. Luckily, Mercurial provides a basic web server which allows you to see the changes and drill into changesets. This can be run as a daemon, but as it was running on our build server, every time that was rebooted someone needed to remember to start the process again. This is of course a classic use case for SMF.

Now I'm not an experienced person at writing SMF services, so it took me half an hour or so to figure it out the first time. But going forward I should know what I'm doing a bit better. I did reference this doc extensively.

Taking a step back, the command to start the Mercurial web server is

$ hg serve -p <port number> -d

So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components:

The manifest: usually lives somewhere in /var/svc/manifest, but can be imported from any location.
The method: usually lives in /lib/svc/method (I simply put the script straight in that directory - not very repeatable, but it worked) and can take an argument of start, stop, or refresh.

Let's start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be per instance; that is to say, I could have multiple hg serve processes handling different Mercurial projects on different ports simultaneously. Here is the manifest I wrote (I stole extensively from the examples in the documentation):

$ cat hg-serve.xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='hg-serve'>
  <service name='application/network/hg-serve' type='service' version='1'>
    <dependency name='network' grouping='require_all' restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/network:default' />
    </dependency>
    <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' />
    <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2'>
    </exec_method>
    <instance name='project-gate' enabled='true'>
      <method_context>
        <method_credential user='root' group='root' />
      </method_context>
      <property_group name='hg-serve' type='application'>
        <propval name='path' type='astring' value='/src/project-gate'/>
        <propval name='port' type='astring' value='9998' />
      </property_group>
    </instance>
    <stability value='Evolving' />
    <template>
      <common_name>
        <loctext xml:lang='C'>hg-serve</loctext>
      </common_name>
      <documentation>
        <manpage title='hg' section='1' />
      </documentation>
    </template>
  </service>
</service_bundle>

So the only things I had to decide on in this are the service name "application/network/hg-serve", the start and stop methods (more on which later) and the properties. The properties hold the information I need to pass to the start method script: in my case, the port I want to start the web server on, "9998", and the path to the source gate, "/src/project-gate". These can be read by the start method. So now let's look at the method script:

$ cat /lib/svc/method/hg-serve
#!/sbin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
# Standard prolog
#
. /lib/svc/share/smf_include.sh

if [ -z "$SMF_FMRI" ]; then
        echo "SMF framework variables are not initialized."
        exit $SMF_EXIT_ERR
fi

#
# Build the command line flags
#
# Get the port and directory from the SMF properties
port=`svcprop -c -p hg-serve/port $SMF_FMRI`
dir=`svcprop -c -p hg-serve/path $SMF_FMRI`

case "$1" in
'start')
        cd $dir
        /usr/bin/hg serve -d -p $port
        ;;
*)
        echo "Usage: $0 {start|refresh|stop}"
        exit 1
        ;;
esac
exit $SMF_EXIT_OK

This is all pretty self-explanatory: we read the port and directory using svcprop, and use those to run a command in the start case. (Note that I've quoted $SMF_FMRI in the -z test; the original unquoted form is fragile shell.) We don't need to implement a stop case, as the manifest says to use exec=':kill' for the stop method.

Now all we need to do is import the manifest and start the service - but first, verify the manifest:

# svccfg verify /path/to/hg-serve.xml

If that doesn't give an error, try importing it:

# svccfg import /path/to/hg-serve.xml

If, like me, you originally put the hg-serve.xml file somewhere under /var/svc/manifest, you'll get an error and be told to restart the import service:

svccfg: Restarting svc:/system/manifest-import
The manifest being imported is from a standard location and should be imported with the command : svcadm restart svc:/system/manifest-import

# svcadm restart svc:/system/manifest-import

and you're nearly done. You can look at the service using svcs -l:

# svcs -l hg-serve
fmri         svc:/application/network/hg-serve:project-gate
name         hg-serve
enabled      false
state        disabled
next_state   none
state_time   Thu May 31 16:11:47 2012
logfile      /var/svc/log/application-network-hg-serve:project-gate.log
restarter    svc:/system/svc/restarter:default
contract_id  15749
manifest     /var/svc/manifest/network/hg/hg-serve.xml
dependency   require_all/none svc:/milestone/network:default (online)

And look at the interesting properties:

# svcprop hg-serve
hg-serve/path astring /src/project-gate
hg-serve/port astring 9998
...stuff deleted....

Then simply enable the service, and if everything has gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity.

# svcadm enable hg-serve
# svcs -l hg-serve
fmri         svc:/application/network/hg-serve:project-gate
name         hg-serve
enabled      true
state        online
next_state   none
state_time   Thu May 31 16:18:11 2012
logfile      /var/svc/log/application-network-hg-serve:project-gate.log
restarter    svc:/system/svc/restarter:default
contract_id  15858
manifest     /var/svc/manifest/network/hg/hg-serve.xml
dependency   require_all/none svc:/milestone/network:default (online)

None of this is rocket science, but it is a bit fiddly. Hence I thought I'd blog it. It might just be that you see this in Google and it clicks with you more than one of the many other blogs or how-tos about it. Plus I can always refer back to it myself in 3 weeks, when I want to add another project to the server and I've forgotten how to do it.
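Since the manifest keeps path and port as per-instance properties, adding that next project shouldn't need a new manifest at all. Something along these lines should work - a hedged sketch, where the instance name "other-gate", its path and port 9999 are all made-up values, not from the original setup:

# svccfg -s svc:/application/network/hg-serve
svc:/application/network/hg-serve> add other-gate
svc:/application/network/hg-serve> select other-gate
svc:/application/network/hg-serve:other-gate> addpg hg-serve application
svc:/application/network/hg-serve:other-gate> setprop hg-serve/path = astring: "/src/other-gate"
svc:/application/network/hg-serve:other-gate> setprop hg-serve/port = astring: "9999"
svc:/application/network/hg-serve:other-gate> exit
# svcadm enable svc:/application/network/hg-serve:other-gate

Each instance created this way inherits the service-level method scripts, so the same /lib/svc/method/hg-serve start script serves every project.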

    Read the article
