Search Results

Search found 2454 results on 99 pages for 'domains'.


  • Conflicts with file from package mysql-5.0.77

    - by Whiteyq
    I'm trying to install APC (Alternative PHP Cache) on a CentOS dedicated server. Everything is done apart from configuring phpize. Running: yum -y install php-devel gives me the following error: file /usr/share/mysql/charsets/Index.xml from install of mysql-libs-5.1.57-1.el5.art.x86_64 conflicts with file from package mysql-5.0.77-4.el5_5.3.i386 (and so on for the other language charset files). So I think the MySQL version I have is too old and I more than likely need to upgrade MySQL to version 5.1. I'm reluctant to do this because a) it's a live server (although only 3/4 domains) and b) I've read I'll need to recompile PHP if I upgrade. On top of that, I have Plesk installed for managing domains, which might also need reinstalling/reconfiguring. Sorry for the long intro, but it's my first post and it's best to give as much info as possible. So my question is basically: is there any way I can run yum -y install php-devel and get phpize working to complete the APC installation with the version of MySQL I currently have installed, i.e. 5.0.77?
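
    A possible workaround to investigate (a sketch, not from the original thread): the conflict is between the third-party x86_64 mysql-libs-5.1 package and the stock i386 mysql-5.0.77 package, so telling yum not to pull in the 5.1 libraries at all sometimes lets php-devel install against the MySQL you already have. The package and repo names below come from the error message and are assumptions; check what is actually installed first.

        # See exactly which MySQL packages (and architectures) are installed.
        rpm -qa --qf '%{NAME}-%{VERSION}.%{ARCH}\n' | grep -i mysql

        # One-off: install php-devel while excluding the conflicting 5.1 libraries.
        yum -y install php-devel --exclude=mysql-libs*

        # More permanent: add an exclude line to the third-party repo file that ships
        # mysql-libs-5.1 (the .art suffix suggests an Atomic repo; the file name varies):
        #   /etc/yum.repos.d/atomic.repo  ->  exclude=mysql*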


  • Apache Cache with multiple CacheRoots

    - by Tobias Greitzke
    I configured Apache with a CacheRoot directory for each of my domains / virtual hosts: <VirtualHost> ServerName domain1.tld ... CacheRoot /var/www/vhosts/domain1.tld/httpdocs/cache ... </VirtualHost> <VirtualHost> ServerName domain2.tld ... CacheRoot /var/www/vhosts/domain2.tld/httpdocs/cache ... </VirtualHost> I have had this up and running for quite a while, and so far it's working pretty well, except that I have to empty out the cache manually every so often because htcacheclean doesn't know about the different directories. Now I would like to set up htcacheclean to watch over the cache directories, but as far as I understand the manual, I can only point it at one cache directory. I would like to do something like this, but it doesn't work: <VirtualHost> ServerName domain1.tld ... CacheRoot /var/www/vhosts/domain1.tld/httpdocs/cache htcacheclean -n -t -p/var/www/vhosts/domain1.tld/httpdocs/cache -l1024M ... </VirtualHost> Is it even right to have multiple cache directories, or should I work with just one cache directory for all of the domains?
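
    For what it's worth (a sketch, not from the original thread): htcacheclean never reads CacheRoot from httpd.conf; it is a standalone tool that has to be pointed at a directory, so one workable approach is simply to run one instance per cache root, either as daemons or from cron. The paths reuse the ones from the question; adjust the size limits and intervals to taste.

        # One daemonized htcacheclean per CacheRoot (-d = wake-up interval in minutes,
        # -n = run nicely, -t = delete empty directories, -l = total size limit).
        htcacheclean -d60 -n -t -p/var/www/vhosts/domain1.tld/httpdocs/cache -l1024M
        htcacheclean -d60 -n -t -p/var/www/vhosts/domain2.tld/httpdocs/cache -l1024M

        # Cron equivalent, one cleanup pass per hour for each directory:
        # 0 * * * * htcacheclean -n -t -p/var/www/vhosts/domain1.tld/httpdocs/cache -l1024M
        # 0 * * * * htcacheclean -n -t -p/var/www/vhosts/domain2.tld/httpdocs/cache -l1024M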


  • mod_rewrite and Apache questions

    - by John
    We have an interesting situation in relation to some help desk software that we are trying to setup. This is a web based software application that allows customers and staff to log into it and access tickets and supply updates, etc. The challenge we are having deals with the two different domains that we use and the mod_rewrite rules to make it all work with our SSL certificate that is only bound to one of the domains. I will list the use case scenarios below and the challenges that we are having. If you access http://support.domain1.com/support then it redirects fine to https://support.domain2.com/support If you access http://support.domain2.com/support then it redirects fine to https://support.domain2.com/support If you access https://support.domain1.com/support then it throws an error of "server cannot be found" If you access https://support.domain1.com/support/ after having visited https://support.domain2.com/support then you are presented with a "this connection is untrusted" error about the certificate only being valid for the domain2 domain instead of the domain1 domain name I have tried just about every mod_rewrite rule that I can think of to help make this work and I have not been able to locate the correct combination. I was curious if anyone had some ideas on how to make the redirects work correctly. In the end, we are needing all customers and staff to land at https://support.domain2.com/support regardless of the previous URL combinations that they enter, like listed above. Thanks in advance for your help with this.
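
    Two hedged observations, not from the original thread. "Server cannot be found" on https://support.domain1.com usually means nothing is answering on port 443 for that name (no VirtualHost listening, or no DNS record), so mod_rewrite never gets a chance to run. And the certificate warning cannot be fixed by rewrite rules at all: the TLS handshake, and therefore the certificate check, happens before Apache ever sees the request, so the only clean fix is a certificate that also covers domain1 (a SAN/UCC certificate or a second certificate). With that caveat, a redirect-only vhost for the old name might look like this; the host names come from the question, and the certificate paths are placeholders.

        # Port 443 vhost for support.domain1.com: present whatever certificate is
        # available, then bounce everything to the canonical host. Browsers will
        # still warn unless the certificate lists support.domain1.com as well.
        <VirtualHost *:443>
            ServerName support.domain1.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/support.domain2.com.crt
            SSLCertificateKeyFile /etc/ssl/private/support.domain2.com.key
            Redirect permanent / https://support.domain2.com/
        </VirtualHost>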


  • Linq-to-XML query to select specific sub-element based on additional criteria

    - by BrianLy
    My current LINQ query and example XML are below. What I'd like to do is select the primary email address from the email-addresses element into the User.Email property. The type element under the email-address element is set to primary when this is true. There may be more than one element under the email-addresses but only one will be marked primary. What is the simplest approach to take here? Current Linq Query (User.Email is currently empty): var users = from response in xdoc.Descendants("response") where response.Element("id") != null select new User { Id = (string)response.Element("id"), Name = (string)response.Element("full-name"), Email = (string)response.Element("email-addresses"), JobTitle = (string)response.Element("job-title"), NetworkId = (string)response.Element("network-id"), Type = (string)response.Element("type") }; Example XML: <?xml version="1.0" encoding="UTF-8"?> <response> <response> <contact> <phone-numbers/> <im> <provider></provider> <username></username> </im> <email-addresses> <email-address> <type>primary</type> <address>[email protected]</address> </email-address> </email-addresses> </contact> <job-title>Account Manager</job-title> <type>user</type> <expertise nil="true"></expertise> <summary nil="true"></summary> <kids-names nil="true"></kids-names> <location nil="true"></location> <guid nil="true"></guid> <timezone>Eastern Time (US &amp; Canada)</timezone> <network-name>Domain</network-name> <full-name>Alice</full-name> <network-id>79629</network-id> <stats> <followers>2</followers> <updates>4</updates> <following>3</following> </stats> <mugshot-url> https://assets3.yammer.com/images/no_photo_small.gif</mugshot-url> <previous-companies/> <birth-date></birth-date> <name>alice</name> <web-url>https://www.yammer.com/domain.com/users/alice</web-url> <interests nil="true"></interests> <state>active</state> <external-urls/> <url>https://www.yammer.com/api/v1/users/1089943</url> <network-domains> <network-domain>domain.com</network-domain> </network-domains> <id>1089943</id> <schools/> <hire-date nil="true"></hire-date> <significant-other nil="true"></significant-other> </response> <response> <contact> <phone-numbers/> <im> <provider></provider> <username></username> </im> <email-addresses> <email-address> <type>primary</type> <address>[email protected]</address> </email-address> </email-addresses> </contact> <job-title>Office Manager</job-title> <type>user</type> <expertise nil="true"></expertise> <summary nil="true"></summary> <kids-names nil="true"></kids-names> <location nil="true"></location> <guid nil="true"></guid> <timezone>Eastern Time (US &amp; Canada)</timezone> <network-name>Domain</network-name> <full-name>Bill</full-name> <network-id>79629</network-id> <stats> <followers>3</followers> <updates>1</updates> <following>1</following> </stats> <mugshot-url> https://assets3.yammer.com/images/no_photo_small.gif</mugshot-url> <previous-companies/> <birth-date></birth-date> <name>bill</name> <web-url>https://www.yammer.com/domain.com/users/bill</web-url> <interests nil="true"></interests> <state>active</state> <external-urls/> <url>https://www.yammer.com/api/v1/users/1089920</url> <network-domains> <network-domain>domain.com</network-domain> </network-domains> <id>1089920</id> <schools/> <hire-date nil="true"></hire-date> <significant-other nil="true"></significant-other> </response> </response>
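
    One possible fix, sketched against the sample XML above (it assumes the same User class and xdoc from the question): email-addresses sits under contact rather than directly under response, which is presumably why Email comes back empty, so the query needs to drill down to the email-address element whose type is "primary" and take its address.

        var users = from response in xdoc.Descendants("response")
                    where response.Element("id") != null
                    select new User
                    {
                        Id = (string)response.Element("id"),
                        Name = (string)response.Element("full-name"),
                        // Walk down to the email-address entries and keep the one marked "primary".
                        Email = response.Descendants("email-address")
                                        .Where(e => (string)e.Element("type") == "primary")
                                        .Select(e => (string)e.Element("address"))
                                        .FirstOrDefault(),
                        JobTitle = (string)response.Element("job-title"),
                        NetworkId = (string)response.Element("network-id"),
                        Type = (string)response.Element("type")
                    };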


  • VirtualHost entries get overwritten when the Apache httpd.conf is rebuilt

    - by Amitabh
    Background: We have been trying to get a wildcard SSL working on multiple sub domains on a single dedicated address.. We have two sub domains next.my-personal-website.com and blog.my-personal-website.com Part of our strategy has been to edit the httpd.conf and add the NameVirtualHost xx.xx.144.72:443 directive and the virtualhost entries for port 443 for the subdomains there. This works good if we just edit the httpd.conf, add the entries, save it and restart the apache. The problem: But if we add a new sub domain from cpanel or we run the # /usr/local/cpanel/bin/apache_conf_distiller --update # /scripts/rebuildhttpdconf the virtualhost entries that we added manually are no more there in the newly generated httpd.conf file. Only the virtualhost entry for the main domain for port 443 that was there before we made edits to the httpd.conf is there(assuming we are not discussing virtualhost entries for port 80). I understand we need to put the new virtualhost entries in some include files as mentioned here in the cpanel documentation. But am not sure where to. So the question would be where do I put the NameVirtualHost xx.xx.144.72:443 directive and the two virtualhost directive for port 443, so that they are not overwritten when httpd.conf is rebuilt/regenerated later. Virtualhost entries: The two virtualhost entries for the subdomains are: <VirtualHost xx.xx.144.72:443> ServerName next.my-personal-website.com ServerAlias www.next.my-personal-website.com DocumentRoot /home/myguardi/public_html/next.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/next.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/next.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/next.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and <VirtualHost xx.xx.144.72:443> ServerName blog.my-personal-website.com ServerAlias www.blog.my-personal-website.com DocumentRoot /home/myguardi/public_html/blog.my-personal-website.com ServerAdmin [email protected] UseCanonicalName On CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com combined CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." 
## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/blog.my-personal-website.com/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/blog.my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> and the automatically generated virtualhost entry for the main domain for port 443 is <VirtualHost xx.xx.144.72:443> ServerName my-personal-website.com ServerAlias www.my-personal-website.com DocumentRoot /home/myguardi/public_html ServerAdmin [email protected] UseCanonicalName Off CustomLog /usr/local/apache/domlogs/my-personal-website.com combined CustomLog /usr/local/apache/domlogs/my-personal-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ." ## User myguardi # Needed for Cpanel::ApacheConf <IfModule mod_suphp.c> suPHP_UserGroup myguardi myguardi </IfModule> <IfModule !mod_disable_suexec.c> SuexecUserGroup myguardi myguardi </IfModule> ScriptAlias /cgi-bin/ /home/myguardi/public_html/cgi-bin/ SSLEngine on SSLCertificateFile /etc/ssl/certs/my-personal-website.com.crt SSLCertificateKeyFile /etc/ssl/private/my-personal-website.com.key SSLCACertificateFile /etc/ssl/certs/my-personal-website.com.cabundle CustomLog /usr/local/apache/domlogs/my-personal-website.com-ssl_log combined SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/home/myguardi/public_html/cgi-bin"> SSLOptions +StdEnvVars </Directory> # To customize this VirtualHost use an include file at the following location # Include "/usr/local/apache/conf/userdata/ssl/2/myguardi/my-personal-website.com/*.conf" I really appreciate if somebody can tell me how to proceed on this. Thank you. Update: Include directives present are: `Include "/usr/local/apache/conf/includes/pre_main_global.conf" Include "/usr/local/apache/conf/includes/pre_main_2.conf" Include "/usr/local/apache/conf/php.conf" Include "/usr/local/apache/conf/includes/errordocument.conf" Include "/usr/local/apache/conf/modsec2.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/pre_virtualhost_2.conf" ` These are the entries that are generated before any virtualhost entry is defined. Towards the end of the httpd.conf file , the following two entries are added Include "/usr/local/apache/conf/includes/post_virtualhost_global.conf" Include "/usr/local/apache/conf/includes/post_virtualhost_2.conf" The older httpd.conf file before we added the virtualhost entries for sub domains for port 443 can be viewed here
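
    Not an authoritative cPanel answer, just a sketch based on the Include lines quoted in the update: the files under /usr/local/apache/conf/includes/ are pulled back into every regenerated httpd.conf, so directives placed there survive apache_conf_distiller / rebuildhttpdconf. Moving the hand-written NameVirtualHost line and the two port-443 VirtualHost blocks into the pre_virtualhost include (instead of editing httpd.conf directly) may therefore keep them across rebuilds; test on a copy before doing this on the live box.

        # Keep the manual SSL vhosts in an include that the distiller does not rewrite.
        vi /usr/local/apache/conf/includes/pre_virtualhost_global.conf
        #   paste: NameVirtualHost xx.xx.144.72:443
        #          <VirtualHost xx.xx.144.72:443> ... next.my-personal-website.com ... </VirtualHost>
        #          <VirtualHost xx.xx.144.72:443> ... blog.my-personal-website.com ... </VirtualHost>

        # Regenerate httpd.conf and confirm the include (and the vhosts) survive.
        /scripts/rebuildhttpdconf
        service httpd restart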


  • Announcing: Great Improvements to Windows Azure Web Sites

    - by Leniel Macaferi
    I'm excited to announce some great improvements to Windows Azure Web Sites, which we first introduced earlier this summer. Today's improvements include: a new low-cost shared scaling option, custom domain support for websites running in shared or reserved mode using both CNAME and A-records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility. All of these improvements are now live in production and available to use immediately.

    New "Shared" Scaling Tier. Windows Azure lets you deploy and host up to 10 websites in a free, shared, multi-tenant environment. You can start developing and testing websites at no cost using this free shared mode, which supports running sites that serve up to 165 MB/day of content (5 GB/month). All of the capabilities we introduced in June with this free tier remain unchanged with today's update. Starting with today's release, you can now elastically scale a website beyond this capacity using a new low-cost "shared" option (which we are introducing today), as well as the "reserved instance" option we have supported since June. Scaling to either of these modes is easy: just click the "scale" tab of your website within the Windows Azure Portal, choose the hosting mode you want, and click "Save". Changes take only seconds to apply, require no code changes, and do not require the application to be redeployed. Below are more details about the new "shared" option as well as the existing "reserved" option.

    Shared Mode. With today's release we are introducing a new low-cost "shared" hosting mode for Windows Azure Web Sites. A website running in shared mode is deployed into a hosting environment shared with several other applications. Unlike the free mode, a website in shared mode has no quota/cap on the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared website is free; beyond that you pay the standard pay-as-you-go rate for Windows Azure outbound bandwidth. A website running in shared mode also now supports mapping multiple custom DNS domain names to it, using both CNAMEs and A-records. The new A-record support we are introducing with today's release lets you use "naked domains" (domains without the www) with your websites (for example, http://microsoft.com in addition to http://www.microsoft.com). We will also enable SNI-based SSL as a built-in feature of websites running in shared mode in the future (this is not supported with today's release, but will arrive later this year for both the shared and reserved hosting options). You pay for a shared-mode website using the standard pay-as-you-go model we support with other Windows Azure resources (no upfront cost, and you pay only for the hours the resource is active). A shared-mode website costs only 1.3 cents/hour during this preview period (which averages out to about $9.36/month, or roughly R$19.00/month at the 17 September 2012 exchange rate of R$2.03 to the dollar).

    Reserved Mode. In addition to running sites in shared mode, we also support running them within a reserved instance. When running in reserved instance mode, your sites are guaranteed to run isolated within your own Small, Medium or Large VM (virtual machine), meaning no other Windows Azure customer's applications run inside that VM, only yours do. You can run any number of websites inside a VM, and there are no CPU or memory quotas. You can run your sites on a single reserved-instance VM or scale out to multiple instances (for example, 2 medium VMs, etc.). Scaling up or down is easy: just select the "reserved" instance option on the "scale" tab in the Windows Azure Portal, choose the VM size and the number of instances you want, and click save. The changes take effect in seconds. Unlike shared mode, there is no per-site cost in reserved mode. Instead, you pay only for the reserved VM instances you use, and you can run any number of websites inside them at no extra cost (for example, a single site inside a reserved VM instance or 100 websites inside it cost the same). Reserved instance VMs start at 8 cents/hour (roughly R$0.16/hour) for a small reserved VM.

    Elastic Scale Up/Down. Windows Azure Web Sites let you scale capacity up or down in seconds. This allows you to start by deploying a site with the shared mode option and then dynamically scale up to the reserved mode option only when you need it, without changing any code or redeploying your application. If your site's traffic drops, you can reduce the number of reserved instances you are using, or move back to the shared tier, all in seconds and without having to change code, redeploy the application, or adjust DNS mappings. You can also use the Dashboard within the Windows Azure Portal to monitor your site's load in real time (it shows not only requests/second and bandwidth consumed, but also statistics such as CPU and memory utilization). Because of Windows Azure's pay-as-you-go pricing model, you pay only for the compute capacity you use in a given hour. So if your site runs most of the month in shared mode (at 1.3 cents/hour, roughly R$0.0264/hour), but there is a weekend when it becomes very popular and you decide to scale up to reserved mode so that it runs in its own dedicated VM (at 8 cents/hour, roughly R$0.16/hour), you only pay the additional cents/hour for the hours the site runs in reserved mode. There is no upfront cost to enable this, and once you move the site back to shared mode you go back to paying 1.3 cents/hour. This makes the option extremely flexible and low cost.

    Improved Custom Domain Support. Websites running in "shared" or "reserved" mode can have custom host names associated with them (for example www.mysitename.com), and you can associate multiple custom domains with each Windows Azure Web Site. With today's release we are introducing A-record support (a heavily requested feature). With A-record support you can now associate 'naked' domains with your Windows Azure Web Site; that is, instead of having to use www.mysitename.com you can simply use mysitename.com (without the www prefix). Because you can map multiple domains to a single site, you can optionally enable both the www and the naked domain for a site (and then use a URL rewrite/redirect rule to avoid SEO issues). We have also improved the user interface for managing custom domains within the Windows Azure Portal as part of today's release. Clicking the "Manage Domains" button in the tray at the bottom of the portal now brings up a dedicated UI that makes it easy to manage/configure domains. As part of this update we have also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to move existing sites/domains over to Windows Azure Web Sites without any downtime.

    Continuous Deployment Support with Git and CodePlex or GitHub. One of the most popular features we shipped earlier this summer was support for publishing websites directly to Windows Azure using source control systems such as TFS and Git. This provides a very powerful way to manage application deployments using source control, and it is really easy to enable from a website's dashboard page. The TFS option we shipped earlier this summer provides a rich continuous deployment solution that lets you automate builds and run unit tests every time you check in to your website's repository and then, if the tests pass, automatically publish/deploy the application to Windows Azure. With today's release we are expanding our Git support to also enable continuous deployment scenarios by integrating with projects hosted on CodePlex and GitHub. This support is enabled for all websites (including those using the "free" mode). Starting today, when you choose the "Set up Git publishing" link on a website's dashboard page, you will see two additional options once Git-based publishing is enabled for the website: you can click either the "Deploy from my CodePlex project" or the "Deploy from my GitHub project" link to follow a simple walkthrough that sets up a connection between your website and a repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a check-in occurs, and Windows Azure will then download the code and compile/deploy the new version of your application automatically. The following two videos (in English) show how easy it is to enable this workflow by deploying an initial app and then making a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes), and Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes). This approach enables a really clean continuous deployment workflow and makes it much easier to support a team development environment using Git. Note: today's release supports connections to public GitHub/CodePlex repositories; support for private repositories will be enabled in a few weeks.

    Support for Multiple Branches. Previously we only supported deploying the code located in the 'master' branch of the Git repository. Developers often want to deploy from other branches, though (for example, a test branch or a branch containing a future version of the application). This is now a supported scenario, both with local Git-based projects and with projects linked to CodePlex or GitHub, and it enables a variety of useful scenarios. For example, you can now have two websites, one for "production" and one for "testing", both linked to the same repository on CodePlex or GitHub. You can configure one website to always pull whatever is in the master branch and the other to always pull whatever is in the test branch. This provides a very clean way to do final testing of your site before it goes into production. This 1-minute video (in English) demonstrates how to configure which branch a website uses.

    Summary. The features above are now live in production and available to use immediately. If you don't yet have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center (in English) to learn more about building applications for the cloud. We have even more new features and improvements coming over the next few weeks, including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye on my blog for details as these new features become available. Hope this helps, - Scott. P.S. In addition to the blog, I am also using Twitter for quick updates and to share links.
    Follow me at: twitter.com/ScottGu. Translated from the original post by Leniel Macaferi.
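
    As a purely illustrative sketch of the CNAME plus A-record combination described above: the IP address and host names below are placeholders (the real values come from the portal's "Manage Domains" dialog and from your own *.azurewebsites.net site name), so treat this as a shape, not a recipe.

        ; Naked domain via an A-record, www via a CNAME to the azurewebsites.net host.
        mysitename.com.        IN  A      203.0.113.25
        www.mysitename.com.    IN  CNAME  mysitename.azurewebsites.net.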


  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show with installing an ODEE on a green field server. I walked you through the basic installation of Oracle 11g database In part 2, I covered the installation of WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that, is the deployment of the ODEE components onto the database and application server - so let's get to it! DATABASE First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to login to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g.Run SQLPLUS as SYSDBA and execute dmkr_admin.sql:  sqlplus / as sysdba @dmkr_admin.sql Execute dmkr_asline.sql, dmkr_admin_correspondence_example.sql.  If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish). APPLICATION SERVER Next, we'll deploy the WebLogic domain and it's components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to login to the application server in your green field environment. 1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set location of MIDDLEWARE_HOME and ODEE HOME. If you have used the defaults you’ll probably need to change the E: to C: and that’s it. Save the changes.3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you’ll probably need to just change E: to C: and that’s it. Save the changes.4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors and correct them, and rerun the script.5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again this should run to completion. 6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.  a. Note: if you saw database connection errors, you probably didn’t make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and login with the the weblogic credential you provided in the ODEE installation process. b. Once you're logged in, open Services?Data Sources. Select dmkr_admin and click Connection Pool.  c. The end of the URL should match the connection type you chose. 
If you chose ServiceName, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<serviceName> and if you chose SID, the URL should be: jdbc:oracle:thin:@//<hostname>:1521/<SIDname> d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com". (It does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER. e. Save the change and repeat for the data source dmkr_asline.  f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines and make the appropriate change, then save the file.  7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd. 8. In the same directory, execute create_users_groups_correspondence_example.cmd. 9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output like this: 10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials to start the server. You can prevent this by providing the credential in the startManagedwebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credential will be stored in cleartext. To start the server, type in the command shown. a. Start the JMS Server: ./startManagedWebLogic.cmd jms_server b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server SOA Composites  If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.1. Stop the servers listed in the previous section (Step 10) in the reverse order that they were started.2. Run the Domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.3. Select Extend and click next. 4. Select the iDocumaker Domain and click Next. 5. Select the Oracle SOA Suite – 11.1.1.0 (this may automatically select other components which is OK). Click Next. 6. View the Configure JDBC resources screen. You should not make any changes. Click Next. 7. Check both connections and click Test Connections. After successful test, click Next. If the tests fail, something is broken. Go back to configure JDBC resources and check your service name/SID. 8. Check all schemas. Set a password (will be the same for all schemas). Enter the database information (service name, host name, port). Click Next. 9. Connections should test successfully. If not, go back and fix any errors. Click Next. 10. Click Next to pass through Optional Configuration. 11. Click Extend. 12. Click Done. 13. Open a terminal window and navigate to/execute: ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd14. Start the WebLogic Servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see the previous section Step 10. 
Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again. 15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server116. Open a browser to http://localhost:7001/console and login. 17. Navigate to Services?Data Sources and select DMKR_ASLINE. 18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source. 19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd. That's it! (As if that wasn't enough?) DOCUMAKER Deploy the sample MRL resources by navigating to/executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory Services. Start?Run?services.msc. Locate the service named "ODDF xxxx" and right-click, select Start. Note that each Assembly Line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line, so this allows for easy scripting to control all the services if you choose to do so. Repeat for the Docupresentment service. Note that each Assembly Line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input and select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is ok. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from.  So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities: clustering WebLogic services configuring WebLogic for redundancy configuring Oracle 11g for RAC adding additional Factory servers for redundancy/processing capacity setting up a real MRL (instead of the sample resources) testing Documaker Web Services for job submission and more!  I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy 
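
    For convenience, here is the database portion of the walkthrough above collapsed into a single console session. The paths follow the ODEE_HOME convention used in the post; the Spanish script is only needed if you want that additional language.

        cd /d %ODEE_HOME%\documaker\database\oracle11g

        sqlplus / as sysdba @dmkr_admin.sql
        sqlplus / as sysdba @dmkr_asline.sql
        sqlplus / as sysdba @dmkr_admin_correspondence_example.sql

        rem Optional additional languages, e.g. Spanish:
        rem sqlplus / as sysdba @dmkr_asline_es.sql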


  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS? Every Microsoft-oriented infrastructure in today's enterprises depends largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory makes life for IT personnel and users alike a lot easier; centralised management and the abilities it opens up are ample. But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all the objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another.

    The domain name services. The domain naming service (or DNS for short) translates readable, easy-to-understand names into the IP addresses that identify each computer in your domain (and back). This service is a prerequisite for ADDS to work, and wrong records in a DNS server will make ADDS services fail. Generally speaking the DNS service will run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box…

    Where to start? If the aim is a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause headaches down the line. It is for that reason that I like to start building from the bottom up: begin with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure… Whenever starting an ADDS deployment I ask the client the following questions: What are your long-term plans and goals? How flexible do you want it? Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?

    Those three questions should give some indication of what direction can be taken, and of whether the client has thought about some of these things themselves :).

    The technical side of things. What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services. Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows: the network (WAN/LAN links and physical sites), DNS namespacing, whether to keep everything in one domain or split it up into different domains/forests, and security (both for ADDS and the physical sites).

    The network side of things. Looking at how the network is currently set up can teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages.

    DNS namespacing. Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that for some reason, a lot of services are going to give you good grief in the future (Exchange being one of them). So one of the best practices is to always use a two-part name (contoso.com or contoso.lan, for example).

    Internal namespace. An internal namespace is just what it sounds like: you have a DNS domain internally that is different from the client's external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has the advantage of more obscurity from the internet on the DNS side, but it requires additional work to publish services to the web.

    External namespace. Quite like the internal namespace, only here you do not differentiate the company's internal namespace from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network; in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things.

    Split DNS. While an external namespace design is fairly easy, it involves a lot of security risks: opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. That is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would use different DNS servers for lookups from the external network, and those servers have no records of your internal resources unless you explicitly publish them.

    All in one or not? In determining your Active Directory design you can look at the following possibilities: single forest, single domain; single forest, multiple domains; multiple forests, multiple domains. I've listed the possibilities in increasing order of administrative magnitude. Microsoft recommends using a single forest, single domain in as many situations as possible. It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements for multiple forests.

    Security. What kind of security is required on the domain, and does this reflect the physical security at the sites? Not every client can afford a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality: in essence it only caches the data you explicitly tell it to cache, so in the case of a DC compromise (it being stolen) only a limited number of accounts are affected.

    Th- Th- Th- That's all folks! Well, at least for now! In future editions of this series we'll walk through the different tasks that need to be done and the thought that needs to be put into them. But in all editions we'll work from the concept of running a single forest, single domain with a split DNS setup… See you next time!
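
    Purely as a hedged illustration of the "always use a two-part DNS name" recommendation above: the cmdlets come from the Windows Server 2012-era AD DS deployment module (on 2008 R2 you would use dcpromo instead), and contoso.com / CONTOSO are placeholder names.

        # Install the AD DS role plus management tools, then create the first forest
        # with a proper two-part DNS name and a matching NetBIOS name.
        Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
        Install-ADDSForest -DomainName "contoso.com" -DomainNetbiosName "CONTOSO" -InstallDns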


  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.   ‘Pure’ SOA involves the modeling of an organisation’s processes, the so called ‘Top Down’ approach, followed by the implementation of these processes as services.     Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.   These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘Succeeding with SOA’. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’(Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA.  The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘Succeeding with SOA’ then the reasons for the projects existence probably need to be questioned.   There are a number of things we can be sure of: ·         An organisation will have a number of disparate IT systems ·         Some of these systems will have redundant data and functionality ·         Integration will give considerable ROI ·         Integration will already be under way. ·         Services will already exist in the organisation ·         These services will be inconsistent in their implementation and in their governance   So there are three goals here: 1.       Alignment between the business and IT 2.     Integration of disparate systems 3.     Management of services.   2 and 3 are going to happen,  in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far reaching consequences that this will have on all our LOB systems are current. Lets imagine that this actually works ( how often do we rip and replace working software because it doesn't fit a certain pattern? Never -that's the point of integration), we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.      Accepting that an object can have more than one model over time, with perhaps more than one model being  at any given time will help us realise the limitations of the top down model. It is entirely normal , and perhaps necessary, for an organisation to be able to view an entity from different perspectives.   
So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel, and manage the changes in each layer.     If  company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department.   If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it, by designing a platform and processes for the introduction of services, is this not a valid approach?   With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible. Based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘Succeed with SOA’ Company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.     Could the problem be that there are two different problem domains? And the whole concept of SOA as it being described by clever salespeople today creates an example of oft dreaded ‘tight coupling’ between these two domains?   Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?   Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals.   If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A.   Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand.   This is not to suggest that the two problem domains are unrelated, a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.       Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create a integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture).     The business architecture describes the processes and business objects in the business domain.   The technical architecture describes the hosting and management and implementation of services.   
The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation.   If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other.   This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.   How do we do this?  The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling etc.   This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other, even if this approach is frowned upon today, there are many companies doing so and seeing ROI.   Of course there are disadvantages to this. The biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation with redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together so that they are forever bound to the model that only applies at the time of the modeling work will not really achieve a great deal. Architecture is all about trade offs, and here a choice has to be made. 
The choice is between something will initially be of low quality but will work, or something that may well be impossible to achieve in most situations.         In conclusion, top-down is a natural approach for business analysts, and bottom-up  is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful,  why not let them get on with their jobs, and let an enterprise architect coordinate the processes?


  • jsp getServletContext() error

    - by Reigel
    html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>Murach's Java Servlets and JSP</title> </head> <body> <%-- import packages and classes needed by the scripts --%> <%@ page import="business.*, data.*" %> <% //get parameters from the request String firstName = request.getParameter("firstName"); String lastName = request.getParameter("lastName"); String emailAddress = request.getParameter("emailAddress"); // get the real path for the EmailList.txt file ServletContext sc = this.getServletContext(); String path = sc.getRealPath("/WEB-INF/EmailList.txt"); // use regular Java objects User user = new User(firstName, lastName, emailAddress); UserIO.add(user, path); %> <h1>Thanks for joining our email list</h1> <p>Here is the information that you entered: </p> <table cellspacing="5" cellpadding="5" border="1"> <tr> <td align="right">First name:</td> <td><%= firstName %></td> </tr> <tr> <td align="right">Last name:</td> <td><%= lastName %></td> </tr> <tr> <td align="right">Email Address:</td> <td><%= emailAddress %></td> </tr> </table> <p>To enter another email address, click on the Back <br /> button in your browser or the Return button shown <br /> below.</p> <form action="index.jsp" method="post"> <input type="submit" value="Return" /> </form> </body> </html> and it's giving me this error page... Compilation of 'C:\bea\user_projects\domains\mydomain.\myserver.wlnotdelete\extract\myserver_sample01_WebContent\jsp_servlet__display_email_entry.java' failed: C:\bea\user_projects\domains\mydomain.\myserver.wlnotdelete\extract\myserver_sample01_WebContent\jsp_servlet__display_email_entry.java:140: cannot resolve symbol probably occurred due to an error in /display_email_entry.jsp line 19: ServletContext sc = this.getServletContext(); Full compiler error(s): C:\bea\user_projects\domains\mydomain.\myserver.wlnotdelete\extract\myserver_sample01_WebContent\jsp_servlet__display_email_entry.java:140: cannot resolve symbol symbol : method getServletContext () location: class jsp_servlet.__display_email_entry     ServletContext sc = this.getServletContext(); //[ /display_email_entry.jsp; Line:19]                                    ^ 1 error Thu Jun 03 15:56:09 CST 2010 any hint? I'm really new to JSP, and this is my first learning practice... can't find it by google.com.... thanks!
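
    One possible fix (not from the thread): inside a JSP scriptlet the implicit application object already is the ServletContext, so the call on this, which WebLogic's generated page class apparently does not expose, is unnecessary. Swapping that one statement keeps the rest of the page unchanged; getServletConfig().getServletContext() should also work, since getServletConfig() comes from the Servlet interface that every generated page class implements, but the implicit object is the idiomatic JSP way.

        <%
            // "application" is the JSP implicit ServletContext object, so there is
            // no need for this.getServletContext() here.
            String path = application.getRealPath("/WEB-INF/EmailList.txt");

            User user = new User(firstName, lastName, emailAddress);
            UserIO.add(user, path);
        %>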

    Read the article

  • JavaScript – Content delivery networks (CDN) can bite you in the butt.

    - by Ryan Ternier
    As much as I love the new CDNs that Google, Microsoft and a few others have publicly released, there are some strong gotchas that could come up and bite you in the ass if you're not careful. But before we jump into that, for those that are not 100% sure what a CDN is (besides Canadian): a Content Delivery Network is a way of distributing your static content across various servers in different physical locations. Because this static content is stored on many servers around the world, whenever a user needs to access this content, they are given the closest server to their location for this data. Already you can probably see the immediate bonuses to a system like this: lower bandwidth (even small script files downloaded thousands of times will start to take a noticeable hit on your bandwidth meter); fewer connections/hits to your web server, which gives better latency; no need to manually update each server with scripts if you manage many servers; and better reuse of downloads. A user will download a script for each website they visit, and if a user is redirected to many domains/sub-domains within your web site, they might download many copies of the same file; when the browser sees repeated requests for the same file from the same domain, it will skip the download and use its cached copy. Those are just a handful of the many bonuses a CDN will give you. And for the average website, a CDN is a great choice. Check out the following CDN links for their solutions: Google AJAX Library: http://code.google.com/apis/ajaxlibs/ Microsoft Ajax library: http://www.asp.net/ajaxlibrary/cdn.ashx The Gotcha: There is always a catch. Here are some issues I found with using CDNs that hopefully can help you make your decision. HTTP / HTTPS: If you are running a website behind SSL, make sure that when you reference your CDN data you use https:// vs. http://. If you forget this, users will get a very nice message telling them that their secure connection is trying to access unsecure data. For a developer this is fairly simple, but general users will get a bit anxious when seeing this. Trusted Sites: Internet Explorer has this really nifty feature that allows users to specify what sites they trust, and by some defaults IE7 only allows trusted sites to be viewed. No problem, they set your website as trusted. But what about your CDN? If a user sets your websites to trusted, but not the CDN, they will not download those static files. This has the potential to totally break your web site. Pedantic Network Admins: This alone is sometimes the killer of projects. Always be careful when you are going to use a CDN for a professional project. If a network / security admin sees that you're referencing an outside source, or that a call from a website might hit an outside domain... panties will be bunched, emails will be spewed out and well, no one wants that.

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or tried to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here. As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.
    Result Caching in a dedicated JVM: This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB Java heap. In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when called again, the respective results are retrieved from the cache rather than the external system.
    Step 1 – Set up your Coherence Server: Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain:
    -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote
    Just in case you need it, here is my Coherence Server classpath:
    /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar: /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar: /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar
    By default, OSB will try and create a local result cache instance.
    You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:
    -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster
    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.
    Step 2 – Configure your Business Service: Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.
    The Results: As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.

    Read the article

  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This month has been a very interesting month for SQLAuthority.com. We had multiple and varied puzzles in which everybody participated, and lots of interesting conversations which we have shared. Let us start with the latest puzzles and continue going down. A few answers were also posted on Facebook. SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator: This puzzle involves NULL and throws an error. The challenge is to resolve the error. There are multiple ways to resolve this error. Readers have contributed various methods. A few of them have even explained why this error shows up. NULLs are a very important part of the database, and if one of the columns has NULL the result can be totally different than the one expected. SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers: I modified a script provided by a friend to find the greater of two numbers. My script has a small bug in it. However, lots of readers have suggested better scripts. Madhivanan has written a blog post on the subject over here. SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints: This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. Now one can take part here and win exciting prizes. SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL: Madhivanan has asked a very interesting question on his blog about how to generate zero without using any numbers in T-SQL. He has demonstrated various methods of how one can generate zero. I asked the same question on the blog and got many interesting answers which I have shared. SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once: I have to accept that this was the most difficult puzzle. In this puzzle I asked why, even though the settings are correct, statistics of the tables are not getting updated. In this puzzle one is tested on various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving this puzzle. It was interesting, as many took interest but only a few got it right. SQL SERVER – Question to You – When to use Function and When to use Stored Procedure: This is a rather straightforward question and not the typical puzzle. The answers from readers are great; however, there is still a chance for more detailed answers. SQL SERVER – Selecting Domain from Email Address: I wrote on selecting domains from email addresses. Madhivanan makes a puzzle out of a simple question. He wrote a follow-up post over here. In his post he describes various ways one can find email addresses from a list of domains. Well, this is not a puzzle but an amazing Guest Post by Feodor Georgiev, who has written on the subject Job Interviewing the Right Way (and for the Right Reasons). An article which everyone should read. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Weblogic 10.3.4 (PS3) nodemanager won't start?

    - by angelo.santagata
    Hi all, well I'm back from Australia, and one of the things which happened was that Oracle announced the PS3 release of Oracle's SOA & WebCenter products. Now I normally use pre-installed images, but I always like to install the products at least once; that way I get to see the installation caveats. Here's one. Installation on Windows 7 64bit, 64bit JVM, generic WebLogic Server installer. All worked fine, EXCEPT I can't start the node manager; I get the following error:
    <08-Feb-2011 17:16:48> <INFO> <Loading domains file: D:\products\wls1034\WLSERV~1.3\common\NODEMA~1\nodemanager.domains>
    <08-Feb-2011 17:16:48> <SEVERE> <Fatal error in node manager server>
    weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded
        at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:249)
        at weblogic.nodemanager.server.NMServerConfig.<init>(NMServerConfig.java:190)
        at weblogic.nodemanager.server.NMServer.init(NMServer.java:182)
        at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)
        at weblogic.nodemanager.server.NMServer.main(NMServer.java:390)
        at weblogic.NodeManager.main(NodeManager.java:31)
    Caused by: java.lang.UnsatisfiedLinkError: D:\products\wls1034\wlserver_10.3\server\native\win\32\nodemanager.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at weblogic.nodemanager.util.WindowsProcessControl.<init>(WindowsProcessControl.java:17)
        at weblogic.nodemanager.util.ProcessControlFactory.getProcessControl(ProcessControlFactory.java:24)
        at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:247)
        ... 5 more
    Ok, it appears that the node manager has gotten confused and thinks this is a 32bit install of WebLogic Server whereas it is the 64bit install. Might have been something I did, or didn't do, on installation (e.g. –d64 on the JVM command line); however, the workaround is pretty easy.
    1. Create a file called nodemanager.properties in %WL_HOME%\common\nodemanager (on my machine it was D:\products\wls1034\wlserver_10.3\common\nodemanager)
    2. Add the following line to it: NativeVersionEnabled=false
    3. And start it up! This will force it not to use .DLL files and to use emulation/non-native methods instead.
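
    As an aside, if you want to double-check that the mismatch really is between a 64-bit JVM and the 32-bit native library (and not the other way around), a tiny Java probe can confirm which data model the JVM is running. Note that "sun.arch.data.model" is specific to Sun/Oracle JVMs, so treat it as an assumption:

        public class JvmBits {
            public static void main(String[] args) {
                // Reports "32" or "64" on Sun/Oracle JVMs; other JVMs may not set it.
                System.out.println("data model: " + System.getProperty("sun.arch.data.model", "unknown"));
                System.out.println("os.arch:    " + System.getProperty("os.arch"));
                System.out.println("java.home:  " + System.getProperty("java.home"));
            }
        }

    If this prints 64 while the loaded nodemanager.dll is the 32-bit one (as the UnsatisfiedLinkError above shows), the NativeVersionEnabled=false workaround is the quick fix.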

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.   Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.   Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organize daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single row attributes groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, treatments, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipments at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. 
Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site location specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area specific information, like for example economical, political, risk, weather, logistic, traffic information and more. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context  

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there's been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there's a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time. Drivers: Reach, Scalability, Media hosting, Global distribution. Solution: Here's a sketch of how a social media application might be built out on Windows Azure. Ingredients: Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world. Access Control – this service is essential to managing user identity. It's backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it's also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well. Web Role – hosts the core of the web application and presents a central social hub to users. Database – used to store core operational, functional, and workflow data for the solution's web services. Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token based security model that works alongside the Access Control service. Tables (optional) – for semi-structured data streams that don't need relational integrity such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data. Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users' data is delivered securely. Content Delivery Network (CDN) (optional) – for sites that service users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content. Training: These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs): Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds.
    New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML. SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs): As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Sharing a session between vBulletin forum and status.net microblogging platform

    - by jaz
    Hello, I need to integrate vBulletin 4.0.3 Publishing Suite with status.net microblogging platform. The first thing I need to do is make these 2 to share 1 session so a user logged in vBulletin forums will also be logged in to status.net and vice versa. I have installed different vBulletin components under different subdomains: forums.sample.com - vBulletin forums blogs.sample.com - vBulletin blogs sample.com - vBulletin content management All of these point to the same place (.../public_html/index.php) which includes the respective php file (content.php for sample.com | blog.php for blogs.sample.com | forum.php for forums.sample.com) depending on the $_SERVER['HTTP_HOST'] I have configured vBulletin to use a single cookie.domain (.sample.com) for all of these 3 domains so visiting different domains doesn't break the session. I also have status.sample.com, which is the subdomain where status.net is installed. The subdomain configuration is different so the document_root is actually a subfolder (.../public_html/status/) in sample.com Now, can you please give me some pointers on how to make all these subdomains share a single session? I'm not sure if it helps, but as I understand, status.net does no custom session handling by default, but it is possible to turn it on so it will start storing session data in a database table called "session". Any tips will be appreciated. Thank you.

    Read the article

  • Configuring ASP.NET MVC ActionLink format with GoDaddy shared hosting

    - by Maxim Z.
    Background I have a GoDaddy shared Windows hosting plan and I'm running into a small issue with multiple domains. Many people have previously reported such an issue, but I am not interested in trying to resolve that problem altogether; all I want to accomplish is to change the format of my ActionLinks. Issue Let's say the domain that is mapped to my root hosting directory is example.com. GoDaddy forces mapping of other domains to subdirectories of the root. For example, my second domain, example1.com, is mapped to example.com/example1. I uploaded my ASP.NET MVC site to such a subdirectory, only to find that ActionLinks that are for navigation have the following format: http://example1.com/example1/Controller/Action In other words, even when I use the domain that is mapped to the subdirectory, the subdirectory is still used in the URL. However, I noticed that I can also access the same path by going to: http://example1.com/Controller/Action (leaving out the subdirectory) What I want to achieve I want to have my ActionLinks automatically drop the subdirectory, as it is not required. Is this possible without changing the ActionLinks into plain-old URLs?

    Read the article

  • java.lang.ClassNotFoundException

    - by user341493
    Hey everyone, I have a java project that I'm working on which was working until a few days ago. I'm not sure what I did to my Eclipse set-up to hose it but now I'm getting a java.lang.ClassNotFoundException when I try to run some code that accesses the google finance api. I've built a small test application that uses the google finance api on its own and that seems to work. So, I think this is a project specific problem. Any help would be greatly appreciated. Here's the stack trace: `ptolemy.kernel.util.IllegalActionException: in .RandomSearch.manager Because: com/google/common/collect/Maps at ptolemy.actor.Manager.execute(Manager.java:472) at ptolemy.actor.Manager.run(Manager.java:1119) at ptolemy.actor.Manager$3.run(Manager.java:1160) Caused by: java.lang.NoClassDefFoundError: com/google/common/collect/Maps at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:118) at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:100) at com.google.gdata.client.Service.(Service.java:546) at AtomicBroadcast.GoogleFinance.GooglePortfolioReader.fire(GooglePortfolioReader.java:108) at ptolemy.domains.de.kernel.DEDirector.fire(DEDirector.java:568) at ptolemy.actor.CompositeActor.fire(CompositeActor.java:458) at ptolemy.actor.Manager.iterate(Manager.java:714) at ptolemy.actor.Manager.execute(Manager.java:349) ... 2 more Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Maps at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:319) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:264) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) ... 10 more Caused by: java.lang.NoClassDefFoundError: com/google/common/collect/Maps at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:118) at com.google.gdata.wireformats.AltRegistry.(AltRegistry.java:100) at com.google.gdata.client.Service.(Service.java:546) at AtomicBroadcast.GoogleFinance.GooglePortfolioReader.fire(GooglePortfolioReader.java:108) at ptolemy.domains.de.kernel.DEDirector.fire(DEDirector.java:568) at ptolemy.actor.CompositeActor.fire(CompositeActor.java:458) at ptolemy.actor.Manager.iterate(Manager.java:714) at ptolemy.actor.Manager.execute(Manager.java:349) at ptolemy.actor.Manager.run(Manager.java:1119) at ptolemy.actor.Manager$3.run(Manager.java:1160) Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Maps at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:319) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:264) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) ... 10 more`
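
    The missing class, com.google.common.collect.Maps, ships in the google-collections/Guava jar that the GData client libraries depend on, so this usually means that jar is no longer visible on the project's runtime classpath (possibly the Eclipse build-path change mentioned above). A small diagnostic sketch, run with the same classpath as the failing code, can confirm what the classloader can actually see; the two class names probed here are simply the ones from the stack trace:

        public class ClasspathProbe {
            public static void main(String[] args) {
                String[] probes = {
                    "com.google.common.collect.Maps",  // from google-collections / Guava
                    "com.google.gdata.client.Service"  // the GData client itself
                };
                for (String name : probes) {
                    try {
                        Class<?> c = Class.forName(name);
                        // Print which jar the class was loaded from.
                        System.out.println(name + " -> " + c.getProtectionDomain().getCodeSource());
                    } catch (ClassNotFoundException e) {
                        System.out.println(name + " -> NOT visible to this classloader");
                    }
                }
            }
        }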

    Read the article

  • TLS with SNI in Java clients

    - by ftrotter
    There is an ongoing discussion on the security and trust working group for NHIN Direct regarding the IP-to-domain mapping problem that is created with traditional SSL. If an HISP (as defined by NHIN Direct) wants to host thousands of NHIN Direct "Health Domains" for providers, then it will be an "artificially inflated cost" to have to purchase an IP for each of those domains. Because Apache and OpenSSL have recently released TLS with support for the SNI extension, it is possible to use SNI as a solution to this problem on the server side. However, if we decide that we will allow server implementations of the NHIN Direct transport layer to support TLS+SNI, then we must require that all clients support SNI too. OpenSSL-based clients should do this by default, and one could always use stunnel to implement a TLS+SNI-aware client proxy if your programming language's SSL implementation does not support SNI. It appears that native Java applications using OpenJDK do not yet support SNI, but I cannot get a straight answer out of that project. I know that there are OpenSSL Java libraries available, but I have no idea if that would be considered viable. Can you give me a "state of the art" summary of where TLS+SNI support is for Java clients? I need a Java implementer's perspective on this.
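
    OpenJDK did not support client-side SNI at the time of this question, but later JDKs do: as far as I know, Java 7 sends SNI automatically for sockets created with a hostname, and Java 8 added explicit control via SSLParameters/SNIHostName. A minimal sketch of the explicit form, assuming Java 8+ (the hostname is a made-up health domain, not a real NHIN Direct endpoint):

        import javax.net.ssl.SNIHostName;
        import javax.net.ssl.SNIServerName;
        import javax.net.ssl.SSLParameters;
        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;
        import java.util.Collections;

        public class SniClientSketch {
            public static void main(String[] args) throws Exception {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                // "direct.example-hisp.org" is a hypothetical health domain used for illustration.
                try (SSLSocket socket = (SSLSocket) factory.createSocket("direct.example-hisp.org", 443)) {
                    SSLParameters params = socket.getSSLParameters();
                    params.setServerNames(
                            Collections.<SNIServerName>singletonList(new SNIHostName("direct.example-hisp.org")));
                    socket.setSSLParameters(params);
                    socket.startHandshake(); // the server can now pick the matching certificate
                }
            }
        }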

    Read the article

  • Cross domain login - what to store in the database?

    - by Jenkz
    I'm working on a system which will allow me to log in to the same system via various domains. (www.example.com, www.mydomain.com, sub.domain.com etc) The following threads form the basis of my research so far: Single Sign On across multiple domains Cross web domain login with .net membership What I want is that if I am logged in on the master domain and I visit a page on a client domain, I am automatically logged in on the client. Obviously, if I am not logged in on the master, I will need to enter my username and password. Walkthrough: 1. User logs in on master site 2. User navigates to client site 3. Client site re-directs to master site to see if user is logged in. 4. If user is logged in on master, record a RFC 4122 token ID and send this back to the client site. 5. Client site then looks up the token ID in the central database and logs this user in. This might eventually end up running on more than one instance of PHP and Apache, so I can't just store: token_id, php_session_id, created Is there any problem with me storing and using this: token_id, username, hashed_password, created Which is deleted on use, or automatically after x seconds.
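
    The single-use token logic in the walkthrough is language-agnostic even though the stack here is PHP. Below is a minimal sketch, written in Java purely for illustration, with an in-memory map standing in for the shared (token_id, username, created) table, showing an RFC 4122 token being issued and redeemed exactly once with an expiry:

        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;

        public class LoginTokenBroker {
            private static final long TTL_MILLIS = 30_000L; // tokens live ~30 seconds

            private static final class Token {
                final String username;
                final long created;
                Token(String username, long created) { this.username = username; this.created = created; }
            }

            // Stand-in for the central database table shared by all web server instances.
            private final Map<String, Token> tokens = new ConcurrentHashMap<>();

            // Master site: call after confirming the user is logged in there.
            public String issue(String username) {
                String tokenId = UUID.randomUUID().toString(); // RFC 4122 token
                tokens.put(tokenId, new Token(username, System.currentTimeMillis()));
                return tokenId;
            }

            // Client site: redeem exactly once; unknown or expired tokens fail.
            public String redeem(String tokenId) {
                Token t = tokens.remove(tokenId); // deleted on use
                if (t == null || System.currentTimeMillis() - t.created > TTL_MILLIS) {
                    return null;
                }
                return t.username;
            }
        }

    With a scheme like this the client site only ever needs the token and the username; storing hashed_password in the token row is not strictly required for the handshake itself.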

    Read the article

  • DNS-Based Environment Determination

    - by zvolkov
    Found the following here. The question is: where can I find more details on how exactly to implement this on Windows? Any guide or how-to, anybody? Or maybe you can provide your invaluable suggestions? Specifically, how do I make it so that "all QA servers would first resolve entries in qa.example.com and then if that lookup failed they would try example.com"? (I'm a dev, not a DNS specialist, but our IT Support has refused to help on this :() Use DNS Based Environment Determination for your servers. Do this by initially splitting your top level domain into a number of sub domains depending on their function, and then creating DNS Service Names in each of the sub domains pointing to the relevant server for that service. Based on the list above we would then have: * clientdb.prod.example.com for Production * clientdb.perf.example.com for Performance Testing * clientdb.qa.example.com for QA * clientdb.dev.example.com for Development Servers then resolve entries in their relevant sub domain by function. That is, all QA servers would first resolve entries in qa.example.com and then, if that lookup failed, they would try example.com. This allows you to have a single configuration entry for your client database hostname (clientdb) that would resolve correctly in all environments. This technique has the added advantage of still having global services defined in a common top level domain. Here's one related (but not equivalent) SO question: http://stackoverflow.com/questions/774490/dns-resolving-based-on-client-ip This seems to be related to providing "split horizon" DNS service. Reading that, I see that I will probably need a separate DNS server for each environment. Is this true, or does Windows support some form of "tagging" the records to be visible depending on the requestor's IP? Also, cross-posted on ServerFault
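
    The "single configuration entry" part works because the application only ever asks for the short service name and lets the operating system's DNS suffix search list (which is what you would configure per environment on the Windows side, for example via the TCP/IP advanced DNS settings or Group Policy) qualify it. A sketch of what the application side looks like; it assumes the host's resolver applies the suffix search list, which is the normal behaviour:

        import java.net.InetAddress;
        import java.net.UnknownHostException;

        public class ServiceLookupSketch {
            public static void main(String[] args) throws UnknownHostException {
                // The code only knows "clientdb"; the OS resolver appends the environment's
                // suffix (qa.example.com, prod.example.com, ...) before falling back to
                // example.com, so the same binary works unchanged in every environment.
                InetAddress clientDb = InetAddress.getByName("clientdb");
                System.out.println("clientdb resolved to " + clientDb.getHostAddress());
            }
        }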

    Read the article

  • ASP.NET MVC on GoDaddy Not Working (Not Primary Domain Deployment)

    - by JPrescottSanders
    I am trying to get ASP.NET MVC working on GoDaddy and I'm not having much luck. I have read the post on SO that covers the subject, but I must have a slightly different configuration or must be missing something along the way, because the main MVC page comes up but all links seem to fail, and no amount of tweaking the URLs seems to get it to work. A little background. I have a single hosting plan with many domains pointed to sub folders of the main domain. Basic ASP.NET web forms pages work just fine, but of course I wanted to try and host a sample MVC site in one of these non-primary domains. You can go to the URL here. As you can see, this first page comes up, but if you click on Home or About it doesn't work. Clicking on Home creates this link "http://www.jprescottsanders.com/jps/" and clicking on About creates this link "http://www.jprescottsanders.com/jps/Home/About". As you can see, "jps" sneaks in there; this of course is the sub folder that I place my web app files in. I would like to know if this is an MVC-related issue or a GoDaddy issue. I suspect that MVC may want to sit in the root directory of the site, and when it puts the "jps" into the URLs it breaks the routing mechanisms (but this is conjecture). I know Dan said this was possible so I'm hoping he sees this and helps me get to the bottom of this deployment strategy for MVC.

    Read the article

  • Simple multilingual CMS?

    - by Christoffer
    Hi, I have been searching for a while now for a dead simple CMS with multi-language support. The ideal candidate is very lean and offers the possibility to set up different languages for different domains. It's OK if the language support is provided by a plugin/extension. For example, I want example.com to point to English and example.fr should be French, with different URI-mappings for SEO. It can be developed in any of PHP, Ruby or Python and has to be open source. Any tips? Thank you EDIT / MORE DETAILS: What I want is a CMS that is as simple to use and grasp for a client as Radiant is, but with tabs on each resource that can translate articles to different languages. Languages have to be able to use multiple domains, one for each language. I want to easily use the same article for more than one language, as well as have articles (e.g. blog posts or news stories) that are only connected to one language. The CMS should be very light in core functionality (like Radiant, unlike Drupal/Joomla) but be easily extendable with plugins.

    Read the article

  • Maximizing the number of true concurrent / parallel http requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to do a lot of small http requests to the server. I believe that when the number of allowed concurrent requests is exceeded, the subsequent requests are put in a queue. I am also aware that SL 4 has both an http browser stack and an http client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT. Also, I think I read somewhere that the number of concurrent requests is limited per domain, not overall, but I'm not sure if this applies to both stacks. That means that you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. And I even believe that sub domains are considered different, so you can also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time. I have ownership of the services and domain names so I could easily set up sub domains for my services. Given those considerations I'm trying to optimize the number of concurrent http requests to my server. Here are a few questions: Is it possible to use both stacks at the same time? Is the subdomain/domain story true for both stacks? Neither? If so, that would mean that I could potentially have a number of concurrent requests equal to: (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS, which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with those things.

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >