Search Results

Search found 56525 results on 2261 pages for 'kavoir com'.


  • Cannot make bind9 forward DNS query to subdomain unless recursive enabled

    - by PP.
    I am trying to develop my own dynamic DNS. I'm running my own custom DNS for the subdomain on port 5353. ASCII diagram: INET --->:53 Bind 9 --->:5353 node.js | V zone_files I have example.com. The node.js DNS is for dyn.example.com. In my /etc/bind/named.conf.local I have: zone "example.com" { type master; file "/etc/bind/db.com.example"; allow-transfer { zonetxfrsafe; }; }; zone "dyn.example.com" IN { # DYNAMIC type forward; forwarders { 127.0.0.1 port 5353; }; forward only; }; I've even gone so far as to add a NS in my example.com zone file: $TTL 86400 @ IN SOA ns.example.com. hostmaster.example.com. ( 2013070104 ; Serial 7200 ; Refresh 1200 ; Retry 2419200 ; Expire 86400 ) ; Negative Cache TTL ; NS ns ; inet of our nameserver ns A 1.2.3.4 ; NS record for subdomain dyn NS ns When I attempt to get a record from the subdomain server it doesn't get forwarded: dig @127.0.0.1 test.dyn.example.com However if I turn recursive on in /etc/bind/named.conf.options: options { recursion yes; } .. then I CAN see the request going to the subdomain server. But I don't want recursion yes; in my Bind configuration as it is poor security practice (and allows all-and-sundry requests that are not related to my managed zones). How does one forward (proxy) zone queries for just one zone? Or do I give up on Bind altogether and find a DNS server that can actually forward specific queries?
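
    A common way to keep the per-zone forwarding working without opening the resolver to everyone is to leave recursion on but restrict who may use it with an ACL. A minimal named.conf sketch; the ACL name "trusted" and the 192.0.2.0/24 range are placeholders, not values from the setup above:

        // Hedged sketch: restrict recursion instead of disabling it globally.
        acl "trusted" {
            127.0.0.1;      // the server itself
            192.0.2.0/24;   // whichever clients may use the resolver
        };

        options {
            recursion yes;                  // forwarding a zone still requires recursion
            allow-recursion { trusted; };   // but only for these clients
            allow-query { any; };           // authoritative zones stay publicly queryable
        };

    Authoritative answers for example.com remain public; whether queries for the forwarded dyn.example.com zone are still answered for clients outside the ACL depends on how BIND classifies them (forwarding is a recursive operation), so it is worth testing with dig from an outside address.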

    Read the article

  • Strange Domain name under the same IP Address

    - by Mike Chip
    There's something really weird happening in my server. But first things first: I wanted to have my website and chose the domain name "myowndomain.com", Now on my domain registrar I point "myowndomain.com" to the address of my recently setup VPS, let's say 50.50.50.50 So I installed everything I needed to run my website, and I started to notice strange queries coming from different IP Addresses. Like these [client 123.123.123.123] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php [client 456.456.456.456] File does not exist: /var/www/html/api, referer: http://www.strangedomain.com/api/manyou/my.php or like this (Really a long line, I cut some things) GET /?s=vod-show-id-22-area-%E5%85%B6%E4%BB%96-language-%E9%9F%A9%E8%AF%AD.html HTTP/1.1" 301 295 "http://v.strangedomain.com/?s=vod-s ...[cut]... spider" That above is happening the most. The 'strangedomain.com' returns the same IP address of my VPS which my website is hosted on. The whois of such domain shows it's registered to a chinese. But the street name didn't look so right (like a huge single word), so I think all of that info might be fake, but still might be a chinese. I also noticed that all 'clients' trying to access the 'strangedomain.com' is coming from china. If I type in the browser 'strangedomain.com', I see my website. I'm worried, because my website is actually an e-commerce. I don't know if 'strangedomain.com' WAS a website on 50.50.50.50 in the not so far past, or if it's something else.
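
    Nothing above says how the web server is configured, but a common way to stop a foreign domain that points at your IP from showing your site is to make the default (first) virtual host a catch-all that refuses unknown Host headers, and serve the real site only from a vhost whose ServerName matches. A hedged Apache sketch; the paths and the Require syntax (Apache 2.4) are assumptions, not taken from the question:

        # Catch-all for any Host header that is not myowndomain.com
        <VirtualHost *:80>
            ServerName catchall.invalid      # placeholder name, never matched by real traffic
            DocumentRoot /var/www/empty      # an empty directory
            <Location "/">
                Require all denied           # on Apache 2.2 use Order/Deny instead
            </Location>
        </VirtualHost>

        # The real site
        <VirtualHost *:80>
            ServerName myowndomain.com
            ServerAlias www.myowndomain.com
            DocumentRoot /var/www/html
        </VirtualHost>

    Requests arriving for strangedomain.com would then get a 403 from the empty catch-all instead of your e-commerce pages.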

    Read the article

  • Apache not routing to tomcat on correct Virtual host

    - by ttheobald
    We are looking at moving from Websphere to Tomcat. I'm trying to send traffic to tomcat from apache web server based on the virtual host directives in apache web server. After some playing around I have it sort of working, but I'm noticing that if I have a JKMount directive in the first VirtualHost in apache, all virtualHosts will send to the application server. If I have the JKMount in Virtual hosts further down in the configs, then only that VirtualHost works with the request. For Example, with the configs below here are my symptoms mysite.com/Webapp1/ -- I resolve to the proper application mysite2.com/Webapp1/ -- I resolve to the proper application (bad!) mysite.com/MonitorApp/ -- I resolve to the proper application mysite2.com/MonitorApp/ -- I resolve to the proper application (bad!) mysite.com/Webapp2/ -- I DO NOT get to the app (good) mysite2.com/Webapp2/ -- I resolve to the proper application Here's what my web server virtualhosts look like. <VirtualHost 255.255.255.1:80> ServerName mysite.com ServerAlias aliasmysite.ca ##all our rewrite rules JkMount /Webapp1/* LoadBalanceWorker JKmount /MonitorApp/* LoadBalanceWorker </VirtualHost> <VirtualHost 255.255.255.2:80> ServerName mysite2.com ServerAlias aliasmysite2.ca ##all our rewrite rules JkMount /Webapp2/* LoadBalanceWorker </VirtualHost> we are running apache webserver 2.2.10 and tomcat 7.0.29 on Solaris10 I've posted an image of our architecture here. http://imgur.com/IFaA6Rh I HAVE not defined VirtualHosts on Tomcat. Based on what I've read, my understanding is that it's only needed if I'm accessing Tomcat directly. Any assistance is appreciated. Edit Here's my worker.properties. worker.list= LoadBalanceWorker,App1,App2 worker.intApp1.port=8009 worker.intApp1.host=10.15.8.8 worker.intApp1.type=ajp13 worker.intApp1.lbfactor=1 worker.intApp1.socket_timeout=30 worker.intApp1.socket_connect_timeout=5000 worker.intApp1.fail_on_status=302,500,503 worker.intApp1.recover_time=30 worker.intApp2.port=8009 worker.intApp2.host=10.15.8.9 worker.intApp2.type=ajp13 worker.intApp2.lbfactor=1 worker.intApp2.socket_timeout=30 worker.intApp2.socket_connect_timeout=5000 worker.intApp2.fail_on_status=302,500,503 worker.intApp2.recover_time=30 worker.LoadBalanceWorker.type=lb worker.LoadBalanceWorker.balanced_workers=intApp1,intApp2 worker.LoadBalanceWorker.sticky_session=1
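
    One thing worth checking is where the mounts are defined and whether they are being inherited between hosts. As a hedged sketch only (not a confirmed fix for this setup), keeping every JkMount inside its own VirtualHost, switching mount inheritance off, and explicitly unmounting what a host must not serve looks like this:

        <VirtualHost 255.255.255.1:80>
            ServerName  mysite.com
            JkMountCopy Off                                  # do not inherit mounts defined at global scope
            JkMount     /Webapp1/*    LoadBalanceWorker
            JkMount     /MonitorApp/* LoadBalanceWorker
            JkUnMount   /Webapp2/*    LoadBalanceWorker      # explicitly keep this app off this host
        </VirtualHost>

        <VirtualHost 255.255.255.2:80>
            ServerName  mysite2.com
            JkMountCopy Off
            JkMount     /Webapp2/*    LoadBalanceWorker
        </VirtualHost>

    It is also worth double-checking the worker names: worker.list declares App1 and App2, while the workers defined below it are intApp1 and intApp2, which looks inconsistent and is worth tidying up.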

    Read the article

  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an http virtual cluster. Let's call this virtual cluster example1.com. This virtual cluster load balances between two squid reverse proxies which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server which listens on 81. We also have a standalone web server which we will call example2.com. What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at squid docs in the meantime. example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.
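
    Since Squid already fronts example1.com, one approach worth testing is routing a URL-path ACL to a second origin server, which keeps the browser on example1.com/example2 while the content comes from example2.com. A rough squid.conf sketch under those assumptions; note that example2.com will still receive the /example2 prefix in the request path unless a URL rewriter strips it:

        # Route /example2/... to the other origin, everything else to the local backend on :81
        acl example2_path urlpath_regex ^/example2(/|$)

        cache_peer example2.com parent 80 0 no-query originserver name=origin_example2
        cache_peer_access origin_example2 allow example2_path
        cache_peer_access origin_example2 deny all

        cache_peer 127.0.0.1 parent 81 0 no-query originserver name=origin_main
        cache_peer_access origin_main deny example2_path
        cache_peer_access origin_main allow all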

    Read the article

  • How to set up postfix to use external smtp server

    - by Leo
    I have a web server and a mail server. Both have the same domain name except, one points to mywebsite.com and the other is mail.mywebsite.com. They have different IPs. I'm trying to set up postfix on my web server so it uses my mail server as the server that sends e-mails. I followed this guide: http://www.howtoforge.com/postfix_relaying_through_another_mailserver I am getting this error in my logs: Oct 28 02:56:45 mywebsite postfix/smtp[1660]: warning: host mail.mywebsite.com[xxx.xxx.xx.xx]:25 greeted me with my own hostname mywebsite.com Oct 28 02:56:46 mywebsite postfix/smtp[1660]: warning: host mail.mywebsite.com[xxx.xxx.xx.xx]:25 replied to HELO/EHLO with my own hostname mywebsite.com I've searched around and I read that you can't use the same hostname when relaying to a separate smtp server. Is there a work around for this? Do I need to set up my mail server with a separate domain name? Also I have my MX records set up for both mywebsite.com and mail.mywebsite.com. I'm not that experienced with this so if I need to give more info let me know. Thanks!
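
    Postfix refuses to hand mail to a relay that greets it with the machine's own hostname because that normally indicates a mail loop. Assuming the mail server's banner should stay as it is, a hedged workaround is to give the web server a distinct identity in /etc/postfix/main.cf; the name web.mywebsite.com below is an example, not something from your setup:

        myhostname = web.mywebsite.com          # any name different from what the relay announces
        myorigin   = mywebsite.com              # outgoing mail still shows the bare domain
        relayhost  = [mail.mywebsite.com]:25    # square brackets skip the MX lookup

    The alternative is to change the greeting on the mail server side (its myhostname / smtpd_banner) so that it announces mail.mywebsite.com rather than mywebsite.com; either way, the two machines must not identify themselves with the same name.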

    Read the article

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI) 1 answer I have a server running Apache 2.4 on which run several virtual hosts. The problem I noticed is that if I try to access let's say https://example.com that have no SSL setuped, apache will automatically try to access the first VHost that has SSL activated (which is litteraly not the same site). How can we prevent this strange behaviour, or in other words, how to say to Apache to ignore SSL for a given site. Here's sample of what my .conf files look like : <VirtualHost foobar.com:80> DocumentRoot /somepath/foobar.com <Directory /somepath/foobar.com> Options -Indexes Require all granted DirectoryIndex index.php AllowOverride All </Directory> ServerName foobar.com ServerAlias www.foobar.com </VirtualHost> <VirtualHost test.example.com:443> DocumentRoot /somepath/ <Directory /somepath/> Options -Indexes Require all granted AllowOverride All </Directory> ServerName test.example.com SSLEngine on SSLCertificateFile [­...] SSLCertificateKeyFile [­...] SSLCertificateChainFile [­...] </VirtualHost> With this, if I try to access https://foobar.com chrome will show me a SSL error that mention that the server was identifying itself as test.example.org Thanks in advance !
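
    Apache will always pick some SSL-enabled vhost for an https request, so the usual mitigation is to add a dead-end default :443 vhost that is loaded before the real ones and carries its own (typically self-signed) certificate. A hedged sketch; the snakeoil certificate paths and empty DocumentRoot are placeholders:

        <VirtualHost _default_:443>
            ServerName default.invalid
            DocumentRoot /var/www/empty
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem     # placeholder self-signed cert
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            <Location "/">
                Require all denied
            </Location>
        </VirtualHost>

    Browsers hitting https://foobar.com will still see a certificate warning (unavoidable without a certificate for that name), but they will no longer be shown test.example.com's content.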

    Read the article

  • Protecting Apache with Fail2Ban

    - by NetStudent
    Having checked my Apache logs for the last two days I have noticed several attempts to access URLs such as /phpmyadmin, /phpldapadmin: 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 415 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 405 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 404 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:37 +0100] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu" 66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 188.132.178.34 - - [09/Jun/2012:08:39:05 +0100] "HEAD /manager/html HTTP/1.0" 404 166 "-" "-" 95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 194.128.132.2 - - [09/Jun/2012:16:04:41 +0100] "HEAD / HTTP/1.0" 200 260 "-" "-" 66.249.68.176 - - [09/Jun/2012:18:08:12 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.68.176 - - [09/Jun/2012:18:08:13 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 212.3.106.249 - - [09/Jun/2012:18:12:33 +0100] "GET / HTTP/1.1" 200 388 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/ HTTP/1.1" 404 379 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/htdocs/ HTTP/1.1" 404 386 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:35 +0100] "GET /phpldap/ HTTP/1.1" 404 374 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /phpldap/htdocs/ HTTP/1.1" 404 381 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /admin/ HTTP/1.1" 404 372 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/ HTTP/1.1" 404 377 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/htdocs/ HTTP/1.1" 404 384 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/phpldap/ HTTP/1.1" 404 380 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldap/htdocs/ HTTP/1.1" 404 387 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldapadmin/htdocs/ HTTP/1.1" 404 392 "-" "-" 212.3.106.249 - - 
[09/Jun/2012:18:12:40 +0100] "GET /admin/phpldapadmin/ HTTP/1.1" 404 385 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /openldap HTTP/1.1" 404 374 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:41 +0100] "GET /openldap/htdocs HTTP/1.1" 404 381 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:42 +0100] "GET /openldap/htdocs/ HTTP/1.1" 404 382 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/ HTTP/1.1" 404 371 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/htdocs/ HTTP/1.1" 404 378 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:45 +0100] "GET /ldap/phpldapadmin/ HTTP/1.1" 404 384 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:46 +0100] "GET /ldap/phpldapadmin/htdocs/ HTTP/1.1" 404 391 "-" "-" Is there any way I can use Fail2Ban or any other similar software to ban these IPs in situations when my server is being abused this way (by trying several "common" URLs)?
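
    Yes — Fail2Ban can watch the Apache access log with a custom filter and ban hosts that probe these paths. A minimal sketch, assuming the combined access log lives at /var/log/apache2/access.log; the filter name, the matched paths and the thresholds are all examples to adjust:

        # /etc/fail2ban/filter.d/apache-probes.conf
        [Definition]
        failregex = ^<HOST> .* "(GET|POST|HEAD) /+(phpmyadmin|phpMyAdmin|pma|myadmin|MyAdmin|phpldap|phpldapadmin|w00tw00t)[^"]*" 404
        ignoreregex =

        # /etc/fail2ban/jail.local
        [apache-probes]
        enabled  = true
        port     = http,https
        filter   = apache-probes
        logpath  = /var/log/apache2/access.log
        maxretry = 3
        findtime = 600
        bantime  = 86400

    Fail2Ban also ships Apache filters (e.g. apache-noscript) that catch some of this out of the box, so it is worth checking the bundled filters before writing a new one.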

    Read the article

  • Parallel Programming in .NET Framework 4 – Part I

    - by anobre
    Introduction: Advances in technology over the last few years have given us low-cost access to workstations with numerous CPUs. Today we easily find client machines with 2, 4 and even 8 cores, not counting the "super servers" with up to 36 processors :) From Wikipedia: the central processing unit (CPU) or processor is the part of a computer system that executes the instructions of a computer program, and it is the key element in carrying out a computer's functions. The term has been used in the computer industry at least since the early 1960s[1]. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains the same. By analogy, it would be very useful to delegate real-world tasks that can be carried out independently to different people, achieving greater performance and productivity. Parallel computing is based on the idea that a bigger problem can be split into smaller problems that are then solved in parallel. This approach has been used for some time in HPC (high-performance computing), and the advances of recent years, together with concerns about power consumption, have made it more attractive and accessible in any environment. In the .NET Framework: the .NET platform provides a runtime, libraries and tools that give easy, fast access to parallel programming without working directly with threads and the thread pool. This series of posts will cover all of the available features, starting with the TPL, or Task Parallel Library. Task Parallel Library: the TPL is a set of types located in the System.Threading and System.Threading.Tasks namespaces as of version 4 of the framework. Starting with version 4, the TPL is the recommended way to write parallel and multithreaded code. http://msdn.microsoft.com/en-us/library/dd460717(v=VS.100).aspx Task Parallelism: the term "task parallelism" refers to one or more tasks being executed simultaneously. Think of a task as a method. The easiest way to run tasks in parallel is the code below: Parallel.Invoke(() => TrabalhoInicial(), () => TrabalhoSeguinte()); What actually happens? Behind the scenes, this call implicitly creates objects of type Task, which represents an asynchronous (not necessarily parallel) operation: public class Task : IAsyncResult, IDisposable Tasks can also be instantiated explicitly, a more involved alternative to Parallel.Invoke: var task = new Task(() => TrabalhoInicial()); task.Start(); Another option is to create a Task and start its work right away: var t = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor()); var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor()); The basic difference between the two approaches is that the first has a known starting point, which is preferred when we do not want instantiation and scheduling to happen in a single operation, as they do in the second approach. Data Parallelism: still part of the TPL, data parallelism refers to scenarios where the same operation must be executed in parallel over the elements of a collection or array, through the parallel For and ForEach constructs. The basic idea is to take each element of the collection (or array) and work on it with several threads concurrently.
    The key class for this scenario is System.Threading.Tasks.Parallel: // Sequential version foreach (var item in sourceCollection) { Process(item); } // Parallel equivalent Parallel.ForEach(sourceCollection, item => Process(item)); Complicated, right? :) Demonstration: a video with examples (screencast) is available here. Careful! Despite the strong urge to start coding right away, watch out for some basic parallelism pitfalls; this link describes a few such situations. Regards.

    Read the article

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules so I'm posting here. My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed to be a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved. The way I eventually set it up was as follows: (10.2.2.1 would be my home WAN IP) .htaccess in root of domain1 RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} !^staging.domain1.com$ [NC] RewriteCond %{HTTP_HOST} !^staging2.domain1.com$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301] .htaccess in staging subdomain on domain1: RewriteEngine On RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1 RewriteCond %{HTTP_HOST} ^staging.revolver.coop$ [NC] RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L] The multiple .htaccess files and multiple rulesets require more processing overhead and longer iteration as the visitor is potentially redirected twice, however I find it to be a more granular method of control as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read. (or re-interpret, because I'm always forgetting how I put rules together!) If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn, this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes) and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google). You can also extend the IP address detection - firstly by using wildcards ^10\.2\.2\..*: the last octet's \..* denotes the usual "." and then "zero or more arbitrary characters", signified by the .* - so you can specify specific ranges of IPs in a subnet or entire subnets if you wish. You can also use square brackets: ^10\.2\.[1-255]\.[120-140]; ^10\.2\.[1-9]?[0-9]\.; ^10\.2\.1[0-1][0-9]\. etc. The third way, if you wish to specify multiple discrete IP addresses, is to bracket them in the style of ^(1.1.1.1|2.2.2.2|3.3.3.3)$, and you can of course use square brackets to substitute octets or single digits again. 
NB: if you're using individual RewriteCond lines to specify multiple IPs / ranges, make sure to put [OR] at the end of each one otherwise mod_rewrite will interpret as "if IP address matches 1.1.1.1 AND if IP address matches 2.2.2.2... which is of course impossible! However as far as I'm aware this isn't necessary if you're using the ! negator to specify "and is not...". Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to my futzing around with code. This page was helpful, as were the various other links posted below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet's useful to pin up on your wall. Other referenced URLs: internetofficer.com/seo-tool/regex-tester/ fantomaster.com/faarticles/rewritingurls.txt internetofficer.com/seo-tool/regex-tester/ addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
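
    For reference, if all of the hosts were served from a single document root, the net effect of the two files above (redirect everyone except the developer IP, whatever the host) could be expressed as one ruleset; a hedged sketch only:

        RewriteEngine On
        # developer's IP passes through untouched
        RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
        RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    In the actual layout the staging subdomains appear to have their own directories, so a root .htaccess never sees those requests — which is why the second file, and the host-based conditions, are needed in the setup described above.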

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction The goal of this exercise is to give you a basic feel of how you can use Pagelet Producer to proxy a web page We will proxy a simple static Hello World web page, cut one section out of that page and present it as a pagelet that you can later insert on your own application page or to your portal page such as WebCenter Portal space or WebCenter Interaction community page. Hello World sample app This is the static web page we will work with: Let's assume the following: The Hello World web page is running on server http://appserver.company.com:1234/ The Hello World web page path is: http://appserver.company.com:1234/helloworld/ Initial Pagelet Producer setup Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/ First let's check that Pagelet Producer is up and running. In order to do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/ And this is what should be returned: Now you can access Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin This is how the UI looks: Now if you connect to the internet via proxy server, you need to configure proxy in Pagelet Producer settings. In the Navigator pane: Jump To - Settings Click on "Proxy" Enter your proxy server configuration: Creating a resource First thing that you need to do is to create a resource for your web page. This will tell Pagelet Producer that all sub-paths of the web page should be proxied. It also will allow you to setup common rules of how your web page should be proxied and will serve as a container for your pagelets. In the Navigator pane: Jump To - Resources Click on any existing resource (ex. welcome_resource) Click on "Create selected type" toolbar button at the top of the Navigator pane Select "Web" in the "Select Producer Type" dialog box and click "OK" Now after the resource is created let's click on "General" sub-item a specify the following values Name = AppServer Source URL = http://appserver.company.com:1234/ Destination URL = /appserver/ Click on "Save" toolbar button at the top of the Navigator pane After the resource is created our web page becomes accessible by the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/ So in original web page address Source URL is replaced with Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + Destination URL Creating a pagelet Now let's create "Hello World" pagelet. Under the resource node activate Pagelets subnode Click on "Create selected type" toolbar button at the top of the Navigator pane Click on "General" sub-node of newly created pagelet and specify the following values Name = Hello_World Library = MyLib Library is used for logical grouping. The portals use the "Library" value to group pagelets in their respective UI's. For example, when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name. URL Suffix = helloworld/index.html this is where the Hello World page html is served from Click on "Save" toolbar button at the top of the Navigator pane The Library name can be anything you want, it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources into the same library or create a new library for each pagelet. 
After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World which is : http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name] Or to test the injection of a pagelet into iframe you can click on the pagelets "Documentation" sub-node and use "Access Pagelet using REST" URL: This is what we will see: Clipping The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it. So let's clip it. Under the Hello_World pagelet node activate Clipper sub-node Click on "Create selected type" toolbar button at the top of the Navigator pane Specify a Name for newly created clipper. For example: "c1" Click on "Content" sub-node of the clipper Click on "Launch Clipper" button New browser window will open By moving a mouse pointer over the web page select the area you want to clip: Click left mouse button - the browser window will disappear and you will see that Clipping Path was automatically generated Now let's save and access the link from the "Documentation" page again Here's our pagelet nicely clipped and ready for being used on your Web Center Space
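
    If you want to check the clipped pagelet outside a browser, the same REST URL shown on the Documentation page can be fetched from the command line; a quick hedged check using the URL pattern from the steps above:

        curl -v "http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World"

    The response should contain the pagelet markup for the clipped "Hello World" fragment rather than the full source page.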

    Read the article

  • Why is Apache ignoring VirtualHost directive for first name in hosts file?

    - by Peter Taylor
    Standard pre-emptive disclaimer: host names, IP addresses, and directories are anonymised. Problem We have a server with Apache 2.2 (WAMP) listening on one IP and IIS listening on another. An ASP.Net application running under IIS needs to do some simple GETs from the PHP applications running under Apache to build a unified search results page. This is a virtual server, so the internal IPs are mapped somehow to external ones. The internal DNS system doesn't resolve the publicly published names under which the applications are accessed externally, so the obvious solution was to add them to etc/hosts with the internal IP address: 127.0.0.1 localhost # 10.0.1.17 is the IP address Apache listens on 10.0.1.17 phpappone.example.com 10.0.1.17 phpapptwo.example.com After restarting Apache, phpappone.example.com stopped working. Instead of returning pages from that app, Apache was returning pages from the default site. The other PHP apps worked fine. Relevant configuration httpd.conf, summarised, says: ServerAdmin admin@example.com ServerRoot "c:/server/Apache2" ServerName www.example.com Listen 10.0.1.17:80 Listen 10.0.1.17:443 # Not obviously related config options elided # Nothing obviously astandard # If you want more details, post a comment DocumentRoot "c:/server/Apache2/htdocs" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> # Fallback for unknown host names <Directory "c:/server/Apache2/htdocs"> Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> # PHP apps common config <Directory "C:/Inetpub/wwwroot/phpapps"> Options FollowSymLinks -Indexes +ExecCGI AllowOverride All Order Allow,Deny Allow from All </Directory> # Virtual hosts NameVirtualHost 10.0.1.17:80 NameVirtualHost 10.0.1.17:443 <VirtualHost _default_:80> </VirtualHost> <VirtualHost _default_:443> SSLEngine On SSLCertificateFile "certs/example.crt" SSLCertificateKeyFile "certs/example.key" </VirtualHost> Include conf/vhosts/*.conf and the vhosts files are e.g. <VirtualHost 10.0.1.17:80> ServerName phpappone.example.com DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone" </VirtualHost> <VirtualHost 10.0.1.17:443> ServerName phpappone.example.com DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone" SSLEngine On SSLCertificateFile "certs/example.crt" SSLCertificateKeyFile "certs/example.key" </VirtualHost> Buggy behaviour or our misunderstanding? The documentation for name-based virtual hosts says that Now when a request arrives, the server will first check if it is using an IP address that matches the NameVirtualHost. If it is, then it will look at each <VirtualHost> section with a matching IP address and try to find one where the ServerName or ServerAlias matches the requested hostname. If it finds one, then it uses the configuration for that server. If no matching virtual host is found, then the first listed virtual host that matches the IP address will be used. Yet that isn't what we observe. It seems that if the hostname is the first hostname listed against the IP address in etc/hosts then it uses the configuration from the main server and skips the virtual host lookup. Workarounds The workaround we've put in place for the time being is to add a fake line to the hosts file: 127.0.0.1 localhost # 10.0.1.17 is the IP address Apache listens on 10.0.1.17 fakename.example.com 10.0.1.17 phpappone.example.com 10.0.1.17 phpapptwo.example.com This fixes the problem, but it's not very elegant. 
In addition, it seems a bit brittle: reordering lines in the hosts file (or deleting the nonsense value) can break it. The other obvious workaround is to make the main server configuration match that of the troublesome virtual host, but that is equally brittle. A third option, which is just ugly, would be to change the ASP.Net code to take separate config items for the IP address and the hostname and to implement HTTP manually. Ugh. The question Is there a good solution to this problem which localises any "Do not touch this!" explanations to the Apache config files?
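
    One possibility, hedged because the behaviour described already contradicts the documented lookup order, is to stop relying on the main-server section as the fallback at all and declare the fallback as an explicit, first-included virtual host, so that every request on 10.0.1.17 is answered by some <VirtualHost> block regardless of what the hosts file says:

        # Included before conf/vhosts/*.conf, replacing the empty _default_ blocks
        <VirtualHost 10.0.1.17:80>
            ServerName www.example.com
            DocumentRoot "c:/server/Apache2/htdocs"
        </VirtualHost>

    That keeps the "Do not touch this!" surface down to one small block next to the existing NameVirtualHost lines, rather than a decoy entry in etc/hosts.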

    Read the article

  • Can't connect to certain HTTPS sites

    - by mind.blank
    I've just moved to a new apartment and with internet connection via a router and I'm finding that I can't connect to quite a few sites that use SSL. For example trying to connect to PayPal: curl -v https://paypal.com * About to connect() to paypal.com port 443 (#0) * Trying 66.211.169.3... connected * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * Unknown SSL protocol error in connection to paypal.com:443 * Closing connection #0 curl: (35) Unknown SSL protocol error in connection to paypal.com:443 curl -v -ssl https://paypal.com gives the same output. For some sites it works: curl -v https://www.google.com * About to connect() to www.google.com port 443 (#0) * Trying 74.125.235.112... connected * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server key exchange (12): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using ECDHE-RSA-RC4-SHA * Server certificate: * subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=www.google.com * start date: 2011-10-26 00:00:00 GMT * expire date: 2013-09-30 23:59:59 GMT * common name: www.google.com (matched) * issuer: C=ZA; O=Thawte Consulting (Pty) Ltd.; CN=Thawte SGC CA * SSL certificate verify ok. > GET / HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Host: www.google.com > Accept: */* > < HTTP/1.1 302 Found < Location: https://www.google.co.jp/ . . . I'm using Ubuntu 12.04, with Windows 7 installed as well. 
These sites work on Windows :( Not sure if this information helps but I ran ifconfig and got the following: eth0 Link encap:Ethernet HWaddr 1c:c1:de:bc:e2:4f inet6 addr: 2408:c3:7fff:991:686b:8d18:81b3:8dd1/64 Scope:Global inet6 addr: 2408:c3:7fff:991:1ec1:deff:febc:e24f/64 Scope:Global inet6 addr: fe80::1ec1:deff:febc:e24f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:87075 errors:0 dropped:0 overruns:0 frame:0 TX packets:54522 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:78167937 (78.1 MB) TX bytes:10016891 (10.0 MB) Interrupt:46 Base address:0x4000 eth1 Link encap:Ethernet HWaddr ac:81:12:0d:93:80 inet6 addr: fe80::ae81:12ff:fe0d:9380/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:498 TX packets:0 errors:26 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:630 errors:0 dropped:0 overruns:0 frame:0 TX packets:630 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:39592 (39.5 KB) TX bytes:39592 (39.5 KB) ppp0 Link encap:Point-to-Point Protocol inet addr:180.57.228.200 P-t-P:118.23.8.175 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1 RX packets:39631 errors:0 dropped:0 overruns:0 frame:0 TX packets:22391 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:3 RX bytes:43462054 (43.4 MB) TX bytes:2834628 (2.8 MB)
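
    To narrow down where the handshake dies, it can help to force specific protocol versions from the same machine and compare the results; these options exist in curl 7.22 and in openssl s_client, though this only localises the failure — the cause could just as well be somewhere on the new network path as in the TLS stack:

        curl -v --tlsv1 https://paypal.com
        curl -v --sslv3 https://paypal.com

        # the same probe without curl
        openssl s_client -connect paypal.com:443 -tls1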

    Read the article

  • 12c - Invisible Columns...

    - by noreply(at)blogger.com (Thomas Kyte)
    Remember when 11g first came out and we had "invisible indexes"?  It seemed like a confusing feature - indexes that would be maintained by modifications (hence slowing them down), but would not be used by queries (hence never speeding them up).  But - after you looked at them a while, you could see how they can be useful.  For example - to add an index in a running production system, an index used by the next version of the code to be introduced later that week - but not tested against the queries in version one of the application in place now.  We all know that when you add an index - one of three things can happen - a given query will go much faster, it won't affect a given query at all, or... It will make some untested query go much much slower than it used to.  So - invisible indexes allowed us to modify the schema in a 'safe' manner - hiding the change until we were ready for it.Invisible columns accomplish the same thing - the ability to introduce a change while minimizing any negative side effects of that change.  Normally when you add a column to a table - any program with a SELECT * would start seeing that column, and programs with an INSERT INTO T VALUES (...) would pretty much immediately break (an INSERT without a list of columns in it).  Now we can add a column to a table in an invisible fashion, the column will not show up in a DESCRIBE command in SQL*Plus, it will not be returned with a SELECT *, it will not be considered in an INSERT INTO T VALUES statement.  It can be accessed by any query that asks for it, it can be populated by an INSERT statement that references it, but you won't see it otherwise.For example, let's start with a simple two column table:ops$tkyte%ORA12CR1> create table t  2  ( x int,  3    y int  4  )  5  /Table created.ops$tkyte%ORA12CR1> insert into t values ( 1, 2 );1 row created.Now, we will add an invisible column to it:ops$tkyte%ORA12CR1> alter table t add                     ( z int INVISIBLE );Table altered.Notice that a DESCRIBE will not show us this column:ops$tkyte%ORA12CR1> desc t Name              Null?    Type ----------------- -------- ------------ X                          NUMBER(38) Y                          NUMBER(38)and existing inserts are unaffected by it:ops$tkyte%ORA12CR1> insert into t values ( 3, 4 );1 row created.A SELECT * won't see it either:ops$tkyte%ORA12CR1> select * from t;         X          Y---------- ----------         1          2         3          4But we have full access to it (in well written programs! The ones that use a column list in the insert and select - never relying on "defaults":ops$tkyte%ORA12CR1> insert into t (x,y,z)                         values ( 5,6,7 );1 row created.ops$tkyte%ORA12CR1> select x, y, z from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7and when we are sure that we are ready to go with this column, we can just modify it:ops$tkyte%ORA12CR1> alter table t modify z visible;Table altered.ops$tkyte%ORA12CR1> select * from t;         X          Y          Z---------- ---------- ----------         1          2         3          4         5          6          7I will say that a better approach to this - one that is available in 11gR2 and above - would be to use editioning views (part of Edition Based Redefinition - EBR ).  
I would rather use EBR over this approach, but in an environment where EBR is not being used, or the editioning views are not in place, this will achieve much the same.Read these for information on EBR:http://www.oracle.com/technetwork/issue-archive/2010/10-jan/o10asktom-172777.htmlhttp://www.oracle.com/technetwork/issue-archive/2010/10-mar/o20asktom-098897.htmlhttp://www.oracle.com/technetwork/issue-archive/2010/10-may/o30asktom-082672.html

    Read the article

  • Puppet's automatically generated certificates failing

    - by gparent
    I am running a default configuration of Puppet on Debian Squeeze 6.0.4. The server's FQDN is master.example.com. The client's FQDN is client.example.com. I am able to contact the puppet master and send a CSR. I sign it using puppetca -sa but the client will still not connect. Date of both machines is within 2 seconds of Tue Apr 3 20:59:00 UTC 2012 as I wrote this sentence. This is what appears in /var/log/syslog: Apr 3 17:03:52 localhost puppet-agent[18653]: Reopening log files Apr 3 17:03:52 localhost puppet-agent[18653]: Starting Puppet client version 2.6.2 Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Apr 3 17:03:53 localhost puppet-agent[18653]: Using cached catalog Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog; skipping run Here is some interesting output: OpenSSL client test: client:~# openssl s_client -host master.example.com -port 8140 -cert /var/lib/puppet/ssl/certs/client.example.com.pem -key /var/lib/puppet/ssl/private_keys/client.example.com.pem -CAfile /var/lib/puppet/ssl/certs/ca.pem CONNECTED(00000003) depth=1 /CN=Puppet CA: master.example.com verify return:1 depth=0 /CN=master.example.com verify error:num=7:certificate signature failure verify return:1 depth=0 /CN=master.example.com verify return:1 18509:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:s3_pkt.c:1102:SSL alert number 51 18509:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: client:~# master's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/master.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:28 2012 GMT Not After : Apr 2 20:01:28 2017 GMT Subject: CN=master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a9:c1:f9:4c:cd:0f:68:84:7b:f4:93:16:20:44: 7a:2b:05:8e:57:31:05:8e:9c:c8:08:68:73:71:39: c1:86:6a:59:93:6e:53:aa:43:11:83:5b:2d:8c:7d: 54:05:65:c1:e1:0e:94:4a:f0:86:58:c3:3d:4f:f3: 7d:bd:8e:29:58:a6:36:f4:3e:b2:61:ec:53:b5:38: 8e:84:ac:5f:a3:e3:8c:39:bd:cf:4f:3c:ff:a9:65: 09:66:3c:ba:10:14:69:d5:07:57:06:28:02:37:be: 03:82:fb:90:8b:7d:b3:a5:33:7b:9b:3a:42:51:12: b3:ac:dd:d5:58:69:a9:8a:ed Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 7b:2c:4f:c2:76:38:ab:03:7f:c6:54:d9:78:1d:ab:6c:45:ab: 47:02:c7:fd:45:4e:ab:b5:b6:d9:a7:df:44:72:55:0c:a5:d0: 86:58:14:ae:5f:6f:ea:87:4d:78:e4:39:4d:20:7e:3d:6d:e9: e2:5e:d7:c9:3c:27:43:a4:29:44:85:a1:63:df:2f:55:a9:6a: 72:46:d8:fb:c7:cc:ca:43:e7:e1:2c:fe:55:2a:0d:17:76:d4: e5:49:8b:85:9f:fa:0e:f6:cc:e8:28:3e:8b:47:b0:e1:02:f0: 3d:73:3e:99:65:3b:91:32:c5:ce:e4:86:21:b2:e0:b4:15:b5: 22:63 root@master:/etc/puppet# CA's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption 
Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:05 2012 GMT Not After : Apr 2 20:01:05 2017 GMT Subject: CN=Puppet CA: master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:b5:2c:3e:26:a3:ae:43:b8:ed:1e:ef:4d:a1:1e: 82:77:78:c2:98:3f:e2:e0:05:57:f0:8d:80:09:36: 62:be:6c:1a:21:43:59:1d:e9:b9:4d:e0:9c:fa:09: aa:12:a1:82:58:fc:47:31:ed:ad:ad:73:01:26:97: ef:d2:d6:41:6b:85:3b:af:70:00:b9:63:e9:1b:c3: ce:57:6d:95:0e:a6:d2:64:bd:1f:2c:1f:5c:26:8e: 02:fd:d3:28:9e:e9:8f:bc:46:bb:dd:25:db:39:57: 81:ed:e5:c8:1f:3d:ca:39:cf:e7:f3:63:75:f6:15: 1f:d4:71:56:ed:84:50:fb:5d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:TRUE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Certificate Sign, CRL Sign X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C Signature Algorithm: sha1WithRSAEncryption 1d:cd:c6:65:32:42:a5:01:62:46:87:10:da:74:7e:8b:c8:c9: 86:32:9e:c2:2e:c1:fd:00:79:f0:ef:d8:73:dd:7e:1b:1a:3f: cc:64:da:a3:38:ad:49:4e:c8:4d:e3:09:ba:bc:66:f2:6f:63: 9a:48:19:2d:27:5b:1d:2a:69:bf:4f:f4:e0:67:5e:66:84:30: e5:85:f4:49:6e:d0:92:ae:66:77:50:cf:45:c0:29:b2:64:87: 12:09:d3:10:4d:91:b6:f3:63:c4:26:b3:fa:94:2b:96:18:1f: 9b:a9:53:74:de:9c:73:a4:3a:8d:bf:fa:9c:c0:42:9d:78:49: 4d:70 root@master:/etc/puppet# Client's certificate: client:~# openssl x509 -text -noout -in /var/lib/puppet/ssl/certs/client.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:36 2012 GMT Not After : Apr 2 20:01:36 2017 GMT Subject: CN=client.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:ae:88:6d:9b:e3:b1:fc:47:07:d6:bf:ea:53:d1: 14:14:9b:35:e6:70:43:e0:58:35:76:ac:c5:9d:86: 02:fd:77:28:fc:93:34:65:9d:dd:0b:ea:21:14:4d: 8a:95:2e:28:c9:a5:8d:a2:2c:0e:1c:a0:4c:fa:03: e5:aa:d3:97:98:05:59:3c:82:a9:7c:0e:e9:df:fd: 48:81:dc:33:dc:88:e9:09:e4:19:d6:e4:7b:92:33: 31:73:e4:f2:9c:42:75:b2:e1:9f:d9:49:8c:a7:eb: fa:7d:cb:62:22:90:1c:37:3a:40:95:a7:a0:3b:ad: 8e:12:7c:6e:ad:04:94:ed:47 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 33:1f:ec:3c:91:5a:eb:c6:03:5f:a1:58:60:c3:41:ed:1f:fe: cb:b2:40:11:63:4d:ba:18:8a:8b:62:ba:ab:61:f5:a0:6c:0e: 8a:20:56:7b:10:a1:f9:1d:51:49:af:70:3a:05:f9:27:4a:25: d4:e6:88:26:f7:26:e0:20:30:2a:20:1d:c4:d3:26:f1:99:cf: 47:2e:73:90:bd:9c:88:bf:67:9e:dd:7c:0e:3a:86:6b:0b:8d: 39:0f:db:66:c0:b6:20:c3:34:84:0e:d8:3b:fc:1c:a8:6c:6c: b1:19:76:65:e6:22:3c:bf:ff:1c:74:bb:62:a0:46:02:95:fa: 83:41 client:~#
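
    When an agent's certificate fails verification like this even after signing, a common (if blunt) recovery step is to revoke the old certificate and let the agent request a fresh one; a hedged sketch using the same puppetca tool as above:

        # on the master: remove the old client certificate
        puppetca --clean client.example.com

        # on the agent: wipe local SSL state and request a new certificate
        rm -r /var/lib/puppet/ssl
        puppet agent --test --waitforcert 60

        # back on the master, if autosigning is off
        puppetca --sign client.example.com

    It is also worth noting that the openssl s_client output above reports "certificate signature failure" for the master's own certificate, which often means the CA on the master was regenerated after that certificate was issued; in that case the master's certificate has to be re-issued from the current CA before any agent certificate will verify.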

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger, but works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?
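
    As a purely diagnostic step (not a fix), the same auth.conf directives shown above can be used to temporarily allow everything and confirm where the denial comes from — if the run succeeds with this entry near the top of auth.conf, the ACLs are fine and the real problem is that the client's certificate identity is not reaching Puppet through Nginx/Passenger, so requests fall back to the bare IP (which matches the "defaulting to no access for 10.209.47.31" log line):

        # TEMPORARY diagnostic only -- remove immediately after testing
        path /
        auth any
        allow *

    Remember to restart the master (or touch Passenger's restart file) after editing auth.conf so the change is actually picked up.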

    Read the article

  • nginx reverse proxy cannot access apache virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs DirectAdmin and a LAMP stack. I have nginx running on port 81, and I can access all my sites (including the virtual IPs) on port 81. However, when I forward the traffic from port 80 to 81, the virtual IPs only show a page saying "Apache is running normally". Sites on the server IP are fine, and I can still access the virtual IPs directly on 81.

    Listening sockets:

    [root@~]# netstat -an | grep LISTEN | egrep ":80|:81"
    tcp    0    0 <virtual ip>:81    0.0.0.0:*    LISTEN
    tcp    0    0 <virtual ip>:81    0.0.0.0:*    LISTEN
    tcp    0    0 <serverip>:81      0.0.0.0:*    LISTEN
    tcp    0    0 :::80              :::*         LISTEN

    Process list (filtered for apache and nginx):

    apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct>
    apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
    root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx
    root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
    apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process
    apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process
    apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process

    Here is my nginx conf (/usr/local/nginx/conf/nginx.conf):

    user apache apache;
    worker_processes 2; # Set it according to what your CPU have. 4 Cores = 4
    worker_rlimit_nofile 8192;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        server_tokens off;
        access_log /var/log/nginx_access.log main;
        error_log /var/log/nginx_error.log debug;
        server_names_hash_bucket_size 64;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay off;
        keepalive_timeout 30;
        gzip on;
        gzip_comp_level 9;
        gzip_proxied any;
        proxy_buffering on;
        proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m;
        proxy_buffer_size 16k;
        proxy_buffers 100 8k;
        proxy_connect_timeout 60;
        proxy_send_timeout 60;
        proxy_read_timeout 60;

        server {
            listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here
            server_name <server host name> _; # "_" handles all hosts not described by server_name
            charset off;
            access_log /var/log/nginx_host_general.access.log main;
            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://<server ip>; # Real IP here
                client_max_body_size 16m;
                client_body_buffer_size 128k;
                proxy_buffering on;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 120;
                proxy_buffer_size 16k;
                proxy_buffers 32 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                deny all;
            }
        }
        include /usr/local/nginx/vhosts/*.conf;
    }

    Here is my vhost conf (/usr/local/nginx/vhosts/1.conf):

    server {
        listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here
        server_name <virt domain name>.com; # "_" handles all hosts not described by server_name
        charset off;
        access_log /var/log/nginx_host_general.access.log main;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://<virt ip>; # Real IP here
            client_max_body_size 16m;
            client_body_buffer_size 128k;
            proxy_buffering on;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 120;
            proxy_buffer_size 16k;
            proxy_buffers 32 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
    }

    Apache config:

    <VirtualHost xxxxxx:80>
        ServerName www.<domain>.com
        ServerAlias www.<domain>.com <domain>.com
        ServerAdmin webmaster@<domain>.com
        DocumentRoot /home/<domain>/domains/<domain>.com/public_html
        ScriptAlias /cgi-bin/ /home/<domain>/domains/<domain>.com/public_html/cgi-bin/
        UseCanonicalName OFF
        <IfModule !mod_ruid2.c>
            SuexecUserGroup <domain> <domain>
        </IfModule>
        <IfModule mod_ruid2.c>
            RMode config
            RUidGid <domain> <domain>
            RGroups apache access
        </IfModule>
        CustomLog /var/log/httpd/domains/<domain>.com.bytes bytes
        CustomLog /var/log/httpd/domains/<domain>.com.log combined
        ErrorLog /var/log/httpd/domains/<domain>.com.error.log
        <Directory /home/<domain>/domains/<domain>.com/public_html>
            Options +Includes -Indexes
            php_admin_flag engine ON
            php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f <domain>@<domain>.com'
        </Directory>
    </VirtualHost>

    Apache virtual host summary:

    <virtual ip address>:80 is a NameVirtualHost
        default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16)
        port 80 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:107)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:151)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:195)
    <virtual ip address>:443 is a NameVirtualHost
        default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61)
        port 443 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61)
    <server ip>:80 is a NameVirtualHost
        default server localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29)
        port 80 namevhost localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29)
        port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/admin/httpd.conf:16)
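
    The post does not show how the port 80 to 81 forward is done, so the sketch below is an assumption for illustration only (the rules and IP placeholders are not taken from the original setup). Which mechanism is used matters here, because the nginx server blocks above are each bound to a specific IP on port 81:

    # Hedged sketch -- assumes iptables handles the 80 -> 81 forward.
    # REDIRECT rewrites the destination address to the primary address of the
    # incoming interface, so traffic aimed at a virtual IP can land on the
    # <server ip>:81 default server block instead of the per-IP block:
    # iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 81

    # One DNAT rule per IP keeps each site on its own nginx listener:
    iptables -t nat -A PREROUTING -d <server ip> -p tcp --dport 80 -j DNAT --to-destination <server ip>:81
    iptables -t nat -A PREROUTING -d <virt ip>   -p tcp --dport 80 -j DNAT --to-destination <virt ip>:81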

    Read the article

  • How to implement multi-source XSLT mapping in 11g BPEL

    - by [email protected]
    In SOA 11g, you can create an XSLT mapper that uses multiple sources as its input. To implement a multi-source mapper, follow the instructions below:
    - Drag and drop a Transform Activity onto a BPEL process.
    - Double-click the Transform Activity; the Transform dialog window appears.
    - Add source variables by clicking the Add icon and selecting the variable and the part of the variable as needed. You can select multiple input variables. The first variable represents the main XML input to the XSL mapping, while additional variables added here are defined in the XSL mapping as input parameters.
    - Select the target variable and its part if available.
    - Specify the mapper file name; the default is xsl/Transformation_%SEQ%.xsl, where %SEQ% represents the sequence number of the mapper.
    - Click OK; the XSL file opens in graphical mode and you can map the sources to the target as usual.

    Open the mapper source code and you will notice that the variable representing the additional source payload is defined as an input parameter in the map source spec and body:

    <mapSources>
      <source type="XSD">
        <schema location="../xsd/po.xsd"/>
        <rootElement name="PurchaseOrder" namespace="http://www.oracle.com/pcbpel/po"/>
      </source>
      <source type="XSD">
        <schema location="../xsd/customer.xsd"/>
        <rootElement name="Customer" namespace="http://www.oracle.com/pcbpel/Customer"/>
        <param name="v_customer" />
      </source>
    </mapSources>
    ...
    <xsl:param name="v_customer"/>

    Let's take a look at the BPEL source code used to execute the XSLT mapper:

    <assign name="Transform_1">
      <bpelx:annotation>
        <bpelx:pattern>transformation</bpelx:pattern>
      </bpelx:annotation>
      <copy>
        <from expression="ora:doXSLTransformForDoc('xsl/Transformation_1.xsl', bpws:getVariableData('v_po'), 'v_customer', bpws:getVariableData('v_customer'))"/>
        <to variable="v_invoice"/>
      </copy>
    </assign>

    You will see that BPEL uses the ora:doXSLTransformForDoc XPath function to execute the XSLT mapper. This function returns the result of applying the XSLT template to the document. The signature of this function is ora:doXSLTransformForDoc(template, input, [paramQName, paramValue]*), where:
    - template is the XSLT mapper name
    - input is the string representation of the XML input
    - paramQName is the parameter defined in the XSLT mapper as the additional source
    - paramValue is the additional source payload

    You can add more sources to the mapper at a later stage, but you then have to modify the ora:doXSLTransformForDoc call in the BPEL source code and make sure it passes the correct parameter/value pairs reflecting the changes in the XSLT mapper. So the best practices are: create the variables before creating the mapping file, so that you can add multiple sources when you define the transformation in the first place (which is more straightforward than adding them later on), and review the ora:doXSLTransformForDoc call in the BPEL source to make sure it passes the correct parameters to the mapper.
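
    To make the parameter usage concrete, here is a minimal hand-written sketch of what such a mapping might contain. The target element Invoice and the child elements Total, CustomerName and Name are hypothetical examples, not taken from the article; only the namespaces and the v_customer parameter come from the snippets above:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:po="http://www.oracle.com/pcbpel/po"
                    xmlns:cust="http://www.oracle.com/pcbpel/Customer">
      <!-- second source, passed in via ora:doXSLTransformForDoc -->
      <xsl:param name="v_customer"/>
      <!-- main source: the PurchaseOrder payload -->
      <xsl:template match="/po:PurchaseOrder">
        <Invoice>
          <Total>
            <xsl:value-of select="po:Total"/>
          </Total>
          <!-- the parameter is navigated just like a document -->
          <CustomerName>
            <xsl:value-of select="$v_customer/cust:Customer/cust:Name"/>
          </CustomerName>
        </Invoice>
      </xsl:template>
    </xsl:stylesheet>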

    Read the article

  • Opening of the « Virtualisation » section: find all the resources on virtualization and its various tools

    Launch of the new Virtualization section on Développez.com. Find all the resources of the IT professionals' club. In response to the ever-growing demand from IT professionals around the new virtualization technologies, we are pleased to announce the opening of the new Virtualization section on Développez.com. A single address will now hold all the news, tutorials, debates and book reviews on virtualization in general and, of course, all the resources on the flagship virtualization tools such as VMware, Oracle's VirtualBox or Microsoft's Hyper-V...

    Read the article

  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we would test our ADF pages that use the BAM Data Control using the integrated WLS server (ADRS). If we have to deploy the same application to a standalone WLS, we have to make sure the BAM server connection is created in WLS; unless we do that, we may face runtime errors.

    In development mode of WLS (Reference): For development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file:

    -Djps.app.credential.overwrite.allowed=true

    In production mode of WLS:

    Enable MDS. Create and/or register your MDS repository (for more details refer to this). Edit adf-config.xml from your application and add the following tag:

    <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
      <mds-config version="11.1.1.000">
        <persistence-config>
          <metadata-store-usages>
            <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
            </metadata-store-usage>
          </metadata-store-usages>
        </persistence-config>
      </mds-config>
    </adf-mds-config>

    Deploy the application to the WLS server, picking the appropriate repository from the MDS Repository dialog that pops up during deployment.

    Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper):
    - Go to EM (http://<host>:<port>/EM)
    - In the left pane, under Deployments, select Application1 (your application)
    - In the right pane, from the top dropdown select "System Mbean Browser -> oracle.adf.share.connections -> Server: AdminServer -> Server: AdminServer -> Application: <Appname> -> ADFConnections"
    - In the right pane click "Operations -> CreateConnection"
    - Enter the Connection Type as "BAMConnection"
    - Enter the connection name, the same as the one defined in JDev
    - Click "Invoke", then click "Return"
    - Click "Operation -> Save"
    - Now, under ADFConnections in the navigator, select the connection just created and enter all the configuration details
    - Save and run the page

    Enterprise Manager (use these steps, or the steps above, if using 11gR1 PS1 or newer):
    - Go to EM (http://<host>:<port>/EM)
    - In the left pane, under Deployments, select Application1 (your application)
    - In the right pane, click on "Application Deployment" to invoke the dropdown and select "ADF -> Configure ADF Connections"
    - Select the Connection Type "BAM" from the drop down
    - Enter the Connection Name, the same as the one defined in JDev
    - Click on "Create Connection"; this should add a new row below under "BAM Connections"
    - Select the new connection and click on the "Edit" icon; this should bring up a dialog
    - Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port and BAM Webtier Protocol - and then click OK to dismiss the dialog
    - Click on "Apply"
    - Run the page
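
    As an illustration of the development-mode switch above, here is a hedged sketch of what the edit to setDomainEnv.sh could look like. The exact place where JAVA_PROPERTIES is assembled differs between installations, so treat this as an assumption rather than the literal file contents:

    # <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh
    # Development mode only: allow credential overwrite so user names and
    # passwords can be tested against the BAM connection.
    JAVA_PROPERTIES="${JAVA_PROPERTIES} -Djps.app.credential.overwrite.allowed=true"
    export JAVA_PROPERTIES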

    Read the article

  • Ruby Best Practices, by Gregory T Brown, reviewed by Idelways

    Idelways brings you a review of the book "Ruby Best Practices: Increase Your Productivity - Write Better Code" [IMG]http://images-eu.amazon.com/images/P/0596523009.01.LZZZZZZZ.jpg[/IMG] Quote: Ruby Best Practices is for programmers who want to use Ruby the way Rubyists do. Written by the developer of the Ruby project Prawn (prawn.majesticseacreature.com), this concise book explains how to design beautiful APIs and domain-specific languages, w...

    Read the article

  • Flex 4 and Flash Builder 4 have arrived: many new features and improved performance

    Flex 4 and Flash Builder 4 available. On March 22, 2010, Michaël Chaize announced on his blog the arrival of Flex 4 and Flash Builder 4 (formerly named Adobe Flex Builder). Improvements to the Flex 4 SDK:
    * New Spark component architecture: http://www.adobe.com/devnet/flex/art...parkintro.html
    * Skinning: http://www.adobe.com/devnet/flex/art..._skinning.html
    * Layouts: http://www.adobe...

    Read the article

  • Internships at Oracle – a truly multicultural experience!

    - by cristian.condurache(at)oracle.com
    Hello everybody!!! Our names are Lena and Laura. We both study at the same Grande Ecole in France, IPAG, and we are about to complete our 16-week internship at Oracle in the UK. Below is a summary of our experience!

    My name is Lena. I am 20 years old and joined Oracle UK in September 2010 – more specifically, I joined the EMEA Graduate Recruitment Team (EMEA stands for Europe, Middle East and Africa), and I have learned a lot about working life. It was a really good experience, which made me realize that in less than 3 years I will be looking for a full-time job at a company. I am glad to have had this first experience at Oracle, first of all because it's a very welcoming company which treats interns as employees and gives them the opportunity to show their potential. I also discovered that it is nice to work in a company where everybody knows everybody, and where the atmosphere is really good. The multicultural aspect is one of the most important and beautiful elements of Oracle. It gives you the opportunity to have contacts in many parts of the world and discover a lot of nice people. During my internship I learned a lot about recruitment, and I discovered I want to work in a Human Resources role after I graduate. I like the contact I will have with candidates and the fact that I have to be in touch with managers and understand their needs. I would be glad to work for the company in the near future. I would like to thank all my team members for welcoming me like they did. It was a real pleasure to share this experience at Oracle and in this team, and I hope to return after I graduate.

    Hi all! I am Laura. My wish for this internship was to focus on training of personal skills for employees and, at the same time of course, on the company's development... and I did it in the OTD team (EMEA Organization Talent Development Team). I could not have done anything better than this! It was truly instructive. I learnt how to work in such a big international company, the values and the rules to follow, and how to interact and be part of the organisation. In Oracle there are so many different aspects to every department, and so many possibilities in HR as well as in Finance or Sales... The jobs are very varied, and the employees' cultures are also really different thanks to this international and multicultural company. I am working with OTD for the entire EMEA region, with many of my colleagues in other countries, with other cultures, other ways to work, and other ways to think... this is so inspiring! Oracle offers the best environment to learn about a job, as well as to learn about work life in such large companies. This company is about new technologies; it always moves fast, and everything changes quickly! You have to be aware of these changes and keep track of the wishes of customers. For OTD, of course, these customers are the employees. Looking back, I have learnt more than I would ever have thought, and I know that this is what I want to do... And now I hope to come back again! I want to thank all my team for welcoming me and integrating me with such happiness. I will truly miss them!!

    If you have any questions related to this article feel free to contact annapia.racanelli@oracle.com. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: Oracle,EMEA,Recruitment,internship,ODT,team

    Read the article

  • Tell me your e-mail address and I'll tell you your level of computer knowledge

    Tell me your e-mail address and I'll tell you your level of computer knowledge. Whether or not we are customers of an ISP, almost all of us have at least one e-mail address. The satirical blog The Oatmeal has set out to analyze an Internet user's personality based on the e-mail address they created for themselves. Here are the supposed character traits of the owners of these electronic mailboxes. If you use: - Your own domain (for example robert@martin.com or [email protected]): there is a good chance that you are very good with computers and...

    Read the article

  • Windows Phone 7 - Développez avec Visual Studio, Silverlight et XNA, reviewed by Jérôme Lambert

    Hello, the Developpez.com editorial team has read the following book for you: Windows Phone 7 - Développez avec Visual Studio, Silverlight et XNA by Julien CORIOLAND, Léonard LABAT and Florent SANTIN, published by Editions ENI. [IMG]http://images-eu.amazon.com/images/P/2746061317.08.MZZZZZZZ.jpg[/IMG] Read the full review. Have you read it? Do you plan to read it soon? What is your opinion?...

    Read the article
