Search Results

Search found 60030 results on 2402 pages for 'mark rosenberg(at)oracle com'.

Page 167/2402 | < Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >

  • apache2 namevirtualhost resolving wrong site

    - by joe
    Running apache 2.2.6. I'm setting up a development environment. dev and production will be hosted on the same machine, same IP address. DNS entries like prod.domain.com and dev.domain.com point to the same IP. * Imprortant: it is required that dev and prod are otherwise completely separate. Each will run it's own apache instance. Each will use it's own apache configuration. Each, prod and dev, will host http and https. I have this set up and working, but not as restrictive as I'd like. For instance, the production config: NameVirtualHost *:80 NameVirtualHost *:443 <VirtualHost *:80 > ServerName prod.domain.com # ... etc </VirtualHost> <VirtualHost *:443 > ServerName prod.domain.com # ... etc </VirtualHost> The dev site is set up similarly, using ports 8080 and 4443. Each site works fine. But assuming both apaches are running, one can also hit "cross-site" by mistake. So, inadvertently hitting prod.domain.com:8080 successfully returns a page from the dev site. It would be much better if this failed completely. This is a bit more difficult to solve (for me) because of the need for two apache configs. If all in one, the single process would have full knowledge of everything. So, I tried to solve this with brute force, including virtual hosts for the "other" site, with something that would fail, like no access to documentroot. But apache then inexplicably finds the "wrong" virtual host. Here's the full config for production, with the dummy dev configs. NameVirtualHost *:80 NameVirtualHost *:443 # ---------------------------------------------- # DUMMY HOSTS <VirtualHost *:8080 > ServerName dev.domain.com:8080 DocumentRoot /tmp/ <Directory /tmp/ > Order deny,allow Deny from all </Directory> </VirtualHost> <VirtualHost *:4443 > ServerName dev.domain.com:4443 DocumentRoot /tmp/ <Directory /tmp/ > Order deny,allow Deny from all </Directory> </VirtualHost> # ---------------------------------------------- # REAL PRODUCTION HOSTS <VirtualHost *:80 > ServerName prod.domain.com:80 DocumentRoot /something/valid/ <Directory /something/valid/> Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *:443 > ServerName prod.domain.com:443 DocumentRoot /something/valid/ <Directory /something/valid/> Order allow,deny Allow from all </Directory> # .... other valid ssl setup </VirtualHost> Here's the strange thing. With this configuration, a prod.domain.com:80 hit succeeds. But a prod.domain.com:443 hit fails, because it finds the dev.domain.com:4443 instead. I've also tried removing the port from the ServerName, but it still doesn't work. Sorry for the long question. Hopefully this is enough information. Thanks in advance for any help.

    Read the article

  • How could I stop ssh offering a wrong key?

    - by Alvaro Maceda
    (This is a problem with ssh, not gitolite) I've configured gitolite on my home server (ubuntu 12.04 server, open-ssh). I want an special identityfile to administer the repositories, so I need to access throught ssh to my own host ussing two different identity keys. This is the content of my .ssh/config file: Host gitadmin.gammu.com User git IdentityFile /home/alvaro/.ssh/id_gitolite_mantra Host git.gammu.com User git IdentityFile /home/alvaro/.ssh/id_alvaro_mantra This is the content of my hosts file: # Git 127.0.0.1 gitadmin.gammu.com 127.0.0.1 git.gammu.com So I should be able to communicate with gitolite this way to access with the "normal" account: $ssh git.gammu.com and this way to access with the administrative account: $ssh gitadmin.gammu.com When I try to access with the normal account, all is ok: alvaro@mantra:~/.ssh$ ssh git.gammu.com PTY allocation request failed on channel 0 hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5 the gitolite config gives you the following access: @R_ @W_ testing Connection to git.gammu.com closed. When I do the same with the administrative account: alvaro@mantra:~$ ssh gitadmin.gammu.com PTY allocation request failed on channel 0 hello alvaro, this is gitolite 2.2-1 (Debian) running on git 1.7.9.5 the gitolite config gives you the following access: @R_ @W_ testing Connection to gitadmin.gammu.com closed. It should show the administrative repository. If I launch ssh with verbose option: ssh -vvv gitadmin.gammu.com ... debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7f7cb6c0fbc0) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7f7cb6c044d0) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 ... It's offering the key id_alvaro_mantra, and it should'nt!! The same happens when I specify the key with the -i option: ssh -i /home/alvaro/.ssh/id_gitolite_mantra -vvv gitadmin.gammu.com ... 
debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/alvaro/.ssh/id_alvaro_mantra (0x7fa365237f90) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365230550) debug2: key: /home/alvaro/.ssh/id_gitolite_mantra (0x7fa365231050) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/alvaro/.ssh/id_alvaro_mantra debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26 debug3: sign_and_send_pubkey: RSA 36:b1:43:36:af:4f:00:e5:e1:39:50:7e:07:80:14:26 debug1: Authentication succeeded (publickey). ... What the hell is happening??? I'm missing something, but I can't find what. These are the contents of my home dir: -rw-rw-r-- 1 alvaro alvaro 395 nov 14 18:00 authorized_keys -rw-rw-r-- 1 alvaro alvaro 326 nov 21 10:21 config -rw------- 1 alvaro alvaro 137 nov 20 20:26 environment -rw------- 1 alvaro alvaro 1766 nov 20 21:41 id_alvaromaceda.es -rw-r--r-- 1 alvaro alvaro 404 nov 20 21:41 id_alvaromaceda.es.pub -rw------- 1 alvaro alvaro 1766 nov 14 17:59 id_alvaro_mantra -rw-r--r-- 1 alvaro alvaro 395 nov 14 17:59 id_alvaro_mantra.pub -rw------- 1 alvaro alvaro 771 nov 14 18:03 id_developer_mantra -rw------- 1 alvaro alvaro 1679 nov 20 12:37 id_dos_pruebasgit -rw-r--r-- 1 alvaro alvaro 395 nov 20 12:37 id_dos_pruebasgit.pub -rw------- 1 alvaro alvaro 1679 nov 20 12:46 id_gitolite_mantra -rw-r--r-- 1 alvaro alvaro 397 nov 20 12:46 id_gitolite_mantra.pub -rw------- 1 alvaro alvaro 1675 nov 20 21:44 id_gitpruebas.es -rw-r--r-- 1 alvaro alvaro 408 nov 20 21:44 id_gitpruebas.es.pub -rw------- 1 alvaro alvaro 1679 nov 20 12:34 id_uno_pruebasgit -rw-r--r-- 1 alvaro alvaro 395 nov 20 12:34 id_uno_pruebasgit.pub -rw-r--r-- 1 alvaro alvaro 2434 nov 21 10:11 known_hosts There are a bunch of other keys which aren't offered... why id_alvaro_mantra is offered and not the other keys? I can't understand. I need some help, don't know where to look....

    Read the article

  • Troubleshooting unwanted NTP Traffic

    - by Jaxaeon
    A domain controller running Windows Server 2012 is sending NTP and NETBIOS traffic to an address that has never been configured as a time provider. The server logs give no indication that any NTP traffic is failing. The only place I see any evidence of this traffic is in pfSense system logs: (Blocked) Jun 9 08:48:50 DOMAIN 10.0.1.100:123 192.128.127.254:123 UDP (Blocked) Jun 9 08:48:53 DOMAIN 10.0.1.100:137 192.128.127.254:137 UDP As far as I can tell the NTP service is working normally otherwise: DC2.domain.com[10.0.1.101:123]: ICMP: 0ms delay NTP: -0.0131705s offset from DC1.domain.com RefID: DC1.domain.com [10.0.1.100] Stratum: 3 DC1.domain.com *** PDC ***[10.0.1.100:123]: ICMP: 0ms delay NTP: +0.0000000s offset from DC1.domain.com RefID: clock1.albyny.inoc.net [64.246.132.14] Stratum: 2 The time provider NtpClient is currently receiving valid time data from 1.pool.ntp.org,0×1 (ntp.m|0x0|0.0.0.0:123->204.2.134.163:123). The time provider NtpClient is currently receiving valid time data from 0.pool.ntp.org,0×1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123). The time service is now synchronizing the system time with the time source 0.pool.ntp.org,0×1 (ntp.m|0x0|0.0.0.0:123->64.246.132.14:123). I've been inside and out of the NTP configuration and cannot find any reason for this traffic. Reverse DNS points the destination address to nothing.attdns.com. pinging nothing.attdns.com from the domain controller in question leads to a response from loopback (127.0.0.2) which makes my head hurt. Any ideas? EDIT1: It should probably be noted that after a dns flush, nslookup 192.128.127.254 returns nothing.attdns.com. 192.128.127.254 is not present in domain.com DNS records. The attdns.com domain is not present in cached lookups. 127.in-addr.arpa is clean of any funkyness. EDIT2: The loopback ping response from nothing.attdns.com is possibly unrelated. Machines on other networks are also displaying this behavior. EDIT3: As mentioned in the comments, I tracked the problem network adapter back to my pfSense VM hosted in esxi 5.5 (I know shame on me for virtualizing a firewall). pfSense was configured to use DC1.domain.com as its primary time provider, but upon changing it back to pool.ntp.org the problem persists. pfSense logs give no indication of NTP misconfiguration. Everywhere I can think to look this VM is identified as 10.0.1.253, so I still have no idea why it’s sending NTP requests as 192.128… Since this firewall was a temporary solution to a problem that no longer exists so I am going to decommission it. EDIT4: The queries were coming from another machine sharing the same virtual adapter as the firewall. The machine has two local adapters: one for LAN, and the other for attached hardware that uses an Ethernet connection. That hardware sits in the the mystery subnet, and the machine is broadcasting NTP requests over both adapters.

    Read the article

  • Nginx + PHP - No input file specified for 1 server block. Other server block works fine

    - by F21
    I am running Ubuntu Desktop 12.04 with nginx 1.2.6. PHP is PHP-FPM 5.4.9. This is the relevant part of my nginx.conf: http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { server_name testapp.com; root /www/app/www/; index index.php index.html index.htm; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } server { listen 80 default_server; root /www index index.html index.php; location ~ \.php$ { fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } } Relevant bits from php-fpm.conf: ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot chdir = /www In my hosts file, I redirect 2 domains: testapp.com and test.com to 127.0.0.1. My web files are all stored in /www. From the above settings, if I visit test.com/phpinfo.php and test.com/app/www, everything works as expected and I get output from PHP. However, if I visit testapp.com, I get the dreaded No input file specified. error. So, at this point, I pull out the log files and have a look: 2012/12/19 16:00:53 [error] 12183#0: *17 FastCGI sent in stderr: "Unable to open primary script: /www/app/www/index.php (No such file or directory)" while reading response header from upstream, client: 127.0.0.1, server: testapp.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "testapp.com" This baffles me because I have checked again and again and /www/app/www/index.php definitely exists! This is also validated by the fact that test.com/app/www/index.php works which means the file exists and the permissions are correct. Why is this happening and what are the root causes of things breaking for just the testapp.com v-host? Just an update to my investigation: I have commented out chroot and chdir in php-fpm.conf to narrow down the problem If I remove the location ~ \.php$ block for testapp.com, then nginx will send me a bin file which contains the PHP code. This means that on nginx's side, things are fine. The problem is that something must be mangling the file paths when passing it to PHP-FPM. Having said that, it is quite strange that the default_server v-host works fine because its root is /www, where as things just won't work for the testapp.com v-host because the root is /www/app/www.

    Read the article

  • ServerAlias not working

    - by Janis Peisenieks
    I have a VPS, that I have configured to host multiple websites with name based hosting. It is all good while only using example.com, and www.example.com. It also works with example.net, but when I try example.net, it reverts to my default site configuration, which just shows my default (empty) index.html page. Here's the default file: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Here's a configuration for the example.com site: <VirtualHost *:80> ServerAdmin admin@example.com ServerName example.com ServerAlias www.example.com DocumentRoot /srv/www/example.com/public_html/ ErrorLog /srv/www/example.com/logs/error.log CustomLog /srv/www/example.com/logs/access.log combined <Directory /srv/www/example.com/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> And here is the config for the example.net site: <VirtualHost *:80> ServerAdmin [email protected] ServerName example.net ServerAlias www.example.net DocumentRoot /srv/www/example.net/public_html/ ErrorLog /srv/www/example.net/logs/error.log CustomLog /srv/www/example.net/logs/access.log combined <Directory /srv/www/example.net/public_html/> AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> Where could the problem be? I believe, that there is something going wrong with the ServerAlias property. Could it be because of the way the site's are built? Because example.com is a Joomla site, and example.net is a Zend Framework site. Just in case, I'll also insert the .htaccess files for example.net, since example.com has it's disabled: example.net: SetEnv APPLICATION_ENV development RewriteRule ^(browse|config).* - [L] ErrorDocument 500 /error-docs/500.shtml SetEnv CACHE_OFFSET 2678400 <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$"> Header set Expires "Fri, 25 Sep 2037 19:30:32 GMT" Header unset ETag FileETag None </FilesMatch> RewriteEngine On RewriteRule ^(adm|statistics) - [L] RewriteRule ^/public/(.*)$ http://example.net/$1 [R] RewriteRule ^(.*)$ public/$1 [L] Any help would be greatly appreciated! Edit So that my question is ABSOLUTELY clear: The problem is, that one site works with both www prefix as well as without it, and the second one does not. I would like to know how to enable the second site to work with www prefix as well.

    Read the article

  • Troubleshooting sudoers via ldap

    - by dafydd
    The good news is that I got sudoers via ldap working on Red Hat Directory Server. The package is sudo-1.7.2p1. I have some LDAP/Kerberos users in an LDAP group called wheel, and I have this entry in LDAP: # %wheel, SUDOers, example.com dn: cn=%wheel,ou=SUDOers,dc=example,dc=com cn: %wheel description: Members of group wheel have access to all privileges. objectClass: sudoRole objectClass: top sudoCommand: ALL sudoHost: ALL sudoUser: %wheel So, members of group wheel have administrative privileges via sudo. This has been tested and works fine. Now, I have this other sudo privilege set up to allow members of a group called Administrators to perform two commands as the non-root owner of those commands. # %Administrators, SUDOers, example.com dn: cn=%Administrators,ou=SUDOers,dc=example,dc=com sudoRunAsGroup: appGroup sudoRunAsUser: appOwner cn: %Administrators description: Allow members of the group Administrators to run various commands . objectClass: sudoRole objectClass: top sudoCommand: appStop sudoCommand: appStart sudoCommand: /path/to/appStop sudoCommand: /path/to/appStart sudoUser: %Administrators Unfortunately, members of Administrators are still refused permission to run appStart or appStop: -bash-3.2$ sudo /path/to/appStop [sudo] password for Aaron: Sorry, user Aaron is not allowed to execute '/path/to/appStop' as root on host.example.com. -bash-3.2$ sudo -u appOwner /path/to/appStop [sudo] password for Aaron: Sorry, user Aaron is not allowed to execute '/path/to/appStop' as appOwner on host.example.com. /var/log/secure shows me these two sets of messages for the two attempts: Oct 31 15:02:36 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron Oct 31 15:02:37 host sudo: pam_krb5[1508]: TGT verified using key for 'host/host.example.com@EXAMPLE.COM' Oct 31 15:02:37 host sudo: pam_krb5[1508]: authentication succeeds for 'Aaron' ([email protected]) Oct 31 15:02:37 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=root ; COMMAND=/path/to/appStop Oct 31 15:02:52 host sudo: pam_unix(sudo:auth): authentication failure; logname=Aaron uid=0 euid=0 tty=/dev/pts/3 ruser= rhost= user=Aaron Oct 31 15:02:52 host sudo: pam_krb5[1547]: TGT verified using key for 'host/host.example.com@EXAMPLE.COM' Oct 31 15:02:52 host sudo: pam_krb5[1547]: authentication succeeds for 'Aaron' ([email protected]) Oct 31 15:02:52 host sudo: Aaron : command not allowed ; TTY=pts/3 ; PWD=/auto/home/Aaron ; USER=appOwner; COMMAND=/path/to/appStop The questions: Does sudo have some sort of verbose or debug mode where I can actually watch it capture the sudoers privilege list and determine whether or not Aaron should have the privilege to run this command? (This question is probably independent of where the sudoers database is kept.) Does sudo work with some background mechanism that might have a log level I could turn up? Right now, I can't fix a problem I can't identify. Is this an LDAP search failure? Is this a group member matching failure? Identifying why the command fails will help me identify the fix... Next step: Recreate the privilege in /etc/sudoers, and see if it works locally... Cheers!

    Read the article

  • NGINX - CORS error affecting only Firefox

    - by wiherek
    this is an issue with Nginx that affects only firefox. I have this config: http://pastebin.com/q6Yeqxv9 upstream connect { server 127.0.0.1:8080; } server { server_name admin.example.com www.admin.example.com; listen 80; return 301 https://admin.example.com$request_uri; } server { listen 80; server_name ankieta.example.com www.ankieta.example.com; add_header Access-Control-Allow-Origin $http_origin; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE'; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; return 301 https://ankieta.example.com$request_uri; } server { server_name admin.example.com; listen 443 ssl; ssl_certificate /srv/ssl/14182263.pem; ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM; location / { proxy_pass http://connect; } } server { server_name ankieta.example.com; listen 443 ssl; ssl_certificate /srv/ssl/14182263.pem; ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM; root /srv/limesurvey; index index.php; add_header 'Access-Control-Allow-Origin' $http_origin; add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE'; add_header 'Access-Control-Allow-Credentials' 'true'; add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'; client_max_body_size 4M; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location ~ /*.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini include fastcgi_params; fastcgi_param SCRIPT_FILENAME /srv/limesurvey$fastcgi_script_name; # fastcgi_param HTTPS $https; fastcgi_intercept_errors on; fastcgi_pass 127.0.0.1:9000; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires max; log_not_found off; } } this is basically an AngularJS app and a PHP app (LimeSurvey), served under two different domains by the same webserver (Nginx). AngularJS is in fact served by ConnectJS, which is proxied to by Nginx (ConnectJS listens only on localhost). In Firefox console I get this: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://ankieta.example.com/admin/remotecontrol. This can be fixed by moving the resource to the same domain or enabling CORS. which of course is annoying. Other browsers work fine (Chrome, IE). Any suggestions on this?

    Read the article

  • CodePlex Daily Summary for Tuesday, February 08, 2011

    CodePlex Daily Summary for Tuesday, February 08, 2011Popular ReleasesyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Supports YouTube's new layout Complete internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details.fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroEnhSim: EnhSim 2.3.6 BETA: 2.3.6 BETAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to Codeplex: Moved TestApi soluton to VS 2010; Moved all source code to Codeplex. All development work is done there now. Fault Injection API: Integrated the unmanaged FaultInjectionEngine.dll COM component in the build; Cleaned up FaultInjectionEngine.dll to build at warning level 4; Implemented “FaultScope” which allows for in-process fault injection; Added automation scripts & sample program; ...AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. 
If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...Pyxis 2: Production Release: Pyxis 2.0.0.13 - Full Production Release This release of Pyxis 2 offers you a wide range of features: Launch Applications in their own threads & domains Render alpha-blended icons on the desktop Support for SD & USB drives Online App Store Dynamic & Static IP support Menus & Modals Over a dozen GUI controls File selection dialogs Folder selection dialog Application, Bootloader, and Firmware Updating Update Release Notes Much More!Microsoft All-In-One Code Framework: Sample Browser v2 (CTP Release): Sample Browser v2 (CTP Release) http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=205917MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.Finestra Virtual Desktops: 1.0: Finally the version 1.0 release! Sorry for the long delay since the last release, but I think that you'll find this release to be really smooth, really stable, and a really great enhancement to Windows. New features include: Windows 7 taskbar integration Major performance and usability improvements Redesigned look and feel New name: Finestra Better automatic updating Much faster full-screen switcher Fixes Windows 7 hotkey collisions by default Updated installerFacebook C# SDK: 5.0.2 (BETA): PLEASE TAKE A FEW MINUTES TO GIVE US SOME FEEDBACK: Facebook C# SDK Survey This is third BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. This release contains some breaking changes. Particularly with authentication. After spending time reviewing the trouble areas that people are having using th...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 3.0.0 for MVC3: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. ChangelogTargeting ASP.NET MVC 3 and .NET 4.0 Additional UpdatePriority options for generating XML sitemaps Allow to specify target on SiteMapTitleAttribute One action with multiple routes and breadcrumbs Medium Trust optimizations Create SiteMapTitleAttribute for setting parent title IntelliSense for your sitemap with MvcSiteMapSchem...patterns & practices SharePoint Guidance: SharePoint Guidance 2010 Hands On Lab: SharePoint Guidance 2010 Hands On Lab consists of six labs: one for logging, one for service location, and four for application setting manager. Each lab takes about 20 minutes to walk through. Each lab consists of a PDF document. 
You can go through the steps in the doc to create solution and then build/deploy the solution and run the lab. For those of you who wants to save the time, we included the final solution so you can just build/deploy the solution and run the lab.Value Injecter - object(s) to -> object mapper: 2.3: it lets you define your own convention-based matching algorithms (ValueInjections) in order to match up (inject) source values to destination values. inject from multiple sources in one InjectFrom added ConventionInjectionMobile Device Detection and Redirection: 0.1.11.11: Improvements to Beta Release The following changes have been made in version 0.1.11.11: BlackBerry Version 6 devices (such as the 9800 Torch) are now correctly identified with a dedicated handler. Android powered devices are now correctly identified. Minor change to Provider.cs to improve performance and optimise data sent to 51Degrees.mobi if the option is enabled. GC.collect is no longer called at any point. All garbage collection now happens automatically IMPORTANT CHANGES This rele...TweetSharp: TweetSharp v2.0.0.0 - Preview 10: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 9 ChangesAdded support for trends Added support for Silverlight 4 Elevated WP7 fixes Third Party Library VersionsHammock v1.1.7: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comNew Projectsbisolu_sendus: bisolu smsBlack & Scholes: Black & Scholes OptionsCosLabs: CosLabs makes it easy to see the full capabilities of the Cosmos operating system toolkit. CosLabs has a number of experiments to find new and unique uses for Cosmos. It's developed in C#.DeployToAzure: DeployToAzure allows automating deployment of Windows Azure project and making it a part of TFS 2010 build process without using PowerShell and Azure Management CmdLets. Digital: Educational purposesDiscogsNet: DiscogsNet is a .NET library to query the Discogs.com API and parse the Discogs XML data dump files. It's developed in C# and usable from any .NET language. The API of the library is focused on ease of use and intuitiveness.DJME - The jQuery extensions for ASP.NET MVC: DJME - The jQuery extensions for ASP.NET MVC is a lightweight framework which helps you build rich user interfaces for ASP.NET MVC while enjoying great developer productivity. DokuDB: Visio AddIn to document different versions of a databaseDtsConfig Explorer: This project is a windows froms application that helps explore a SSIS dtsconfig file with an easy wayEveryDNS Service: Windows service to automatically send your current IP address to everydns.net for dynamic domains. Automatically monitors and sends IP address changes without having to be logged in. The project is built in C# targeting the .NET 4.0 framework.EvoSim - Evolution Simulation: EvoSim is a pet project to learn aspects of MVVM, WPF/Silverlight, and Parallel Processing features of .NET 4. The goal is to simulate a world filled with creatures that move, eat, reproduce, and die according to Darwinian evolutionary principles. 
It is tile and turn based.EZStorage: A storage library for XNA based on EasyStorage but abstracting in an XML based file list for each folder, allowing file listings on all folders and details like file creation dates which are no longer possible in XNA 4.0ForeverBell's Snake: A snake game written in VB.NET.Glauca Browser: This is a browser with a webkit core inside.Grauers SharePoint 2010 Custom MasterPage Feature: This is a SharePoint 2010 Custom MastePage feature. Custom Master and custom CSS Blog post about the project: http://www.grauers.net/archive/2011/02/07/build-a-sharepoint-2010-custom-mastepage-with-visual-studio-2010.aspxHamming Code Sample: HamCode demonstrate how the hamming code work, result of coded and decoded data. Shows the benefits of hamming method/codeHydroDesktop Mercurial Test: This is a trial version of the HydroDesktop project (hydrodesktop.codeplex.com) running on Mercurial source code repository. We first want to test whether the data update/download is happening fine before switching the official HydroDesktop website to the new system.LateBindingApi: Creates .Net Proxy Components from COM Type Libraries.Middlesex County College Library Wireless Auto Login: The Middlesex County College Library Wireless Auto Login automatically logs a computer in to the Middlesex County College (Edison, NJ) Library's WifiMobility: Mobility is a small program that reads from a text file. The text file includes everything about the program, right down to that cute little bunny on your desktop! Try Mobility today!OpenMFC: OpenMFC is opensource version of MFC for using with C/C++ compiler without MFC.Orchard Content Sharing: This Orchard module adds content sharing functionality via integration with AddThis.com sharing service.Orchard Wunder Weather Widget: This project is used to maintain the source code for the Wunder Weather widget in the Orchard gallery.Professional Audio Recorder: Professional Audio Recorder is a Audio Recorder for Windows Phone 7Python Node Info for Umbraco: Designed to assist Umbraco macro authors who are writing their macros in Python. Provides a helper macro that, when inserted into a page, will enable the display of an info panel showing the page node properties as represented in Python.Softina.Graphs: Softina.Graphs is a graph managment and algorithm utility. It includes a set of libraries to work with graph processing. It is written using .net framework with c# language and MS Visual Studio 2008. Client applications is created using devexpress components.spangesharp: Application to help people learn c#Visual Studio Private Extension Gallery: Add a new tab in the Visual Studio extension manager to manage private extensions.WCF Home Framework: Home Framework is a service-oriented framework able to facilitate deploy and management of wcf services in a mid-size scenario project.Windows Phone 7 Video Player: This project contains all the source of an application for Windows Phone 7 to consume an RSS Media exposed by our Smooth Streaming Video Player plugin for WordPress (http://smooth.codeplex.com).WOL Shopping List: This is a shoppingList version of WOL's NearMeWP7RSSReader: Updated RSSReader project for Windows Phone 7 using the RTM tools with the October 2010 update.Xen (XNA Extended) Framework: The Xen Framework is a set of libraries that provides and extends XNA 4.0 functionality to make game development easier and faster while producing clean, maintainable code. Xen frees you up to spend more time building your game.

    Read the article

  • NGINX MIME TYPE

    - by justanotherprogrammer
    I have my nginx conf file so that when ever a mobile device visits my site the url gets rewritten to m.mysite.com I did it by adding the following set $mobile_rewrite do_not_perform; if ($http_user_agent ~* "android.+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|symbian|treo|up\.(browser|link)|vodafone|wap|windows (ce|phone)|xda|xiino") { set $mobile_rewrite perform; } if ($http_user_agent ~* "^(1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|e\-|e\/|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(di|rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|xda(\-|2|g)|yas\-|your|zeto|zte\-)") { set $mobile_rewrite perform; } if ($mobile_rewrite = perform) { rewrite ^ http://m.mywebsite.com redirect; break; } I got it from http://detectmobilebrowsers.com/ IT WORKS.But none of my images/js/css files load only the HTML. And I know its the chunk of code I mentioned above because when I remove it and visit m.mywebsite.com from my mobile device everything loads up.So this bit of code does SOMETHING to my css/img/js MIME TYPES. I found this out through the the console error messages from safari with the user agent set to iphone. text.cssResource interpreted as stylesheet but transferred with MIME type text/html. 960_16_col.cssResource interpreted as stylesheet but transferred with MIME type text/html. design.cssResource interpreted as stylesheet but transferred with MIME type text/html. navigation_menu.cssResource interpreted as stylesheet but transferred with MIME type text/html. reset.cssResource interpreted as stylesheet but transferred with MIME type text/html. slide_down_panel.cssResource interpreted as stylesheet but transferred with MIME type text/html. myrealtorpage_view.cssResource interpreted as stylesheet but transferred with MIME type text/html. head.jsResource interpreted as script but transferred with MIME type text/html. 
head.js:1SyntaxError: Parse error isaac:208ReferenceError: Can't find variable: head mrp_home_icon.pngResource interpreted as image but transferred with MIME type text/html. M_1_L_289_I_499_default_thumb.jpgResource interpreted as image but transferred with MIME type text/html. M_1_L_290_I_500_default_thumb.jpgResource interpreted as image but transferred with MIME type text/html. M_1_default.jpgResource interpreted as image but transferred with MIME type text/html. default_listing_image.pngResource interpreted as image but transferred with MIME type text/html. here is my whole nginx conf file just incase... worker_processes 1; events { worker_connections 1024; } http { include mime.types; include /etc/nginx/conf/fastcgi.conf; default_type application/octet-stream; sendfile on; keepalive_timeout 65; #server1 server { listen 80; server_name mywebsite.com www.mywebsite.com ; index index.html index.htm index.php; root /srv/http/mywebsite.com/public; access_log /srv/http/mywebsite.com/logs/access.log; error_log /srv/http/mywebsite.com/logs/error.log; #---------------- For CodeIgniter ----------------# # canonicalize codeigniter url end points # if your default controller is something other than "welcome" you should change the following if ($request_uri ~* ^(/main(/index)?|/index(.php)?)/?$) { rewrite ^(.*)$ / permanent; } # removes trailing "index" from all controllers if ($request_uri ~* index/?$) { rewrite ^/(.*)/index/?$ /$1 permanent; } # removes trailing slashes (prevents SEO duplicate content issues) if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } # unless the request is for a valid file (image, js, css, etc.), send to bootstrap if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?/$1 last; break; } #---------------------------------------------------# #--------------- For Mobile Devices ----------------# set $mobile_rewrite do_not_perform; if ($http_user_agent ~* "android.+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|symbian|treo|up\.(browser|link)|vodafone|wap|windows (ce|phone)|xda|xiino") { set $mobile_rewrite perform; } if ($http_user_agent ~* "^(1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|e\-|e\/|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(di|rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v 
)|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|xda(\-|2|g)|yas\-|your|zeto|zte\-)") { set $mobile_rewrite perform; } if ($mobile_rewrite = perform) { rewrite ^ http://m.mywebsite.com redirect; #rewrite ^(.*)$ $scheme://mywebsite.com/mobile/$1; #return 301 http://m.mywebsite.com; #break; } #---------------------------------------------------# location / { index index.html index.htm index.php; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /etc/nginx/conf/fastcgi_params; } }#sever1 #server 2 server { listen 80; server_name m.mywebsite.com; index index.html index.htm index.php; root /srv/http/mywebsite.com/public; access_log /srv/http/mywebsite.com/logs/access.log; error_log /srv/http/mywebsite.com/logs/error.log; #---------------- For CodeIgniter ----------------# # canonicalize codeigniter url end points # if your default controller is something other than "welcome" you should change the following if ($request_uri ~* ^(/main(/index)?|/index(.php)?)/?$) { rewrite ^(.*)$ / permanent; } # removes trailing "index" from all controllers if ($request_uri ~* index/?$) { rewrite ^/(.*)/index/?$ /$1 permanent; } # removes trailing slashes (prevents SEO duplicate content issues) if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } # unless the request is for a valid file (image, js, css, etc.), send to bootstrap if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?/$1 last; break; } #---------------------------------------------------# location / { index index.html index.htm index.php; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /etc/nginx/conf/fastcgi_params; } }#sever2 }#http I could just detect the mobile browsers with php or javascript but i need to make the detection at the server level so that i can use the 'm' in m.mywebsite.com as a flag in my controllers (codeigniter) to serve up the right view. I hope someone can help me! Thank you!

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • Is RTD Stateless or Stateful?

    - by [email protected]
    Yes.   A stateless service is one where each request is an independent transaction that can be processed by any of the servers in a cluster.  A stateful service is one where state is kept in a server's memory from transaction to transaction, thus necessitating the proper routing of requests to the right server. The main advantage of stateless systems is simplicity of design. The main advantage of stateful systems is performance. I'm often asked whether RTD is a stateless or stateful service, so I wanted to clarify this issue in depth so that RTD's architecture will be properly understood. The short answer is: "RTD can be configured as a stateless or stateful service." The performance difference between stateless and stateful systems can be very significant, and while in a call center implementation it may be reasonable to use a pure stateless configuration, a web implementation that produces thousands of requests per second is practically impossible with a stateless configuration. RTD's performance is orders of magnitude better than most competing systems. RTD was architected from the ground up to achieve this performance. Features like automatic and dynamic compression of prediction models, automatic translation of metadata to machine code, lack of interpreted languages, and separation of model building from decisioning contribute to achieving this performance level. Because  of this focus on performance we decided to have RTD's default configuration work in a stateful manner. By being stateful RTD requests are typically handled in a few milliseconds when repeated requests come to the same session. Now, those readers that have participated in implementations of RTD know that RTD's architecture is also focused on reducing Total Cost of Ownership (TCO) with features like automatic model building, automatic time windows, automatic maintenance of database tables, automatic evaluation of data mining models, automatic management of models partitioned by channel, geography, etcetera, and hot swapping of configurations. How do you reconcile the need for a low TCO and the need for performance? How do you get the performance of a stateful system with the simplicity of a stateless system? The answer is that you make the system behave like a stateless system to the exterior, but you let it automatically take advantage of situations where being stateful is better. For example, one of the advantages of stateless systems is that you can route a message to any server in a cluster, without worrying about sending it to the same server that was handling the session in previous messages. With an RTD stateful configuration you can still route the message to any server in the cluster, so from the point of view of the configuration of other systems, it is the same as a stateless service. The difference though comes in performance, because if the message arrives to the right server, RTD can serve it without any external access to the session's state, thus tremendously reducing processing time. In typical implementations it is not rare to have high percentages of messages routed directly to the right server, while those that are not, are easily handled by forwarding the messages to the right server. This architecture usually provides the best of both worlds with performance and simplicity of configuration.   
    Configuring RTD as a pure stateless service

    A pure stateless configuration requires session data to be persisted at the end of handling each and every message and reloaded at the beginning of handling any new message. This is, of course, the root of the inefficiency of these configurations. This is also the reason why many "stateless" implementations actually do keep state to take advantage of a request coming back to the same server. Nevertheless, if the implementation requires a pure stateless decision service, this is easy to configure in RTD. The way to do it is:
    1. Mark every Integration Point to close the session at the end of processing the message
    2. In the Session entity, persist the session data on closing the session
    3. In the Session entity, check if a persisted version exists and load it

    An excellent solution for persisting the session data is Oracle Coherence, which provides a high performance, distributed cache that minimizes the performance impact of persisting and reloading the session. Alternatively, the session can be persisted to a local database.

    An interesting feature of the RTD stateless configuration is that it can cope with serializing concurrent requests for the same session. For example, if a web page produces two requests to the decision service, these requests could arrive concurrently at the decision service and be handled by different servers. Most stateless implementations would have the two requests step on each other when saving the state, or fail one of the messages. When properly configured, RTD will make one message wait for the other before processing.

    A Word on Context

    Using the context of a customer interaction typically significantly increases lift. For example, offer success in a call center could double if the context of the call is taken into account. For this reason, it is important to utilize the contextual information in decision making. To make the contextual information available throughout a session it needs to be persisted. When there is a well-defined owner for the information then there is no problem, because in case of a session restart the information can be easily retrieved. If there is no official owner of the information, then RTD can be configured to persist this information.

    Once again, RTD provides flexibility to ensure high performance when it is adequate to allow for some loss of state in the rare cases of server failure. For example, in a heavy-use web site that serves 1000 pages per second, the navigation history may be stored in the in-memory session. In such sites it is typical that there is no OLTP system that stores all the navigation events; therefore, if an RTD server were to fail, it would be possible for the navigation up to that point to be lost (note that a new session would be immediately established in one of the other servers). In most cases the loss of this navigation information would be acceptable, as it would happen rarely. If it is desired to save this information, RTD would persist it every time the visitor navigates to a new page. Note that this practice is preferred whether RTD is configured in a stateless or stateful manner.

    Read the article

  • SSH problems (ssh_exchange_identification: read: Connection reset by peer)

    - by kSiR
    I was running 11.10 and decided to do the full upgrade and come up to 12.04. After the update, SSH (the client, not sshd) is misbehaving when attempting to connect to other OpenSSH instances. I say OpenSSH because I am running a Dropbear sshd on my router and I am able to connect to it. When attempting to connect to an OpenSSH server:

    risk@skynet:~/.ssh$ ssh -vvv risk@someserver
    OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
    debug1: Reading configuration data /home/risk/.ssh/config
    debug3: key names ok: [ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss]
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to someserver [someserver] port 22.
    debug1: Connection established.
    debug1: identity file /home/risk/.ssh/id_rsa type -1
    debug1: identity file /home/risk/.ssh/id_rsa-cert type -1
    debug1: identity file /home/risk/.ssh/id_dsa type -1
    debug1: identity file /home/risk/.ssh/id_dsa-cert type -1
    debug3: Incorrect RSA1 identifier
    debug3: Could not load "/home/risk/.ssh/id_ecdsa" as a RSA1 public key
    debug1: identity file /home/risk/.ssh/id_ecdsa type 3
    debug1: Checking blacklist file /usr/share/ssh/blacklist.ECDSA-521
    debug1: Checking blacklist file /etc/ssh/blacklist.ECDSA-521
    debug1: identity file /home/risk/.ssh/id_ecdsa-cert type -1
    ssh_exchange_identification: read: Connection reset by peer
    risk@skynet:~/.ssh$

    Against the Dropbear instance:

    risk@skynet:~/.ssh$ ssh -vvv root@darkness
    OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
    debug1: Reading configuration data /home/risk/.ssh/config
    debug3: key names ok: [ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss]
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 19: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to darkness [192.168.1.1] port 22.
    debug1: Connection established.
    debug1: identity file /home/risk/.ssh/id_rsa type -1
    debug1: identity file /home/risk/.ssh/id_rsa-cert type -1
    debug1: identity file /home/risk/.ssh/id_dsa type -1
    debug1: identity file /home/risk/.ssh/id_dsa-cert type -1
    debug3: Incorrect RSA1 identifier
    debug3: Could not load "/home/risk/.ssh/id_ecdsa" as a RSA1 public key
    debug1: identity file /home/risk/.ssh/id_ecdsa type 3
    debug1: Checking blacklist file /usr/share/ssh/blacklist.ECDSA-521
    debug1: Checking blacklist file /etc/ssh/blacklist.ECDSA-521
    debug1: identity file /home/risk/.ssh/id_ecdsa-cert type -1
    debug1: Remote protocol version 2.0, remote software version dropbear_0.52
    debug1: no match: dropbear_0.52
    ...

    I have googled and run almost all of the fixes recommended from both the Debian and Arch sides, and none of them seem to resolve my issue. Any ideas?
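    A few generic OpenSSH diagnostics that are commonly suggested for this error (a hedged sketch, not a confirmed fix for this particular setup - the host names are simply the ones from the question). "Connection reset by peer" before key exchange normally means the remote sshd, or something in front of it, dropped the TCP connection, so the server side is worth checking before blaming the upgraded client:

      # does the SSH banner ever appear? It should be printed before any key exchange
      telnet someserver 22

      # watch the server side while reproducing the failure (run this on someserver)
      sudo tail -f /var/log/auth.log

      # look for TCP-wrapper or brute-force-blocker entries on the server (DenyHosts/fail2ban are common culprits)
      cat /etc/hosts.deny /etc/hosts.allow

      # rule out a broken per-user client config by bypassing it entirely
      ssh -v -F /dev/null risk@someserver

    If the banner never appears in the telnet test, the problem is on the server or network side (hosts.deny entries or sshd connection limits such as MaxStartups, for example) rather than in the upgraded 12.04 client.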

    Read the article

  • Learning resources for working with POCO entities in EF 4.0

    - by boghydan
    Here are some links that can help you start working with POCO entities in EF 4.0:

    ADO.NET Team blog:
    - Working with POCO objects: http://blogs.msdn.com/adonet/archive/2009/05/21/poco-in-the-entity-framework-part-1-the-experience.aspx
    - Proxy objects for POCO entities: http://blogs.msdn.com/adonet/archive/2009/12/22/poco-proxies-part-1.aspx

    MSDN Library:
    - Working with POCO: http://msdn.microsoft.com/en-us/library/dd456853.aspx
    - T4 editor for the POCO generator can be downloaded from here: http://visualstudiogallery.msdn.microsoft.com/en-us/60297607-5fd4-4da4-97e1-3715e90c1a23

    Read the article

  • Oracle’s Vision for the Social-Enabled Enterprise

    - by Richard Lefebvre
    'Two years ago, social was a nice-to-have. Now it's a must-have.' - Mark Hurd. Do you agree? Check out the on-demand version of the 'Oracle's Vision for the Social-Enabled Enterprise' exclusive webcast in a 30-minute video HERE. Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle's vision for the social-enabled enterprise.

    Read the article

  • LINQ to Twitter v2.0.8 Released

    - by Joe Mayo
    Today, I released LINQ to Twitter v2.0.8. Besides normal maintenance, this release includes the Twitter Geo API and the Suggested Users API. LINQ to Twitter is hosted on CodePlex.com: http://linqtotwitter.codeplex.com/ In addition to new functionality, I've made much progress toward LINQ to Twitter documentation; primarily in the Making API Calls area: http://linqtotwitter.codeplex.com/wikipage?title=Making%20API%20Calls&referringTitle=Documentation There's also a discussion forum where you can ask and view questions: http://linqtotwitter.codeplex.com/Thread/List.aspx As always, constructive feedback is welcome. Joe

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-20

    - by Bob Rhubart
    - SOA! SOA! SOA!; OSB 11g Recipes and Author Interviews (www.oracle.com) - Featured this week on the OTN Architect Homepage, along with the latest articles, white papers, blogs, events, and other resources for software architects.
    - OTN Virtual Developer Day - Java - APAC - Tuesday March 27th, 2012. 9:30 am to 2:00pm IST / 12:00pm to 4.30pm SGT / 3.00pm - 7.30pm AEDT
    - Oracle Virtualization Newsletter - March Edition (www.oracle.com) - News, white papers, webcasts, events, blogs, and more -- all focused on Oracle Virtualization products.
    - 7 Signs an Enterprise is getting the post-PC thing | Ron Tolido (www.capgemini.com) - Capgemini's Ron Tolido shares "indicators for enterprises that actually understand the power of mobility and the post-PC era."
    - Gartner: Personal Cloud Will Replace the Personal Computer as the Center of Users' Digital Lives (www.gartner.com) - The change, says Gartner, "will require enterprises to fundamentally rethink how they deliver applications and services to users."
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH (www.neooug.org) - More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.
    - Oracle Hardware Systems: The Extreme Performance Tour - Dates and Locations Worldwide (www.oracle.com) - Get the inside track on Oracle's hardware strategy and product roadmap from the people who know Oracle hardware best. And be sure to meet our global experts in the Extreme Performance exhibition area. Click the link for dates and locations worldwide.
    - Oracle's ZFS Storage Appliance Simulator | Steen Schmidt (blogs.oracle.com) - Take a test drive.
    - Oracle Access Manager 11g - useful links | Dmitry Nefedkin (blogs.oracle.com) - Dmitry Nefedkin shares a list of links to useful resources for those interested in Oracle Access Manager 11g.
    - Oracle Linux Online Forum - March 27 (event.on24.com) - Date: Tuesday, March 27, 2012. Time: 9:30 AM PT / 12:30 PM ET. Leading Innovations in Enterprise Linux, hosted by Oracle executives Edward Screven and Wim Coekaerts. Customer presentation: How Oracle Helps Reduce Cost and Improve Performance of Database Applications at Progressive Insurance (speaker: John Dome). What's New in Oracle Linux (speakers: Waseem Daher, Chris Mason, Elena Zannoni, Lenz Grimmer). Get More Value from your Linux Vendor (speakers: Sergio Leunissen, Chris Mason, Monica Kumar).
    - Thought for the Day: "I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated." —Poul Anderson

    Read the article

  • How to resolve the "Failed to download repository" error?

    - by jojo
    I've tried to update, but then I got this error. Here's the error information in that box:

    W: GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]>
    W: Failed to fetch http://ppa.launchpad.net/unity-team/hud/ubuntu/dists/precise/main/source/Sources  404 Not Found
    W: Failed to fetch http://ppa.launchpad.net/unity-team/hud/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    And my Internet connection is working fine. I then tried:

    $ sudo apt-get install ppa-purge
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    ppa-purge is already the newest version.
    The following packages were automatically installed and are no longer required:
      language-pack-zh-hans language-pack-kde-en language-pack-kde-zh-hans language-pack-kde-en-base kde-l10n-engb kde-l10n-zhcn language-pack-zh-hans-base firefox-locale-zh-hans language-pack-kde-zh-hans-base
    Use 'apt-get autoremove' to remove them.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    $ sudo ppa-purge ppa:unity-team/hud
    Updating packages lists
    W: GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]>
    W: Failed to fetch http://ppa.launchpad.net/unity-team/hud/ubuntu/dists/precise/main/source/Sources  404 Not Found
    W: Failed to fetch http://ppa.launchpad.net/unity-team/hud/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
    E: Some index files failed to download. They have been ignored, or old ones used instead.
    Warning: apt-get update failed for some reason
    PPA to be removed: unity-team hud
    Warning: Could not find package list for PPA: unity-team hud

    I've also tried:

    $ sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 16126D3A3E5C1192
    Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.v6Ucus0B10 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --recv-keys --keyserver keyserver.ubuntu.com 16126D3A3E5C1192
    gpg: requesting key 3E5C1192 from hkp server keyserver.ubuntu.com
    gpg: key 3E5C1192: "Ubuntu Extras Archive Automatic Signing Key <[email protected]>" not changed
    gpg: Total number processed: 1
    gpg: unchanged: 1

    and then repeated the same procedure, but with no luck.
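    A hedged sketch of the recovery steps that usually apply to this pair of symptoms - the 404s come from a PPA that no longer publishes packages for precise, and a BADSIG on extras.ubuntu.com is more often stale or corrupted list files than a genuinely bad key. Adjust the PPA name if yours differs, and if add-apt-repository has no --remove option on your release, delete the matching file under /etc/apt/sources.list.d instead:

      # drop the dead PPA's source entry (ppa-purge itself needs a working 'apt-get update', so remove the entry directly)
      sudo add-apt-repository --remove ppa:unity-team/hud

      # clear the cached package lists and fetch fresh ones
      sudo apt-get clean
      sudo rm -rf /var/lib/apt/lists/*
      sudo apt-get update

      # only if BADSIG still appears: re-import the Ubuntu Extras signing key
      sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 16126D3A3E5C1192

    If apt-get update still reports BADSIG for extras.ubuntu.com after this, the cause is often a caching proxy between the machine and the archive rather than the key itself.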

    Read the article

  • Oracle Magazine - Deriving and Sharing Business Intelligence Metadata

    - by David Allan
    There is a new Oracle Magazine article titled 'Deriving and Sharing Business Intelligence Metadata' from Oracle ACE Director Mark Rittman in the July/August 2010 issue that illustrates the business definitions derived and shared across OWB 11gR2 and OBIEE: http://www.oracle.com/technology/oramag/oracle/10-jul/o40bi.html Thanks to Mark for the time spent producing this. As for OWB, it would have been useful to have had reverse-engineering capabilities from OBIEE, interesting to have had code-template-based support for deployment of such business definitions, and powerful to be able to use these objects (logical folders, etc.) in the mapping itself.

    Read the article

  • Oracle's Sun ZFS Storage Appliances and Oracle VM

    - by uwes
    Oracle's Sun ZFS Storage Appliance Is the Optimal Platform for Deploying Consolidated Applications in an Oracle Virtual Machine (OVM) Environment

    Unsurpassed integration - the Oracle VM and Storage Engineering teams provide seamless integration points and an Oracle VM Connect Plug-In for the Sun ZFS Storage Appliance in FC, NFS, and iSCSI environments. And Sun ZFS Storage is engineered and tested to work with Oracle VM agility features, including Live (VM) Migration and Oracle RAC Live Migration.

    More information can be found under the following links:
    - ZFS Storage Appliance Server Virtualization Oracle.com page
    - ZFS Storage Appliance Oracle.com page
    - ZFS Storage Appliance Oracle Technical Network.com page
    - Software download support.oracle.com page

    Read the article

  • Content Locking network with Microsoft Web Development tools? [closed]

    - by Jose Garcia
    I want my team to develop a content locking network for a client, but we don't know what is needed to develop such a network. We need to code it from scratch. Please give us some advice. Example content locking networks: http://www.blamads.com/ and adworkmedia.com - these two networks seem to be built with Microsoft web development tools. FAQs: http://support.blamads.com/index.php?pg=kb.book&id=5 and https://www.adworkmedia.com/cpa-network-features.php

    Read the article

  • How to support email subscriptions to many rss feeds

    - by peter
    I am interested in letting readers subscribe to any of my RSS feeds by email, without me having to manage any of the email lists. Are there any email delivery services that allow easy subscription to arbitrary feeds? MailChimp's API doesn't allow list creation. The closest I can come up with is linking people to a Google Alert, for example: http://www.google.com/alerts?q=site:mysite.com/category/food or http://www.google.com/alerts?q=site:mysite.com/category/drinks

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-11

    - by Bob Rhubart
    - Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com) - Wednesday, May 02, 2012, 8:00 AM – 4:30 PM, Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017
    - OTN Architect Day - Reston, VA - May 16 (www.oracle.com) - The live one-day event in Reston, VA brings together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today's solution architects regularly face. Registration is free, but seating is limited.
    - InfoQ: Seven Secrets Every Architect Should Know (www.infoq.com) - Frank Buschmann's secrets: User Tasks-based Design, Be Minimalist, Ensure Visibility of Domain Concepts, Use Uncertainty as a Driver, Design Between Things, Check Assumptions, Eat Your Own Dog Food.
    - Roadmaps for the IT shop's evolution | Andy Mulholland (www.capgemini.com) - Andy Mulholland discusses "the challenge of new technology and the disruptive change it brings, together with the needs to understand and plan, or even try to gain control of end-users implementations."
    - Drive Online Engagement with Intuitive Portals and Websites | Kellsey Ruppel (blogs.oracle.com) - "The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance," says WebCenter blogger Kellsey Ruppel. "And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more."
    - New Exadata Customer Cases | Javier Puerta (blogs.oracle.com) - Javier Puerta shares links to four new customer use cases featuring details on the solutions implemented at each of these sizable companies.
    - Invoicing: It's time to catch up! | Jesper Mol (www.nl.capgemini.com) - Capgemini's Jesper Mol diagrams an e-invoicing solution that includes Oracle Service Bus.
    - Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri (blogs.oracle.com) - Shub Lahiri shares a brief overview outlining the steps required to build such a simple project with Oracle Service Bus 11g and the SAP Adapter for the PS3 release.
    - Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH (www.neooug.org) - More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.
    - Thought for the Day: "Today, most software exists, not to solve a problem, but to interface with other software." — I. O. Angell

    Read the article

  • Different robots.txt for two different domains point to same folder

    - by Ali
    Hi, I have the following two domains: domain.com and test.domain.com. Both point to the same folder, which is "public_html". What I want is a different robots.txt file for each domain, so when someone browses domain.com/robots.txt one file is shown, and when someone goes to test.domain.com/robots.txt a different file is shown. How can I do this using URL rewriting in .htaccess? Thanks
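    One possible approach, as a hedged sketch rather than a tested answer: since both hosts share public_html, keep two physical files - the normal robots.txt plus an alternate one (robots_test.txt is a placeholder name) - and let mod_rewrite choose based on the Host header. Assuming mod_rewrite is enabled and AllowOverride permits FileInfo for that directory, the .htaccess in public_html could contain:

      RewriteEngine On
      # serve the alternate file when robots.txt is requested on the test subdomain
      RewriteCond %{HTTP_HOST} ^test\.domain\.com$ [NC]
      RewriteRule ^robots\.txt$ robots_test.txt [L]

    With this in place, requests for domain.com/robots.txt fall through to the regular robots.txt, while test.domain.com/robots.txt is answered with the contents of robots_test.txt.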

    Read the article

  • 10 Weeks of Gift Ideas – All Offers Good Through January 19, 2012

    - by TATWORTH
    O'Reilly are offering a series of good deals through to Jan 19, 2012. The main page is at http://shop.oreilly.com/category/deals/hd-10-weeks.do

    Already available are:
    - The JavaScript Path to Mastery set at http://shop.oreilly.com/category/deals/hd-javascript-path.do - I recommend JavaScript: The Definitive Guide, 6th Edition; the PDF is 50% off at http://shop.oreilly.com/product/9780596805531.do
    - The HTML5 Programming set at http://shop.oreilly.com/category/deals/hd-html5.do - again, the PDFs are 50% off.

    Read the article

  • How do I fix postfix TLS?

    - by Savanni D'Gerinel
    STARTTLS was working with my system earlier today. Without me altering the system in any way, it spontaneously broke. I've now been trying to fix it for a couple of hours, with no success. When I connect to the server, this is what I get:

    savanni@Orolo:~$ telnet apps.savannidgerinel.com 25
    Trying 129.121.182.135...
    Connected to apps.savannidgerinel.com.
    Escape character is '^]'.
    220 ***********************************************
    ehlo dude
    250-apps.savannidgerinel.com
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-XXXXXXXA
    250-AUTH PLAIN
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN
    ^]
    telnet> close
    Connection closed.

    Okay, obviously STARTTLS isn't present in this list. So I've been digging through my configuration files and working through the tutorials again, and that has done me no good at all. Here's my TLS-related configuration:

    smtp_tls_CAfile = /etc/ssl/certs/savannidgerinel_com_CA.pem
    smtp_tls_cert_file = /etc/ssl/certs/apps.savannidgerinel.com.pem
    smtp_tls_key_file = /etc/ssl/private/apps.savannidgerinel.com.key.pem
    smtp_tls_security_level = may
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtpd_tls_CAfile = /etc/ssl/certs/savannidgerinel_com_CA.pem
    smtpd_tls_cert_file = /etc/ssl/certs/apps.savannidgerinel.com.pem
    smtpd_tls_key_file = /etc/ssl/private/apps.savannidgerinel.com.key.pem
    smtpd_tls_loglevel = 3
    smtpd_tls_received_header = yes
    smtpd_tls_security_level = may
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    tls_random_source = dev:/dev/urandom

    All of the certificate files are present, the server private key is present, the server CA is present, and the smtpd_scache.db and smtp_scache.db files are both present. All are accessible to the postfix user. Speaking of which, here are the processes running:

    savanni@apps:/var/lib/postfix$ ps aux | grep postfix
    root 3525 0.0 0.1 25112 1680 ? Ss 20:19 0:00 /usr/lib/postfix/master
    postfix 3526 0.0 0.1 27176 1524 ? S 20:19 0:00 pickup -l -t fifo -u -c -o content_filter= -o receive_override_options=no_header_body_checks
    postfix 3527 0.0 0.1 27228 1552 ? S 20:19 0:00 qmgr -l -t fifo -u
    postfix 3528 0.0 0.4 46948 4144 ? S 20:19 0:00 smtpd -n smtp -t inet -u -c -o stress= -s 2
    postfix 3529 0.0 0.1 27176 1628 ? S 20:19 0:00 proxymap -t unix -u
    postfix 3530 0.0 0.3 38212 3176 ? S 20:19 0:00 tlsmgr -l -t unix -u -c
    postfix 3531 0.0 0.1 27176 1516 ? S 20:19 0:00 anvil -l -t unix -u -c
    postfix 3535 0.0 0.1 27188 1544 ? S 20:20 0:00 trivial-rewrite -n rewrite -t unix -u -c

    The log files say absolutely nothing related to TLS except for this:

    Nov 6 02:19:45 apps postfix/master[3525]: daemon started -- version 2.9.6, configuration /etc/postfix
    Nov 6 02:19:49 apps postfix/smtpd[3528]: initializing the server-side TLS engine
    Nov 6 02:19:49 apps postfix/tlsmgr[3530]: open smtpd TLS cache btree:/var/lib/postfix/smtpd_scache
    Nov 6 02:19:49 apps postfix/tlsmgr[3530]: tlsmgr_cache_run_event: start TLS smtpd session cache cleanup
    Nov 6 02:19:49 apps postfix/smtpd[3528]: connect from unknown[204.16.68.108]

    Neither syslog nor mail.err shows any indication of a problem. As far as the whole system is concerned, all is well. But there is no STARTTLS, so I suddenly can't send any email at all. Help???
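    A few generic Postfix checks that are often useful when STARTTLS silently disappears from the EHLO response (a hedged sketch of diagnostics, not a confirmed fix for this setup):

      # confirm the running daemon actually sees the TLS settings
      postconf -n | grep -i tls

      # make sure no '-o ...' override attached to the smtp service in master.cf is disabling TLS
      grep -A3 '^smtp .*smtpd' /etc/postfix/master.cf

      # verify that the certificate and key still match (the two digests should be identical)
      openssl x509 -noout -modulus -in /etc/ssl/certs/apps.savannidgerinel.com.pem | openssl md5
      openssl rsa -noout -modulus -in /etc/ssl/private/apps.savannidgerinel.com.key.pem | openssl md5

      # exercise STARTTLS directly, ideally from another machine
      openssl s_client -starttls smtp -connect apps.savannidgerinel.com:25

    The banner of bare asterisks in the telnet session is also worth noting: that pattern is a classic sign of a firewall performing ESMTP inspection (for example the Cisco PIX/ASA "fixup"/"inspect esmtp" feature), which masks the banner and can strip the STARTTLS capability before it ever reaches the client - worth ruling out before changing the Postfix configuration.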

    Read the article

< Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >