Search Results

Search found 205 results on 9 pages for 'pierre jeptha'.


  • Three JavaScript fundamentals, by Jean-Pierre Vincent

    After a few years of writing in a language, it is easy to forget the difficulties you ran into at the start. Here we will explore the three fundamental notions of JavaScript that are probably the biggest sources of bugs, misunderstanding and frustration for the average web developer, and which, incidentally, are also the foundation for more advanced programming later on.

    Read the article

  • Helix: conducting a forensic analysis after an intrusion, by Pierre Therrode

    In the Internet era, we are all aware of being potential targets for attackers; crime on the Net is increasingly common. In this technological landscape, system intrusions affect ordinary users and businesses alike. It is possible, however, to preserve traces of such actions so that they can be analyzed as evidence.

    Read the article

  • The Art of Agile Development by J. Shore and S. Warden, reviewed by Pierre Chauvin

    Hello, I have just finished reading "The Art of Agile Development" by James Shore and Shane Warden, published by O'Reilly; you will find my review here. [IMG]http://images-eu.amazon.com/images/P/0596527675.01.LZZZZZZZ.jpg[/IMG] My impressions are rather positive. What about you, what do you think of this book? Did it help you guide your projects on agile principles, or promote XP or Scrum within your teams?

    Read the article

  • Petit traité d'attaques subversives contre les entreprises, by Emmanuel Lehmann and Franck Decloquement, reviewed by Pierre Therrode

    Hello, the DVP editorial team has read the following book for you: Petit traité d'attaques subversives contre les entreprises, théorie et pratique de la contre-ingérence économique by Emmanuel Lehmann and Franck Decloquement, published by Éditions Chiron. [IMG]http://images-eu.amazon.com/images/P/2702712894.08.LZZZZZZZ.jpg[/IMG] Quote: "Manufacturing secrets laid bare, customer files stolen, diversion of ..."

    Read the article

  • MySQL UDF: fopen = permission denied

    - by lindenb
    Hi all, this is a question I already asked on SO, but I wonder if this could be a sysadmin problem. I'm trying to create a MySQL UDF; this function calls fopen/fclose to read a flat file stored in /data. Using errno (yes, I know it is bad in a MT program...) I can see that the function cannot open my file: "Permission denied". I tried chmod -R 755 /data (as well as 777, chown -R mysql:mysql /data, etc.) but it didn't change anything. When I copied the flat file to /tmp, my UDF was able to fopen the file. I'm puzzled. Currently I've got:

        drwxrwxrwx 4 pierre root     4096 2010-05-26 16:51 /data
        drwxrwxrwx 3 pierre root     4096 2010-05-18 09:41 /data/dir1
        drwxrwxrwx 3 pierre root     4096 2010-05-18 09:41 /data/dir1/dir2
        drwxrwxrwx 4 pierre root     4096 2010-05-18 10:27 /data/dir1/dir2/dir3
        -rw-r--r-- 1 pierre root 50685268 2005-12-10 00:01 /data/dir1/dir2/dir3/myfile.txt

    Any idea?
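    For reference, below is a minimal sketch of the kind of UDF the question describes. The name read_flat_file, its return convention, and the build/registration commands are hypothetical illustrations, not the poster's actual code; the point is simply where fopen() and errno come into play. (As a hedged aside: on Ubuntu/Debian an AppArmor profile for mysqld can deny reads outside whitelisted directories even when Unix permissions look fine, which would be consistent with the "/tmp works, /data does not" symptom.)

        /*
         * Sketch of a MySQL UDF that fopen()s a flat file and reports errno on
         * failure. Hypothetical build and registration:
         *   gcc -shared -fPIC $(mysql_config --cflags) -o read_flat_file.so read_flat_file.c
         *   CREATE FUNCTION read_flat_file RETURNS INTEGER SONAME 'read_flat_file.so';
         */
        #include <mysql.h>
        #include <stdio.h>
        #include <string.h>
        #include <errno.h>

        my_bool read_flat_file_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
        {
            if (args->arg_count != 1 || args->arg_type[0] != STRING_RESULT) {
                strcpy(message, "read_flat_file(path) expects one string argument");
                return 1;
            }
            return 0;
        }

        /* Returns the byte count of the file, or -errno if fopen() fails. */
        long long read_flat_file(UDF_INIT *initid, UDF_ARGS *args,
                                 char *is_null, char *error)
        {
            char path[1024];
            unsigned long len = args->lengths[0];
            long long n = 0;
            FILE *f;

            if (args->args[0] == NULL || len >= sizeof(path)) {
                *is_null = 1;
                return 0;
            }
            memcpy(path, args->args[0], len);   /* UDF string args are not NUL-terminated */
            path[len] = '\0';

            f = fopen(path, "r");
            if (f == NULL) {
                /* This is where "Permission denied" surfaces in the question. */
                fprintf(stderr, "read_flat_file: fopen(%s): %s\n", path, strerror(errno));
                return -(long long)errno;
            }
            while (fgetc(f) != EOF)
                n++;
            fclose(f);
            return n;
        }

        void read_flat_file_deinit(UDF_INIT *initid) { (void)initid; }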

    Read the article

  • Relay Access Denied (State 13) Postfix + Dovecot + Mysql

    - by Pierre Jeptha
    So we have been scratching our heads for quite some time over this relay issue, which has presented itself since we rebuilt our mail server after a failed Webmin update. We are running Debian Karmic with Postfix 2.6.5 and Dovecot 1.1.11, sourcing from a MySQL database and authenticating with SASL2 and PAM. Here are the symptoms of our problem:
    1) When users are on our local network they can send and receive 100% perfectly fine.
    2) When users are off our local network and try to send to domains not of this mail server (i.e. gmail) they get the "Relay Access Denied" error. However, users can send to domains of this mail server fine when off the local network.
    3) We host several virtual domains on this mail server, the primary domain being airnet.ca. The rest of our virtual domains (e.g. jeptha.ca) cannot receive email from domains not hosted by this mail server (i.e. gmail and such cannot send to them). They receive bounce-backs of "Relay Access Denied (State 13)". This is regardless of whether they are on our local network or not, which is why it is so urgent for us to get this solved.

    Here is our main.cf from Postfix:

        myhostname = mail.airnet.ca
        mydomain = airnet.ca
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        smtpd_sasl_type = dovecot
        queue_directory = /var/spool/postfix
        smtpd_sasl_path = private/auth
        smtpd_sender_restrictions = permit_mynetworks permit_sasl_authenticated
        smtp_sasl_auth_enable = yes
        smtpd_sasl_auth_enable = yes
        append_dot_mydomain = no
        readme_directory = no
        smtp_tls_security_level = may
        smtpd_tls_security_level = may
        smtp_tls_note_starttls_offer = yes
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_auth_only = no
        alias_maps = proxy:mysql:/etc/postfix/mysql/alias.cf hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = mail.airnet.ca, airnet.ca, localhost.$mydomain
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        local_recipient_maps = $alias_maps $virtual_mailbox_maps proxy:unix:passwd.byname
        home_mailbox = /var/virtual/
        mail_spool_directory = /var/spool/mail
        mailbox_transport = maildrop
        smtpd_helo_required = yes
        disable_vrfy_command = yes
        smtpd_etrn_restrictions = reject
        smtpd_data_restrictions = reject_unauth_pipelining, permit
        show_user_unknown_table_name = no
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps $virtual_uid_maps $virtual_gid_maps
        virtual_alias_domains =
        message_size_limit = 20971520
        transport_maps = proxy:mysql:/etc/postfix/mysql/vdomain.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql/vmailbox.cf
        virtual_alias_maps = proxy:mysql:/etc/postfix/mysql/alias.cf hash:/etc/mailman/aliases
        virtual_uid_maps = proxy:mysql:/etc/postfix/mysql/vuid.cf
        virtual_gid_maps = proxy:mysql:/etc/postfix/mysql/vgid.cf
        virtual_mailbox_base = /
        virtual_mailbox_limit = 209715200
        virtual_mailbox_extended = yes
        virtual_create_maildirsize = yes
        virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql/vmlimit.cf
        virtual_mailbox_limit_override = yes
        virtual_mailbox_limit_inbox = no
        virtual_overquote_bounce = yes
        virtual_minimum_uid = 1
        maximal_queue_lifetime = 1d
        bounce_queue_lifetime = 4h
        delay_warning_time = 1h
        append_dot_mydomain = no
        qmgr_message_active_limit = 500
        broken_sasl_auth_clients = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_authenticated_header = yes
        smtp_bind_address = 142.46.193.6
        relay_domains = $mydestination
        mynetworks = 127.0.0.0, 142.46.193.0/25
        inet_interfaces = all
        inet_protocols = all

    And here is the master.cf from Postfix:

        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp      inet  n       -       -       -       -       smtpd
        #submission inet n      -       -       -       -       smtpd
        #  -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps     inet  n      -       -       -       -       smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628       inet  n      -       -       -       -       qmqpd
        pickup    fifo  n       -       -       60      1       pickup
        cleanup   unix  n       -       -       -       0       cleanup
        qmgr      fifo  n       -       n       300     1       qmgr
        #qmgr     fifo  n       -       -       300     1       oqmgr
        tlsmgr    unix  -       -       -       1000?   1       tlsmgr
        rewrite   unix  -       -       -       -       -       trivial-rewrite
        bounce    unix  -       -       -       -       0       bounce
        defer     unix  -       -       -       -       0       bounce
        trace     unix  -       -       -       -       0       bounce
        verify    unix  -       -       -       -       1       verify
        flush     unix  n       -       -       1000?   0       flush
        proxymap  unix  -       -       n       -       -       proxymap
        proxywrite unix -       -       n       -       1       proxymap
        smtp      unix  -       -       -       -       -       smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay     unix  -       -       -       -       -       smtp
            -o smtp_fallback_relay=
        #   -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq     unix  n       -       -       -       -       showq
        error     unix  -       -       -       -       -       error
        retry     unix  -       -       -       -       -       error
        discard   unix  -       -       -       -       -       discard
        local     unix  -       n       n       -       -       local
        virtual   unix  -       n       n       -       -       virtual
        lmtp      unix  -       -       -       -       -       lmtp
        anvil     unix  -       -       -       -       1       anvil
        scache    unix  -       -       -       -       1       scache
        maildrop  unix  -       n       n       -       -       pipe
            flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        #
        # See the Postfix UUCP_README file for configuration details.
        #
        uucp      unix  -       n       n       -       -       pipe
            flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        #
        # Other external delivery methods.
        #
        ifmail    unix  -       n       n       -       -       pipe
            flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp     unix  -       n       n       -       -       pipe
            flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n      n       -       2       pipe
            flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman   unix  -       n       n       -       -       pipe
            flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
        spfpolicy unix  -       n       n       -       -       spawn
            user=nobody argv=/usr/bin/perl /usr/sbin/postfix-policyd-spf-perl
        smtp-amavis unix -      -       n       -       4       smtp
            -o smtp_data_done_timeout=1200
            -o smtp_send_xforward_command=yes
            -o disable_dns_lookups=yes
        #127.0.0.1:10025 inet n -       n       -       -       smtpd
        dovecot   unix  -       n       n       -       -       pipe
            flags=DRhu user=dovecot:21pever1lcha0s argv=/usr/lib/dovecot/deliver -d ${recipient}

    Here is dovecot.conf:

        protocols = imap imaps pop3 pop3s
        disable_plaintext_auth = no
        log_path = /etc/dovecot/logs/err
        info_log_path = /etc/dovecot/logs/info
        log_timestamp = "%Y-%m-%d %H:%M:%S ".
        syslog_facility = mail
        ssl_listen = 142.46.193.6
        ssl_disable = no
        ssl_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        ssl_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        mail_location = mbox:~/mail:INBOX=/var/virtual/%d/mail/%u
        mail_privileged_group = mail
        mail_debug = yes
        protocol imap {
            login_executable = /usr/lib/dovecot/imap-login
            mail_executable = /usr/lib/dovecot/rawlog /usr/lib/dovecot/imap
            mail_executable = /usr/lib/dovecot/gdbhelper /usr/lib/dovecot/imap
            mail_executable = /usr/lib/dovecot/imap
            imap_max_line_length = 65536
            mail_max_userip_connections = 20
            mail_plugin_dir = /usr/lib/dovecot/modules/imap
            login_greeting_capability = yes
        }
        protocol pop3 {
            login_executable = /usr/lib/dovecot/pop3-login
            mail_executable = /usr/lib/dovecot/pop3
            pop3_enable_last = no
            pop3_uidl_format = %08Xu%08Xv
            mail_max_userip_connections = 10
            mail_plugin_dir = /usr/lib/dovecot/modules/pop3
        }
        protocol managesieve {
            sieve=~/.dovecot.sieve
            sieve_storage=~/sieve
        }
        mail_plugin_dir = /usr/lib/dovecot/modules/lda
        auth_executable = /usr/lib/dovecot/dovecot-auth
        auth_process_size = 256
        auth_cache_ttl = 3600
        auth_cache_negative_ttl = 3600
        auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@
        auth_verbose = yes
        auth_debug = yes
        auth_debug_passwords = yes
        auth_worker_max_count = 60
        auth_failure_delay = 2
        auth default {
            mechanisms = plain login
            passdb sql {
                args = /etc/dovecot/dovecot-sql.conf
            }
            userdb sql {
                args = /etc/dovecot/dovecot-sql.conf
            }
            socket listen {
                client {
                    path = /var/spool/postfix/private/auth
                    mode = 0660
                    user = postfix
                    group = postfix
                }
                master {
                    path = /var/run/dovecot/auth-master
                    mode = 0600
                }
            }
        }

    Please, if you require anything do not hesitate, I will post it ASAP. Any help or suggestions are greatly appreciated! Thanks, Pierre
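    For comparison, the main.cf above sets smtpd_sender_restrictions but never defines smtpd_recipient_restrictions, which is the stanza where Postfix normally decides whether to accept or relay a recipient. The fragment below is only an illustration of a commonly used baseline for that stanza, not necessarily the fix for this particular setup.

        # Illustrative baseline only -- adjust to your own policy.
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination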

    Read the article

  • AFP and SAMBA shares are confusing my MacOS X Lion

    - by Pierre
    I've set up my file server running Ubuntu 12.04 with AFP/Avahi and Samba shares. Simple configuration, nothing too complicated. I've given my AFP machine a different name than %h to avoid confusion when looking into my Finder. However, my Mac OS X Lion MacBook Pro still confuses the two, and the only way to access the AFP shares is to connect directly to the IP address. The Samba machine name seems to be hardwired to the hostname (I can only see that the workgroup can be changed). Is there anything obvious I am missing or need to set? Best regards and thank you in advance, Pierre
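    If the goal is simply to advertise the SMB service under a different name than the Unix hostname, smb.conf does allow that. The fragment below is a hypothetical sketch (the name FILESERVER-SMB is made up), not a confirmed fix for the Finder confusion described above.

        [global]
            workgroup = WORKGROUP
            # Advertised SMB/NetBIOS name; it does not have to match the Unix hostname
            netbios name = FILESERVER-SMB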

    Read the article

  • Visual Studio Talk Show #120 is now online - Code visualization and analysis in Visual Studio 201

    http://www.visualstudiotalkshow.com JP Duplessis: Code visualization and analysis in Visual Studio 2010 Ultimate. Mario takes advantage of his presence at the Microsoft campus in Redmond in the United States to discuss code visualization and analysis with Jean-Pierre Duplessis. For the occasion, Mario is joined by a one-day co-host, Étienne Tremblay, who was also at the Microsoft campus at the same time. Jean-Pierre Duplessis is an architect at Microsoft in the Visual Studio division....

    Read the article

  • Visual Studio Talk Show #120 is now online - Code visualization and analysis in Visual Studio 201

    - by guybarrette
    http://www.visualstudiotalkshow.com JP Duplessis: Code visualization and analysis in Visual Studio 2010 Ultimate. Mario takes advantage of his presence at the Microsoft campus in Redmond in the United States to discuss code visualization and analysis with Jean-Pierre Duplessis. For the occasion, Mario is joined by a one-day co-host, Étienne Tremblay, who was also at the Microsoft campus at the same time. Jean-Pierre Duplessis is an architect at Microsoft in the Visual Studio division. He is a long-time Microsoft veteran. He started with the Microsoft Host Integration Server development team. He was then responsible for designing wireless network connectivity in Windows NT. In recent years, his work with the Visual Studio team has let him return to his first passion, code analysis, in order to visualize and understand the architecture of an existing application.

    Read the article

  • R: Pass by reference

    - by Pierre
    Can you pass by reference in R? For example, in the following code:

        setClass("MyClass", representation(name="character"))
        instance1 <- new("MyClass", name="Hello1")
        instance2 <- new("MyClass", name="Hello2")
        array = c(instance1, instance2)
        instance1
        array
        instance1@name = "World!"
        instance1
        array

    the output is

        > instance1
        An object of class “MyClass”
        Slot "name":
        [1] "World!"
        > array
        [[1]]
        An object of class “MyClass”
        Slot "name":
        [1] "Hello1"
        [[2]]
        An object of class “MyClass”
        Slot "name":
        [1] "Hello2"

    but I wish it was

        > instance1
        An object of class “MyClass”
        Slot "name":
        [1] "World!"
        > array
        [[1]]
        An object of class “MyClass”
        Slot "name":
        [1] "World!"
        [[2]]
        An object of class “MyClass”
        Slot "name":
        [1] "Hello2"

    Is it possible? Thanks, Pierre

    Read the article

  • Using zlib to inflate a gz input

    - by Pierre
    Hi all, I'm currently trying to use zlib to inflate a source of gzipped data. It seems that the inflate API in zlib cannot inflate gzipped data out of the box (the example http://www.zlib.net/zpipe.c fails to read a gzipped file: "zpipe: invalid or incomplete deflate data"). I noticed that there is a gzopen function in this API, but, as far as I understand, it only works with a filename or a file descriptor. Can I use this API if my source of gzipped data is stored in memory, in an SQL blob, etc.? Many thanks, Pierre
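    For what it's worth, zpipe.c fails on .gz input because it calls inflateInit(), which only understands zlib-wrapped streams. Passing a windowBits value of 16 + MAX_WBITS (or 32 + MAX_WBITS for automatic zlib/gzip detection) to inflateInit2() lets inflate() consume gzip data from any in-memory buffer, so no filename or descriptor is needed. A minimal single-shot sketch, assuming the output buffer is large enough:

        /* Sketch: decompress a gzip stream held in memory with zlib's inflate().
         * 16 + MAX_WBITS tells zlib to expect a gzip header; error handling is minimal. */
        #include <zlib.h>
        #include <string.h>

        int gunzip_buffer(const unsigned char *src, size_t src_len,
                          unsigned char *dst, size_t *dst_len)
        {
            z_stream strm;
            memset(&strm, 0, sizeof(strm));

            if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)   /* enable gzip decoding */
                return -1;

            strm.next_in   = (Bytef *)src;
            strm.avail_in  = (uInt)src_len;
            strm.next_out  = dst;
            strm.avail_out = (uInt)*dst_len;

            int ret = inflate(&strm, Z_FINISH);                /* single-shot inflate */
            *dst_len = strm.total_out;
            inflateEnd(&strm);

            return (ret == Z_STREAM_END) ? 0 : -1;
        }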

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts but found the following (see end of the post) podcast on the topic and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture:
    - Business best practices and industry knowledge
    - Pre-built advanced analytics intended to predict future events before they happen (like the Churn model shown below)
    - Oracle technology best practices to ensure optimal performance of the model
    All of this typically comes with a reduced time to implementation, or as the marketing slogan goes, reduced time to value. Here are the links:
    - Podcast on OCDM
    - OTN pages for OCDM and ORDM

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyways?
    The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical, it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce
    Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics
    This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data.
    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples however are emerging, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System
    Well, that is going to be interesting. I have not seen much going on in that space, and if I had to have some criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really on big data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place btw does not mean "no data movement at all"; what it means is that you will do this on the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...

    Comments?
    As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • Collaborate10 – THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted it is a nice hotel with nice rooms.

    THEanalytics
    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held. Two solid days of BI, Warehousing and Analytics organized by the BIWA SIG at IOUG. Didn't get to see all sessions, but what struck me was the high interest in Analytics. Marty Gubar's OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel.

    THEdatabasemachine
    Got some very good questions in my "what makes Exadata fast" session, and overall the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common over-indexing and how everyone has to unlearn all of that on Exadata. The main thing however is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5 TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point to a trend. In-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids
    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both OLAP and OWB hands-on. Also quite nice to see that the folks at RittmanMead spent so much time on preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)

    THEend
    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. While Dan is running on a big system, he is running with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

    Q: How do I control statements with very high DOPs driven from user hints in queries?

    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

    Q: What happens if I set parallel_max_servers higher than processes (e.g. the max number of processes allowed to run on my machine)?

    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it can not take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; these limits will prevent it. And since parallel_max_servers should be lower than max processes, you can never go over either...
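    As a minimal sketch of the DBRM side of the first answer, the PL/SQL below caps a consumer group at DOP 32; the plan and group names are made up and the session-to-group mapping is omitted.

        -- Hypothetical illustration: cap statements from consumer group REPORTING
        -- at DOP 32, regardless of hints or table decoration.
        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'DAYTIME_PLAN',
                                            comment => 'cap parallel degrees');
          DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'REPORTING',
                                                      comment => 'ad hoc reporting users');
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
              plan                     => 'DAYTIME_PLAN',
              group_or_subplan         => 'REPORTING',
              comment                  => 'limit DOP to 32',
              parallel_degree_limit_p1 => 32);
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
              plan             => 'DAYTIME_PLAN',
              group_or_subplan => 'OTHER_GROUPS',
              comment          => 'everyone else');
          DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /

    With a plan like this active, a query from a REPORTING session hinted to DOP 128 would still run at DOP 32, as described above.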

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: PWJ is the basis for a shared nothing system and the only join method that is available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods.

    The Theory
    A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets.

    PWJ's in Oracle
    Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option of the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to understand the following rules of thumb.
    Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size, and we can scan this in time t1 as shown below. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words, to scan these 5 partitions the time t2 it takes is not 1/5th more expensive, it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look...
    Based on this little example there are a few rules of thumb to follow to get the partition wise joins:
    - First, choose a DOP that is a power of two (2). So always choose something like 2, 4, 8, 16, 32 and so on...
    - Second, choose a number of partitions that is larger than or equal to 2 * DOP.
    - Third, make sure the number of partitions is divisible by 2 without orphans. This is also known as an even number...
    - Fourth, choose a stable partition count strategy, which is typically hash, which can be a sub-partitioning strategy rather than the main strategy (range - hash is a popular one).
    - Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...).
    Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub)partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
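    Translating those rules of thumb into DDL, a hypothetical sales/customers pair (table and column names are made up) could look like the sketch below: DOP 8, 32 hash partitions on the join key on both sides, so each PX process works on four partition pairs and the join can stay partition-wise.

        -- Hypothetical sketch only: both tables hash partitioned on the join key.
        CREATE TABLE sales (
          cust_id    NUMBER,
          sale_date  DATE,
          amount     NUMBER
        )
        PARTITION BY HASH (cust_id) PARTITIONS 32
        PARALLEL 8;

        CREATE TABLE customers (
          cust_id    NUMBER,
          cust_name  VARCHAR2(100)
        )
        PARTITION BY HASH (cust_id) PARTITIONS 32
        PARALLEL 8;

        -- Joining on the partitioning key lets each PX process join its own
        -- bucket pair without redistributing rows to other processes.
        SELECT /*+ PARALLEL(s 8) PARALLEL(c 8) */ c.cust_name, SUM(s.amount)
        FROM   sales s JOIN customers c ON s.cust_id = c.cust_id
        GROUP  BY c.cust_name;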

    Read the article

  • Big Data Accelerator

    - by Jean-Pierre Dijcks
    For everyone who does not regularly listen to earnings calls, Oracle's Q4 call was interesting (as it mostly is). One of the announcements in the call was the Big Data Accelerator from Oracle (Seeking Alpha link here - slightly tweaked for correctness shown below):  "The big data accelerator includes some of the standard open source software, HDFS, the file system and a number of other pieces, but also some Oracle components that we think can dramatically speed up the entire map-reduce process. And will be particularly attractive to Java programmers [...]. There are some interesting applications they do, ETL is one. Log processing is another. We're going to have a lot of those features, functions and pre-built applications in our big data accelerator."  Not much else we can say right now, more on this (and Big Data in general) at Openworld!

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. Now that behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join.
    In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP.
    In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan typically will be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get those to appear a lot...
    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here:
    - ODTUG in Washington DC, June 27 - July 1
    - On the Optimizer blog
    - At OpenWorld in San Francisco, September 19 - 23
    Happy joining and hope to see you all at ODTUG and OOW...

    Read the article

  • Data Warehouse Best Practices

    - by jean-pierre.dijcks
    In our quest to share our endless wisdom (ahem…) one of the things we figured might be handy is recording some of the best practices for data warehousing. And so we did. And, we did some more… We now have recreated our websites on Oracle Technology Network and have a separate page for best practices, parallelism and other cool topics related to data warehousing. But the main topic of this post is the set of recorded best practices. Here is what is available (and it is a series that ties together but can be read independently), applicable for almost any database version:
    - Partitioning
    - 3NF schema design for a data warehouse
    - Star schema design
    - Data Loading
    - Parallel Execution
    - Optimizer and Stats management
    The best practices page has a lot of other useful information so have a look here.

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the V2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss the concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP
    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (ideal DOP) that still scales. In other words, if we were to increase the DOP even more above a certain DOP, we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.

    The goal of Queuing
    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:
    - Don't throttle down a DOP because other statements are running on the system
    - Stay within the physical limits of a system's processing power
    Instead of making statements go at a lower DOP we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory – and hopefully practice – is that by giving a statement the optimal DOP, the sum of all statements runs faster with queuing than without queuing.

    Increasing the Number of Potential Parallel Statements
    To determine how many statements we will consider running in parallel, a single parameter should be looked at. That parameter is called PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here…, but do realize that anything serial (e.g. that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system where you have two groups of queries, serial short running and potentially parallel long running ones, you may want to worry only about the long running ones with this parallel statement threshold. As an example, let's assume the short running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short running queries all run serial as they do today (if it ain't broken, don't fix it), and it allows the long running ones to be evaluated for (higher degrees of) parallelism. This makes sense because the longer running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement
    Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree of any given statement that will be evaluated. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. This parameter controls the degree on the entire cluster, and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5 minute queries from the previous paragraph, the limit of 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than a DOP of 32.

    Concurrently Running a High DOP
    A basic assumption about running high DOP statements at high concurrency is that you at some point in time (and this is true on any parallel processing platform!) will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest efficiency DOP. The PARALLEL_SERVERS_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVERS_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP, which equates to 4 * Default DOP (2 processes for a DOP, default value is 2 * Default DOP, hence a default of 4 * Default DOP). Let's say we have PARALLEL_SERVERS_TARGET set to 128. With our limit set to 32 (the default), we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVERS_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVERS_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters and set them based on your requirements.
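    Pulling the worked example together, a minimal sketch of the relevant 11.2 initialization parameters is shown below; the values are the ones used in this post, not recommendations, and the sketch assumes Auto DOP is being enabled via PARALLEL_DEGREE_POLICY.

        -- Sketch only: thresholds and caps from the example above.
        ALTER SYSTEM SET parallel_degree_policy      = AUTO SCOPE = BOTH;  -- enable Auto DOP and queuing
        ALTER SYSTEM SET parallel_min_time_threshold = 30   SCOPE = BOTH;  -- only 30s+ statements considered
        ALTER SYSTEM SET parallel_degree_limit       = 32   SCOPE = BOTH;  -- cap each statement at DOP 32
        ALTER SYSTEM SET parallel_servers_target     = 128  SCOPE = BOTH;  -- ~4 such statements before queuing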

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I'm sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point…
    On the use cases that were mentioned:
    - ETL – how can I do complex data transformation at scale
    - Doing Basel III liquidity analysis
    - Private banking – transaction filtering to feed [relational] data marts
    - Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360 degree view of customers – become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer
    - Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross-reference activity across business and geographies]
    - Data Mining – bypass data engineering [I interpret this as running against the full data set rather than on samples]
    - Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords?
    - Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline
    One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools and Big Data reporting, and tools should work with those tools rather than be separate. Security and entitlement – how to protect data within a large cluster from unwanted snooping – was another topic that came up. I thought his elephant-ears graph was interesting (couldn't actually read the points on it, but the concept certainly made some sense), and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years.
    Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases:
    - Determine the strategy – design a platform and evangelize this within the organization
    - Focus on the people – hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience
    - Work on execution of the strategy – implement the platform, with Hadoop next to the other technologies, and work toward the DaaS platform
    This kind of fitted with some of the LinkedIn comments, best summarized in "Think Platform – Think Hadoop". In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other, I got the impression that the capabilities of today's relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference; it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I'm going to be back next year!

    Read the article

  • Live from ODTUG - Big Data and SQL session #2

    - by Jean-Pierre Dijcks
    Sitting in Dominic Delmolino's session at ODTUG (KScope 12). If the session count at conferences is any indication, then we will see more and more people start to deploy MapReduce in the database. And yes, that would be with SQL and PL/SQL first and foremost. Both Dominic and our own Bryn Llewellyn are giving MapReduce-in-the-database presentations. Since I have seen both, I would advise people to first look through Dominic's session to get a good grasp of what mappers do and what reducers do, then dive into Bryn's for a bunch of PL/SQL examples. The thing I like about Dominic's is the last slide (a recursive WITH statement) to do this in SQL... Now I am hoping that next year we will see tools vendors show off how they work with Hadoop and MapReduce (at least talking about the concepts!!).

    Read the article

  • How to assign foo.example.com to one IP address and example.com to a different one?

    - by Guillaume Pierre
    Say that I have a domain name called example.com and two servers located at two different IP addresses: 1.2.3.4 and 6.7.8.9. How could I assign example.com to 1.2.3.4 and the subdomain foo.example.com to 6.7.8.9? [EDIT] I did try to put an A record linking @ to the first IP address and foo.example.com to the second IP address, as illustrated below. And I did configure a vhost called foo.example.com on my server at IP address 2. The @ record works. But after 3 hours waiting for the result (DNS propagation delay), nothing happened with foo.example.com, which resolves to nothing. Why?
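    In BIND-style zone file terms, what the question describes is just two independent A records; a hypothetical fragment using the IP addresses from the question would be:

        ; example.com zone fragment (illustrative only)
        $ORIGIN example.com.
        @    IN  A  1.2.3.4    ; example.com     -> server 1
        foo  IN  A  6.7.8.9    ; foo.example.com -> server 2

    Querying the provider's own name server directly, for example with dig @ns1.example-provider.com foo.example.com A (the name server here is a placeholder), is a quick way to check whether the foo record was actually published or is still missing.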

    Read the article

  • Initializing Problem when using the Catalyst Control Center with fglrx

    - by Pierre
    I activated (or installed) the Catalyst 11.8 driver, which I think is the non-post-release one. When I go into Applications - Other - Catalyst Control Center, I get an error saying "There was a problem initializing Catalyst Control Center Linux edition..." Also, when I open up a terminal and run fglrxinfo I get an X error; it says BadRequest (invalid request code or no such operation). I'm not using Unity, I'm using GNOME Classic. The video card is an ATI Mobility Radeon HD 5470.

    Read the article
