Search Results

Search found 485 results on 20 pages for 'jean nicolas'.

Page 7/20 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • When Web page is displayed in a div, Error: Unable to modify the parent container element before the child element is closed

    - by Jean
    Hello, a URL is being displayed in a div as shown below:

        <div id="ipage">
        <?
        $url = "http://www.yahoo.com";
        $file1 = fopen($url, "r");
        $content = file_get_contents($url);
        echo $content;
        ?>
        </div>

    And I receive this error:

        Webpage error details
        Message: HTML Parsing Error: Unable to modify the parent container element
        before the child element is closed (KB927917)

    Any bright ideas to solve this? Thanks, Jean
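
    One common way around KB927917 is to inject the remote markup only after the page has finished parsing, instead of echoing it while the parent div is still open. A minimal sketch, assuming a small proxy script and jQuery are available (the proxy.php name and the use of jQuery are assumptions, not part of the question):

        <!-- proxy.php: fetch the remote page server-side and return its markup -->
        <?php
        echo file_get_contents("http://www.yahoo.com");
        ?>

        <!-- the page that shows it -->
        <div id="ipage"></div>
        <script>
        // Fill the div only after the document has been parsed, so the
        // parent container is already closed when its contents change
        $(function () {
            $('#ipage').load('proxy.php');
        });
        </script>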

    Read the article

  • Get ids from one div to another div

    - by Jean
    Hello, here are the two divs:

        <div id="1">
          <span id="a1">first</span>
          <span id="a2b">first</span>
          <span id="a3">first</span>
        </div>
        Click
        <div id="2"></div>

    And the jQuery code:

        $('#2').bind('click', function(){
            var xx = $('#a'+i).attr('id');
            $('#2').append(xx);
            i = i + 1;
        });

    It does not get all 3 ids from #1. Thanks, Jean
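
    A minimal sketch of one way to copy every span id across in a single click, assuming the markup above. The container is looked up with getElementById because a bare #1 is not a valid CSS id selector:

        $('#2').bind('click', function () {
            // walk every span inside the first div and copy its id across
            $('span', document.getElementById('1')).each(function () {
                $('#2').append(this.id + ' ');
            });
        });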

    Read the article

  • Keep text on left and middle of the div, not center

    - by Jean
    Hello, I want to keep the text on the left, but vertically in the middle of a div.

        <div id=sel>text goes here</div>

    and my CSS for the same:

        sel {
            text-align: left;
            vertical-align: middle;
        }

    The characters and lines of the text may vary. I am mostly concerned with a single line of text that currently sits at the top. I do not want to use position:absolute/relative. Appreciate all help. Thanks, Jean
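
    For the single-line case the usual trick is to make the line-height match the height of the div (the selector also needs a # to match id=sel). A minimal sketch, with the 40px height picked purely for illustration:

        #sel {
            text-align: left;
            height: 40px;        /* whatever the div's real height is */
            line-height: 40px;   /* single line: matching line-height centres it vertically */
        }

    For multi-line text, display:table-cell with vertical-align:middle on the div is a common alternative that also avoids absolute positioning.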

    Read the article

  • on .bind('click') it is not deleting the first div

    - by Jean
    Hello, when I click on a particular div, that div should fade out. Simple, but when I click on one of the divs it deletes the div on top of the stack, i.e. when I click #sel6 it removes #sel5.

    HTML code:

        <div id="selc" class="selc" style="position:absolute; left:15px; top:200px; width:260px;">
          <div id="sel5" class="sel">something</div>
          <div id="sel6" class="sel">something</div>
          <div id="sel7" class="sel">something</div>
        </div>

    jQuery code (sel_id and sel_1 are variables):

        $('.selc').bind('click', function(){
            var sel_id = $('.sel').attr('id');
            alert(sel_id);
            $('#'+sel_id).fadeOut('slow');
            $('#'+sel_id).remove();
            $('.search_box').append(sel_1);
        });

    Thanks, Jean
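
    $('.sel').attr('id') always returns the id of the first matching element, which is why #sel5 disappears. A minimal sketch that binds on the inner divs instead, so "this" is the div that was actually clicked (the append of sel_1 is left out since its value is not shown in the question):

        $('.sel').bind('click', function () {
            // "this" is the clicked div, so no class lookup is needed
            $(this).fadeOut('slow', function () {
                // remove it only once the fade animation has finished
                $(this).remove();
            });
        });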

    Read the article

  • I want to get the value of an id from a nested div - jquery

    - by Jean
    Hello, I want to obtain the .text() of #inner2.

        <div class="outer" id="outer">
          <div id="inner1" class="inner">test1</div>
          <div id="inner2" class="inner">test2</div>
          <div id="inner3" class="inner">test3</div>
        </div>

    This is the jQuery function I am using:

        $('.outer').bind('click', function() {
            var one = $('#inner'+x).attr('id');
            alert(one);
        });

    The problem is that the first #id value is shown in the alert. Thanks, Jean
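
    A minimal sketch that binds directly on the inner divs, so the handler always reports the one that was actually clicked rather than the first match:

        $('.inner').bind('click', function () {
            // "this" is whichever inner div received the click
            alert(this.id + ': ' + $(this).text());
        });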

    Read the article

  • How to install a downloaded Ruby gem file?

    - by JCLL
    How does "gem install" work? It is not intuitive... My gem is really here:

        [root@localhost Téléchargement]# ll *.gem
        -rw-rw-r-- 1 jean jean 16353818 mar 5 11:39 ruby-processing-1.0.9.gem

    But an idiomatic "gem install" does not see it...

        [root@localhost Téléchargement]# gem install ruby-processing-1.0.9.gem
        ERROR: could not find gem ruby-processing-1.0.9.gem locally or in a repository

    What's wrong with that? Thx JC
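
    A couple of things worth trying, sketched below; behaviour differs between RubyGems versions, so this is a guess rather than a definite fix:

        # force RubyGems to treat the argument as a local file in the current directory
        gem install --local ruby-processing-1.0.9.gem

        # or spell out the path to the file explicitly
        gem install ./ruby-processing-1.0.9.gem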

    Read the article

  • Display all feed items using simplexml_load_file() in PHP

    - by Jean
    Hello, I want to loop to get all feed items, but it displays only one.

        $url = "http://localhost/feeds/feeds.rss";
        $xml = simplexml_load_file($url);
        foreach($xml->item as $result){
            echo $result->description."<br>";
        }

    The RSS feed is:

        <channel>
          <title>/</title>
          <link>/</link>
          <atom:link type="application/rss+xml" href="/" rel="self"/>
          <description>/</description>
          <language>/</language>
          <ttl>/</ttl>
          <item>
            <title>/</title>
            <description>/</description>
            <pubDate>/</pubDate>
            <guid>/</guid>
            <link>/</link>
          </item>
          <item>
            <title>/</title>
            <description>/</description>
            <pubDate>/</pubDate>
            <guid>/</guid>
            <link>/</link>
          </item>
        </channel>

    Thanks, Jean
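
    If the feed has the usual <rss> root element around the <channel> shown above, the items sit one level deeper than the loop expects. A minimal sketch under that assumption:

        <?php
        $url = "http://localhost/feeds/feeds.rss";
        $xml = simplexml_load_file($url);

        // with an <rss><channel>...</channel></rss> wrapper the items
        // are children of channel, not of the document root
        foreach ($xml->channel->item as $item) {
            echo $item->description . "<br>";
        }
        ?>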

    Read the article

  • JQuery SelectList Option Changed doesn't refresh

    - by Jean-Philippe
    Hi. I have this select list:

        <select url="/Admin/SubCategories" name="TopParentId" id="ParentList">
          <option value="">Please select a parent category</option>
          <option value="1" selected="selected">New Store</option>
          <option value="2">Extensions</option>
          <option value="3">Remodel</option>
          <option value="4">Bespoke Signage</option>
          <option value="5">Tactical Signage</option>
          <option value="6">Size Chart</option>
          <option value="7">Contact Info</option>
        </select>

    As you can see, option 1 is marked as selected. When I change the selection, I use this code to do an AJAX call to get some values to populate a new select list:

        $("#ParentList").unbind("change");
        $("#ParentList").change(function() {
            var itemId = $(this).val();
            var url = $(this).attr("url");
            var options;
            $.getJSON(url, itemId, function(data) {
                var defaultoption = '<option value="0">Please select a sub-category</option>';
                options += defaultoption;
                $.each(data, function(index, optionData) {
                    var option = '<option value="' + optionData.valueOf + '">' + optionData.Text + '</option>';
                    options += option;
                });
                $("#SubParentList").html(options);
            });
        });

    My problem is that whenever I change the selection, itemId is always the id of option 1, because it is marked as selected. It doesn't pick up the value of the option it is being changed to. Can someone please share their knowledge? Regards, Jean-Philippe
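
    Two details in the snippet are worth tightening up regardless of the selection issue: $.getJSON is handed a bare scalar instead of a named parameter (the "id" name below is a guess at what the /Admin/SubCategories action expects), and options starts out undefined, so the string "undefined" ends up in front of the generated markup. optionData.Value is likewise a guess at the intended property, since .valueOf refers to a built-in function. A sketch:

        $("#ParentList").unbind("change").change(function () {
            var itemId = $(this).val();          // value of the option now selected
            var url = $(this).attr("url");

            // send the id under a name the server action can bind to
            $.getJSON(url, { id: itemId }, function (data) {
                var options = '<option value="0">Please select a sub-category</option>';
                $.each(data, function (index, optionData) {
                    options += '<option value="' + optionData.Value + '">' +
                               optionData.Text + '</option>';
                });
                $("#SubParentList").html(options);
            });
        });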

    Read the article

  • get id parameter and increment count for each click in jquery

    - by Jean
    Hello, I want to get the id value of c_i, and each time c_i is clicked I want to increment the value held in chk_count, using jQuery.

        <div id='d<? echo $i; ?>' style='margin-bottom:8px; border:#cccccc thin solid; height:25px;'>
          <span style='color:#cccccc; margin-right:5px;'><? echo $i; ?></span>
          <span><? echo $row_dw_all['d1e']; ?></span>
          <span style='position:absolute;right:0px;'>
            <input type='checkbox' name='c_i<? echo $i; ?>' id="c_i<? echo $i; ?>" value='<? echo $row_dw_all['dle']; ?>'>
          </span>
        </div>
        <input type="hidden" name="chk_count" id="chk_count" value="" />

    Thanks, Jean
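
    A minimal sketch of one way to do this with an attribute-starts-with selector, assuming every checkbox id begins with c_i as in the markup above:

        // start the hidden counter at zero
        $('#chk_count').val(0);

        $('input[id^="c_i"]').bind('click', function () {
            // report which checkbox was clicked ...
            alert(this.id);
            // ... and bump the running count kept in the hidden field
            var count = parseInt($('#chk_count').val(), 10) || 0;
            $('#chk_count').val(count + 1);
        });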

    Read the article

  • Oracle Communications Data Model

    - by jean-pierre.dijcks
    I've mentioned OCDM in previous posts but found the following podcast on the topic (see the end of the post) and figured it is worthwhile to spread the news some more. ORetailDM and OCommunicationsDM are the two data models currently available from Oracle. Both are intended to capture:
    Business best practices and industry knowledge
    Pre-built advanced analytics intended to predict future events before they happen (like the Churn model shown below)
    Oracle technology best practices to ensure optimal performance of the model
    All of this typically comes with a reduced time to implementation, or as the marketing slogan goes, reduced time to value. Here are the links:
    Podcast on OCDM
    OTN pages for OCDM and ORDM

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

    What is Big Data anyway?

    The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

    NoSQL and MapReduce

    Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MapReduce. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting; some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MapReduce will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...

    Vertical Behavioral Analytics

    This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data.
    That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples are emerging, however, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting.

    Building a Vertical Behavioral Analysis System

    Well, that is going to be interesting. I have not seen much going on in that space, and if I had to offer some criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really about big data; it used a sample of data from the customer's data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you will do this in the data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse, so why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100 TB to 500 TB of distilled data...

    Comments?

    As it is still early days with these systems, I'm very interested in seeing reactions and responses to some of these thoughts...

    Read the article

  • Collaborate10 – THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEhotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.

    THEanalytics

    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held: two solid days of BI, warehousing and analytics organized by the BIWA SIG at IOUG. I didn't get to see all the sessions, but what struck me was the high interest in analytics. Marty Gubar's OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... It is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, just pure Excel.

    THEdatabasemachine

    I got some very good questions in my "what makes Exadata fast" session, and overall the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common habit of over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2. 5 TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point to a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids

    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at Rittman Mead spent so much time preparing for that session. While all of them put down cool stuff, none was cooler than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)

    THEend

    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. Dan is running on a big system with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

    Q: How do I control statements with very high DOPs driven from user hints in queries?

    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

    Q: What happens if I set parallel_max_servers higher than processes (i.e. the maximum number of processes allowed to run on my machine)?

    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than max processes you can never go over either...
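
    A minimal sketch of the kind of DBRM directive referred to above, capping a consumer group at DOP 32 (the plan and consumer group names are invented for the example and are assumed to exist already):

        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          -- cap every statement issued by this consumer group at DOP 32,
          -- regardless of table decoration or hints asking for more
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan                     => 'DAYTIME_PLAN',
            group_or_subplan         => 'ADHOC_QUERIES',
            comment                  => 'cap ad-hoc statements at DOP 32',
            parallel_degree_limit_p1 => 32);
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /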

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: the PWJ is the basis for a shared nothing system and the only join method available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over again, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods.

    The Theory

    A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning, locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets.

    PWJs in Oracle

    Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option to the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: how many partitions should I create, and what should my DOP be? In a shared nothing system the answer is of course as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know those, to apply the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size, and we can scan this in time t1. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words, to scan these 5 partitions the time t2 it takes is not 1/5th more expensive; it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get partition wise joins:
    First, choose a DOP that is a power of two. So always choose something like 2, 4, 8, 16, 32 and so on...
    Second, choose a number of partitions that is larger than or equal to 2 * DOP.
    Third, make sure the number of partitions is divisible by 2 without orphans. This is also known as an even number...
    Fourth, choose a stable partition count strategy, which is typically hash, which can be a sub-partitioning strategy rather than the main strategy (range - hash is a popular one).
    Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...).
    Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
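
    A minimal sketch of a setup that follows those rules of thumb and allows a full partition wise join (the table and column names are purely illustrative): both tables hash partitioned 32 ways on the join key, with a DOP of 8.

        CREATE TABLE sales (
          cust_id NUMBER,
          amount  NUMBER
        )
        PARTITION BY HASH (cust_id) PARTITIONS 32
        PARALLEL 8;

        CREATE TABLE customers (
          cust_id NUMBER,
          name    VARCHAR2(100)
        )
        PARTITION BY HASH (cust_id) PARTITIONS 32
        PARALLEL 8;

        -- joining on the shared partitioning key lets each PX process join
        -- matching partition pairs without redistributing rows
        SELECT c.name, SUM(s.amount)
        FROM   sales s JOIN customers c ON (s.cust_id = c.cust_id)
        GROUP  BY c.name;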

    Read the article

  • Big Data Accelerator

    - by Jean-Pierre Dijcks
    For everyone who does not regularly listen to earnings calls, Oracle's Q4 call was interesting (as it mostly is). One of the announcements in the call was the Big Data Accelerator from Oracle (Seeking Alpha link here - slightly tweaked for correctness shown below):  "The big data accelerator includes some of the standard open source software, HDFS, the file system and a number of other pieces, but also some Oracle components that we think can dramatically speed up the entire map-reduce process. And will be particularly attractive to Java programmers [...]. There are some interesting applications they do, ETL is one. Log processing is another. We're going to have a lot of those features, functions and pre-built applications in our big data accelerator."  Not much else we can say right now, more on this (and Big Data in general) at Openworld!

    Read the article

  • "Software sources" crashes since Ubuntu updated from 12.04 to 12.10

    - by Jean-Sebastien
    First of all, sorry for my English, it is not my native language. I recently updated my PC from Ubuntu 12.04 to 12.10. Now “Software sources” crashes when I try to open it directly from Unity or from Ubuntu Software Center → Software sources. When I try to open “Update manager”, I get the following error message. Note that the internet connection works!

        W:Failed to fetch http://ppa.launchpad.net/rye/ubuntuone-extras/ubuntu/dists/quantal/main/source/Sources 404 Not Found
        ...
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Please, can somebody help me with this? JS
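
    The 404 suggests the rye/ubuntuone-extras PPA simply has no packages published for quantal. One common workaround, sketched below, is to disable that PPA and refresh the package lists (re-add it later if it ever gains quantal packages):

        # drop the PPA that returns 404 for quantal, then refresh the package lists
        sudo add-apt-repository --remove ppa:rye/ubuntuone-extras
        sudo apt-get update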

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers as indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true; the above is why you will see 2 times the number of processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan: first of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...
    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here:
    ODTUG in Washington DC, June 27 - July 1
    On the Optimizer blog
    At OpenWorld in San Francisco, September 19 - 23
    Happy joining and hope to see you all at ODTUG and OOW...
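
    A quick way to see which shape a join takes is to pull the plan yourself; a sketch against the illustrative SALES/CUSTOMERS tables from the previous post. In a full partition wise join the hash join sits directly under a PX PARTITION HASH iterator with a single PX server set, instead of between a PX SEND / PX RECEIVE pair:

        EXPLAIN PLAN FOR
          SELECT /*+ PARALLEL(8) */ c.cust_id, SUM(s.amount)
          FROM   sales s JOIN customers c ON (s.cust_id = c.cust_id)
          GROUP  BY c.cust_id;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);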

    Read the article

  • Data Warehouse Best Practices

    - by jean-pierre.dijcks
    In our quest to share our endless wisdom (ahem…) one of the things we figured might be handy is recording some of the best practices for data warehousing. And so we did. And, we did some more… We now have recreated our websites on Oracle Technology Network and have a separate page for best practices, parallelism and other cool topics related to data warehousing. But the main topic of this post is the set of recorded best practices. Here is what is available (and it is a series that ties together but can be read independently), applicable for almost any database version:
    Partitioning
    3NF schema design for a data warehouse
    Star schema design
    Data Loading
    Parallel Execution
    Optimizer and Stats management
    The best practices page has a lot of other useful information so have a look here.

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes sense to talk about the new parallel features in more detail. For basic understanding make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here too. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP

    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point we would see the performance curve tail off and the resource cost / performance become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.

    The goal of Queuing

    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:
    Don't throttle down a DOP because other statements are running on the system
    Stay within the physical limits of a system's processing power
    Instead of making statements go at a lower DOP we queue them, to make sure they will get all the resources they want to run efficiently without trashing the system. The theory - and hopefully the practice - is that by giving a statement the optimal DOP the sum of all statements runs faster with queuing than without queuing.

    Increasing the Number of Potential Parallel Statements

    To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here…, but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system with two groups of queries, serial short-running and potentially parallel long-running ones, you may want to worry only about the long-running ones with this parallel statement threshold. As an example, let's assume the short-running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long-running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold to somewhere north of 30 seconds. That way the short-running queries all run serial as they do today (if it ain't broken, don't fix it) and the long-running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer-running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement

    Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, the limit of 32 means that none of the statements that are evaluated for Auto DOP ever runs at more than a DOP of 32.

    Concurrently Running a High DOP

    A basic assumption about running high DOP statements at high concurrency is that at some point in time (and this is true on any parallel processing platform!) you will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post… The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest-efficiency DOP. The PARALLEL_SERVERS_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVERS_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP; its default equates to 4 * Default DOP (2 processes per DOP, and a default value of 2 * Default DOP, hence a default of 4 * Default DOP processes). Let's say we have PARALLEL_SERVERS_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVERS_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVERS_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters and set them based on your requirements.
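
    Pulling the parameters discussed above together, a minimal sketch of what the example settings would look like (the values are the ones used in the text, not recommendations, and they assume Auto DOP is wanted in the first place):

        -- enable Auto DOP plus statement queuing
        ALTER SYSTEM SET parallel_degree_policy      = AUTO SCOPE = BOTH;
        -- only statements estimated to run longer than 30 seconds are considered
        ALTER SYSTEM SET parallel_min_time_threshold = 30   SCOPE = BOTH;
        -- no statement gets a DOP above 32
        ALTER SYSTEM SET parallel_degree_limit       = 32   SCOPE = BOTH;
        -- start queuing once 128 parallel server processes are in use
        ALTER SYSTEM SET parallel_servers_target     = 128  SCOPE = BOTH;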

    Read the article

  • Oracle Database Machine and Exadata Storage Server

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Oracle Database Machine and Exadata Storage Server (Doc ID 1187674.1)
    This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Exadata and Oracle Database Machine environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories:
    • Database Machine and Exadata Storage Server Concepts and Overview
    • Database Machine and Exadata Storage Server Configuration and Administration
    • Database Machine and Exadata Storage Server Troubleshooting and Debugging
    • Database Machine and Exadata Storage Server Best Practices
    • Database Machine and Exadata Storage Server Patching
    • Database Machine and Exadata Storage Server Documentation and References
    • Database Machine and Exadata Storage Server Known Problems
    • ASM and RAC Documentation
    • Using My Oracle Support Effectively

    Read the article

  • Assembly load and execute issue

    - by Jean Carlos Suárez Marranzini
    I'm trying to develop assembly code allowing me to load and execute (chosen by user input) 2 other assembly .EXE programs. I'm having two problems:
    - I don't seem to be able to assign the pathname to a valid register (or maybe the syntax is incorrect)
    - I need to be able to execute the other program after the first one (could be either) has started its execution.
    This is what I have so far:

        mov ax,cs               ; moving code segment to data segment
        mov ds,ax
        mov ah,1h               ; here I read from keyboard
        int 21h
        mov dl,al
        cmp al,'1'              ; if 1 jump to LOADRUN1
        JE LOADRUN1
        popf
        cmp al,'2'              ; if 2 jump to LOADRUN2
        JE LOADRUN2
        popf

        LOADRUN1:
        MOV AH,4BH
        MOV AL,00
        LEA DX,[PROGNAME1]      ; Not sure if it works
        INT 21H

        LOADRUN2:
        MOV AH,4BH
        MOV AL,00
        LEA DX,[PROGNAME2]      ; Not sure if it works
        INT 21H

        ; Here I define the bytes containing the pathnames
        PROGNAME1 db 'C:\Users\Usuario\NASM\Adding.exe',0
        PROGNAME2 db 'C:\Users\Usuario\NASM\Substracting.exe',0

    I just don't know how to start another program by input in the 'parent' program, after one is already executing. Thanks in advance for your help! Any additional information I'll be more than happy to provide. I'm using NASM, 16 bits, Windows 7 32 bits.

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple, he addressed some real use cases outside of internet and ad platforms. The following are my notes, since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned: ETL – how can I do complex data transformation at scale Doing Basel III liquidity analysis Private banking – transaction filtering to feed [relational] data marts Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere 360 Degree view of customers – become pro-active and look at events across lines of business. For example make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross reference activity across business and geographies] Data Mining Bypass data engineering [I interpret this as running a lot of a large data set rather than on samples] Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forget his password, or is it someone in some other country trying to guess passwords Trade quality analysis – do a batch analysis or all trades done and run them through an analysis or comparison pipeline One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI Tools and Big Data reporting and tools should work with those tools, rather than be separate. Security and Entitlement – how to protect data within a large cluster from unwanted snooping was another topic that came up. I thought his Elephant ears graph was interesting (couldn’t actually read the points on it, but the concept certainly made some sense) and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with Database technologies. I thought this one of the best sessions I have seen in a long time. It discussed real use case, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases: Determine the Strategy – Design a platform and evangelize this within the organization Focus on the people – Hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience Work on Execution of the strategy – Implement the platform Hadoop next to the other technologies and work toward the DaaS platform This kind of fitted with some of the Linked-In comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform. 
One general observation, I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other I got the impression that the capabilities of today’s relational databases are underestimated. Mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference, it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article

  • Live from ODTUG - Big Data and SQL session #2

    - by Jean-Pierre Dijcks
    Sitting in Dominic Delmolino's session at ODTUG (KScope 12). If the session count at conferences is any indication, then we will see more and more people start to deploy MapReduce in the database. And yes, that would be with SQL and PL/SQL first and foremost. Both Dominic and our own Bryn Llewellyn are doing MapReduce-in-the-database presentations. Since I have seen both, I would advise people to first look through Dominic's session to get a good grasp of what mappers do and what reducers do, then dive into Bryn's for a bunch of PL/SQL examples. The thing I like about Dominic's is the last slide (a recursive WITH statement) to do this in SQL... Now I am hoping that next year we will see tool vendors show off how they work with Hadoop and MapReduce (at least talking about the concepts!!).

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >