Search Results

Search found 3179 results on 128 pages for 'merge replication'.


  • links for 2011-02-02

    - by Bob Rhubart
    Oracle WebLogic's YouTube Channel Tired of reading? Watch and learn.  (tags: oracle weblogic youtube) Chris Muir: JDev: ADF BC and ADF Libraries: The Library Private property Oracle ACE Director Chris Muir illustrates the use of ADF Libraries. (tags: oracle otn oracleace) Satyajith Nair: Starting a SOA Project? Don't forget the Service Registry!! Satyajith Nair argues that "even with small projects, adding a Service Registry in the architecture has significant benefits with regards to Developer Productivity, Build Quality, Ease of maintenance and Architecture Flexibility." (tags: oracle infosys soa serviceregistry soagovernance) Matthias Wessendorf: Weblog Apache MyFaces in the cloud - Part II Matthias Wessendorf says: "A few days ago I did a quick test to see how well Apache MyFaces works in the cloud. The Amazon Beanstalk test was a success." (tags: oracle apache cloud) SOA & E2.0 Partner Community Forum - registration is open! (SOA Partner Community Blog) Jurgen Kress with an update on next month's event in the Netherlands. (tags: oracle otn soa enterprise2.0) Well Hello There, WebCenter PS3 VM | The AppsLab Jake Kuramoto says: "You may recall my struggle last year to assemble and document a WebCenter VM. Well, apparently other people thought it was a good idea..." (tags: oracle otn enterprise2.0 webcenter) InfoQ: Proving the Business Value of SOA Anne Thomas Manes discusses SOA goals and objectives used to measure a system’s success, measuring the business value, SOA project metrics, and recommendations on proving the business value of SOA. (tags: ping.fm) InfoQ: Amazon Will Offer Oracle Database 11g on RDS Amazon will offer Oracle Database 11g on RDS which brings patching, backup, replication, and failover support to Oracle’s database. (tags: ping.fm)

    Read the article

  • MySQL HA Events in the UK, Germany & France

    - by Bertrand Matthelié
    Oracle is running MySQL High Availability breakfast seminars in London, Düsseldorf and Paris. During these free seminars, we will review the various options and technologies at your disposal to implement highly available and highly scalable MySQL infrastructures, as well as best practices in terms of architecture. The packed agenda will include High Availability features in MySQL 5.5, MySQL Replication, MySQL Cluster, the newly released Oracle VM Template for MySQL Enterprise Edition, as well as what's new in MySQL 5.6 and MySQL Cluster 7.2, including NoSQL access methods to MySQL. There are a few places left for the seminar in London taking place tomorrow, June 29th: Register Now! Learn more and register for the event in Düsseldorf, July 13th. Learn more and register for the event in Paris, September 7th. We hope to see you there!

    Read the article

  • WebLogic 12 hands-on bootcamps for partners–new dates & locations

    - by JuergenKress
    We offer free two-day hands-on WebLogic 12c workshops for Oracle partners who want to become WebLogic Specialized: Register Here! Highlights of the workshop: Environment Setup and Weblogic Installation hands-on lab, Weblogic Session Sharing hands-on lab, Coherence hands-on lab, WLS Session Replication with Coherence Web hands-on lab, Weblogic Troubleshooting hands-on lab, Weblogic JMS hands-on lab, Exalogic & Oracle Cloud overview, and Oracle Enterprise Manager overview. Quotes from previous workshops: "Oracle trainings are the best" Pedro Neto, Novabase; "Excellent training, well organized" Pedro Antunh, Capgemini; "This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps" Leonardo Fernandes, Outsystems. The event dates are as follows: Belgium 3rd - 4th October 2012, Oracle Vilvoorde; South Africa 3rd - 4th October 2012, Oracle Johannesburg; Switzerland 25th - 26th October 2012, Oracle Baden-Dättwil; Denmark 30th - 31st October 2012, Oracle Ballerup; Norway 6th - 7th November 2012, Oracle Lysaker; Netherlands 18th - 20th December 2012, Oracle Utrecht. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: WebLogic Bootcamp, WebLogic training, education, training, PTS, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices. Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, Snowflake Schema, and Connected Replication pattern. Upgrading ALSB services to OSB | John Chin-a-Woeng John Chin-a-Woeng walks you through the upgrade from Aqualogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3). SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey Commiskey illustrates a scenario. JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema The "Test WebService" button in the WSDL Editor in JDeveloper 11gr2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same." Enterprise Business Intelligence 11g Seminar with Mark Rittman Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011. Data Integration Webcast Series Join Oracle experts for a series covering our data integration solutions. You’ll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain type of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: Prevent any change to A's account while the transfer is taking place. That boils down to locking. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice. Don't do either. Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, those two entries can be collapsed into one. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule?" - then boils down to: Insert the entry in the ledger. Run validation of recent entries. Insert a reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly on too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
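    A minimal sketch of that ledger flow with pymongo. The connection string, database, collection and field values are illustrative placeholders, not anything from the original post:

    import datetime
    from pymongo import MongoClient

    # Hypothetical connection and collection, for illustration only.
    client = MongoClient("mongodb://localhost:27017/")
    ledger = client.bank.ledger

    def record_transfer(from_account, to_account, amount):
        """Append one immutable ledger entry describing the transfer."""
        entry = {
            "Timestamp": datetime.datetime.utcnow(),
            "FromAccountId": from_account,
            "ToAccountId": to_account,
            "Amount": amount,
        }
        ledger.insert_one(entry)
        return entry

    def balance_of(account_id):
        """Sum every credit and debit the ledger holds for an account."""
        credits = sum(e["Amount"] for e in ledger.find({"ToAccountId": account_id}))
        debits = sum(e["Amount"] for e in ledger.find({"FromAccountId": account_id}))
        return credits - debits

    # Insert the entry, validate, and insert a reverse entry if validation fails.
    record_transfer("A", "B", 100)
    if balance_of("A") < 0:
        record_transfer("B", "A", 100)  # compensating entry rolls the transfer back
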
    In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transaction" issue. Now let's turn our attention to consistency. What you should know about mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: Run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, mongo offers several write-concerns. Write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, ranging from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of the replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is not different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance, and doesn't make use of the other read-only servers. But this choice can be made on a query by query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
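    For illustration, here is roughly how the write-concern and read-preference knobs look from pymongo; the host and concern levels are placeholders, and the point is only that durability and "read from the writable instance" are per-collection or per-query choices:

    from pymongo import MongoClient, ReadPreference, WriteConcern

    client = MongoClient("mongodb://localhost:27017/")
    db = client.bank

    # Durable write: wait until a majority of the replica set has journaled
    # the entry to disk before the driver reports success.
    durable_ledger = db.ledger.with_options(
        write_concern=WriteConcern(w="majority", j=True)
    )
    durable_ledger.insert_one({"FromAccountId": "A", "ToAccountId": "B", "Amount": 100})

    # The transferring user's "are we there yet?" check reads from the
    # primary (writable) member...
    primary_ledger = db.get_collection("ledger", read_preference=ReadPreference.PRIMARY)
    pending = list(primary_ledger.find({"FromAccountId": "A"}))

    # ...while everyone else reads from a secondary and tolerates some lag.
    lagged_ledger = db.get_collection(
        "ledger", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
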
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean that the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never shows customers a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: Replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
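    A sketch of those last two ideas, the delayed-visibility query and pinning each origin account to one validator. The field names, the "Validated" flag and the modulo split of the account-number space are made up for illustration:

    import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    ledger = client.bank.ledger

    NUM_VALIDATORS = 11                                # illustrative worker count
    VISIBILITY_DELAY = datetime.timedelta(minutes=15)

    def validator_for(account_id):
        """A given origin account is always handled by the same validator."""
        return account_id % NUM_VALIDATORS

    def customer_visible_entries(account_id):
        """Customers only see entries that are validated, or old enough that
        replication and validation have had time to catch up."""
        cutoff = datetime.datetime.utcnow() - VISIBILITY_DELAY
        return ledger.find({
            "$and": [
                {"$or": [{"FromAccountId": account_id}, {"ToAccountId": account_id}]},
                {"$or": [{"Validated": True}, {"Timestamp": {"$lte": cutoff}}]},
            ]
        })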

    Read the article

  • Data Movement and the Decision Matrix

    - by BuckWoody
    Maybe it’s my military background, or maybe I’ve always had this predilection, but I like to use two devices when I need to make a complex decision: A checklist and a decision matrix. I like to use a checklist because it ensures that I remember the big bits of what I need to do, and brings up questions or areas that I didn’t think about when evaluating options for the decision. And the decision matrix – that’s the thing I use to actually lay out those options. It’s simply a spreadsheet-like grid (I use Excel, but paper and pencil works as well) that lays out the requirements or advantages for the decision across the top, and the options I have on the left-hand side. Then in the “cells” I put whether or not that option on the left will meet the requirement in that column. I then simply “weight” each cell to organize the choices by best-fit. The right answer (or answers) will float right to the top. I was asked yesterday about options for moving data in SQL Server to another system. There are just dozens of ways to do this, from bcp to Replication, each with certain advantages and costs. But asking the questions for the top row first helped me show the person that it isn’t a particular technology that is important, it’s laying out those requirements and thinking about which elements are more important than the other. For instance, is it more important to have the data moved all the time, or is it OK if that happens once in a while? Does the data have to move in two directions or just one? All of these will help that answer jump right out. Try it sometime – it’s a great learning exercise, since it will force you to focus on filling out the matrix. The answer is out there, Neo.
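    As a toy illustration of that weighted grid, in Python rather than Excel; the options, requirements and weights below are invented, not Buck's:

    # Requirements across the top, each with a weight for how much it matters.
    weights = {"continuous movement": 3, "two-way sync": 2, "low admin effort": 1}

    # Options down the left; each cell scores how well the option meets the requirement.
    options = {
        "bcp":                       {"continuous movement": 0, "two-way sync": 0, "low admin effort": 3},
        "Transactional replication": {"continuous movement": 3, "two-way sync": 1, "low admin effort": 1},
        "Merge replication":         {"continuous movement": 3, "two-way sync": 3, "low admin effort": 1},
    }

    def score(cells):
        return sum(weights[req] * value for req, value in cells.items())

    # The best-fit options float right to the top.
    for name, cells in sorted(options.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(name, score(cells))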

    Read the article

  • How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

    - by Orgad Kimchi
    This article starts with a brief overview of MongoDB and follows with an example of setting up a three-node MongoDB cluster using Oracle Solaris Zones. The following are benefits of using Oracle Solaris for a MongoDB cluster:
    • You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
    • If there is a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
    • You can discover performance issues in minutes versus days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
    • ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
    In the example presented in this article, all the MongoDB cluster building blocks will be installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies. Figure 1 shows the architecture.
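    Once the three zones are running, one quick way to confirm the replica set is healthy from a client is something like the following; the zone host names and replica set name are placeholders for whatever the environment actually uses:

    from pymongo import MongoClient

    # Hypothetical zone host names and replica set name.
    client = MongoClient(
        "mongodb://zone1:27017,zone2:27017,zone3:27017/?replicaSet=rs0"
    )

    # Ask the replica set for its status and print each member's state.
    status = client.admin.command("replSetGetStatus")
    for member in status["members"]:
        print(member["name"], member["stateStr"])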

    Read the article

  • Disaster Recovery Example

    I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the server, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the affected server to get back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.
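    The off-site monitoring piece is easy to prototype. A minimal sketch in Python; the URL, SMTP host and addresses are placeholders, and a real monitor would also feed voicemail and SMS gateways:

    import smtplib
    import urllib.request
    from email.message import EmailMessage

    SITE_URL = "https://www.example.com/"               # placeholder site to watch
    IT_STAFF = ["[email protected]", "[email protected]"]  # placeholder addresses

    def site_is_up(url, timeout=10):
        """Return True if the site answers with HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    def alert(recipients, subject, body):
        """Email the on-call staff about the outage."""
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = "[email protected]"
        msg["To"] = ", ".join(recipients)
        msg.set_content(body)
        with smtplib.SMTP("smtp.example.com") as server:  # placeholder SMTP host
            server.send_message(msg)

    if not site_is_up(SITE_URL):
        alert(IT_STAFF, "Website outage detected", SITE_URL + " is not responding.")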

    Read the article

  • Oracle OpenWorld Highlights

    - by Doug Reid
    We are in the final days of Oracle OpenWorld 2012, and the data integration team has been hard at work giving sessions, meeting customers, demonstrating products and conducting hands-on labs. It has been a great conference, but the best part is meeting our customers and learning about all the great implementations of our products. Wednesday was the last day that the exhibition hall was open, and attendees were getting in their final opportunities to see our products and meet with the product management team. Two hours before the close of the hall, people lined up to learn about GoldenGate 11gR2, Monitor, Adapters, Veridata, and all the different use cases. Here's a picture of Sjaak Vossepoel, our DIS Sales Consulting Manager for EMEA, speaking to a potential customer about the options of using Oracle GoldenGate for heterogeneous data replication. Over the last two days, the GoldenGate team ran two labs: Introduction to Oracle GoldenGate Veridata and Deep Dive into Oracle GoldenGate. Both of the labs were completely booked out and unfortunately we had to turn away people. BUT, all of our labs were recorded recently, so if you were not able to get into the lab or did not have enough time to complete your labs, visit youtube.com/oraclegoldengate to see a complete recording of the labs we used at OpenWorld, plus more. Here are a couple of pictures from the Deep Dive into Oracle GoldenGate led by Chris Lawless from the Product Management team. Thanks to the GoldenGate Hands-on Lab team for putting on a great session!!! We will post more information about where you can find additional details on OpenWorld as they become public.

    Read the article

  • Late feedback

    - by Sveta Smirnova
    The MySQL Community team asked me to write about DevConf 2013 a few months ago. The conference was in June 2013, but I only remembered this promise of mine now: a month after my participation in MySQL Connect and the Expert Troubleshooting seminar (change the country to United Kingdom if you see a blank page). I think it is too late for feedback, but I still have a few thoughts which I want to record. DevConf (formerly PHPConf) has always been a place where I try out new topics. At first, because I know the audience there very well and they would be bored if I repeated a story I told last year, but also because it is much easier to get feedback in your own native language. But in recent years this habit seems to have started to change, and I presented an improved version of my 2012 MySQL Connect talk about MySQL backups. Of course, I also had a seminar with a unique topic, made for this conference for the first time: Troubleshooting MySQL Performance with EXPLAIN and Using Performance Schema to Troubleshoot MySQL. And these topics, improved, were presented at the expert seminar. It is interesting how my habits change and how one public speaking activity influences the next one. What is good about DevConf is that it forces you to create new ideas and present them really well, because the audience is not forgiving at all, so they catch everything you miss or did not prepare well enough. This can be bad if you want to give a marketing-style talk for free, but it allows you to present technical features in really good detail: all these sudden discussions really help. In 2013 Oracle had a booth at the conference and was represented by a bunch of people. Dmitry Lenev presented the topic "New features of replication in MySQL 5.6" and Victoria Reznichenko worked on the booth. What was new at the conference this year is greater interest in NoSQL, scaling and rapid development solutions. This, unfortunately, means less interest in MySQL than in earlier years. However, at the same time, the "Common" track was really a MySQL track: not only Oracle, but people from other companies presented on it.

    Read the article

  • Microsoft Lowers Cloud Barrier To Entry

    - by Herve Roggero
    Once in a while, the technology stack changes enough to create a disturbance in the IT industry. Microsoft did just that today and has officially closed the gap with its #1 competitor: Amazon. What is remarkable is that Microsoft is no longer an alternative to Amazon; it is becoming a clear leader in that space. Some of the new features include official support for durable Virtual Machines with high availability (cross-geographic replication), free WebSites to try Azure, MySQL database at no charge, a new distributed low-latency cache feature, Linux support, support for existing VPN hardware for seamless on-premise integration, a new partner ecosystem and much, much more. Amazon had an edge against Windows Azure in the IaaS (Infrastructure as a Service) space, until now. With the latest release from Microsoft Azure, the gap has been filled. In fact, it seems Amazon may now have a gap to fill… This is great news for everyone; it seems that cloud offerings are becoming more standardized among the more mature cloud providers, and the management stack and quality of service of each cloud provider is increasingly becoming the differentiator. With today's announcements, it is becoming clear that cloud providers are pushing hard to increase their service footprint and lower typical barriers to entry such as support for open-source operating systems, free trial offers, higher availability, faster deployment times and simpler enterprise integration.

    Read the article

  • What's the simplest way to provide a portable, locally running webservice server application?

    - by derFunk
    We have a bigger website running that offers a JsonRpc web service. For offline demonstration purposes I want to realize a portable, locally running webserver with a minimalistic feature replication of the live webservice, and bundle this together with Html files which do Ajax requests to it. This local server executable should have as few dependencies as possible. It's gonna be run and presented by non-devs on non-dev Windows machines, so I would prefer having a simple executable plus the service code - whereas the language doesn't matter as long as it is .NET, PHP or Java. I'll need a small database behind it, which will probably be Sqlite. It's important to say that for various reasons we cannot use the original web service code; we have to rewrite it from scratch for the local demo server, which is why I want to put minimal effort into the local server tech. An installer for distribution is not mandatory; it's okay to have a zip file with an executable in it which starts up the local webserver. What would you recommend for realizing these requirements? I've done some research already, but would love to hear your opinions and get some pointers!

    Read the article

  • Adaptive Execution Plans in the 12c database

    - by Liu Maclean
    12c R1 ????SQL??????- Adaptive Execution Plans ????????,???????optimizer ??????(runtime)???????????????, ????????????????????? SQL???????? ????????????, ?????????????????????????????????????????????????????????????adaptive plan ????????????????????????????????????,?????subplan???????????????????? ??????, ???????? ???????????????,?????????, ?????? ???????????????”???”????, ???????????????????buffer ???????  ????????????,?????,??????????????????? ???optimizer ?????????????????????????,?????????????????????????????????????????plan???? ??12C?????????????, ???????????????????,?????? ???????????? ????????????2???: Dynamic Plans????: ???????????????????????;??????,???optimizer??????????subplans??????????????, ???????????????????,?????????????? Reoptimization????: ?Dynamic Plans????,Reoptimization??????????????????????Reoptimization??,?????????????????????????,??reoptimization????? OPTIMIZER_ADAPTIVE_REPORTING_ONLY ???? report-only????????????????TRUE,?????????report-only????,???????????????,??????????????? Dynamic Plans ??????????????,????????????????????????, ?????????????,???????????,????????????????????????????????????????? ?????????????final plan??????????????default plan, ??final plan?default plan???????,????????????? subplan ???????????????,???????????????????????? ??????,???????statistics collector ?buffer???????????statistics collector?????????????????,???????????????????????????? ?????????????????????????????????????????,??????????,?????????????? ???????????,???????buffer???? ???????????????,?????????????????????????????,??????buffer,??????final plan? ????????,???????????????????????,????????????????? ?V$SQL??????IS_RESOLVED_DYNAMIC_PLAN??????????final plan???default plan? ??????dynamic plan ???????SQL PLAN directives?????? declare cursor PLAN_DIRECTIVE_IDS is select directive_id from DBA_SQL_PLAN_DIRECTIVES; begin for z in PLAN_DIRECTIVE_IDS loop DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(z.directive_id); end loop; end; / explain plan for select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id; select * from table(dbms_xplan.display()); Plan hash value: 1255158658 www.askmaclean.com ------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 4 | 128 | 7 (0)| 00:00:01 | | 1 | NESTED LOOPS | | | | | | | 2 | NESTED LOOPS | | 4 | 128 | 7 (0)| 00:00:01 | |* 3 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 (0)| 00:00:01 | |* 4 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK | 1 | | 0 (0)| 00:00:01 | | 5 | TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION | 1 | 20 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("O"."UNIT_PRICE"=15 AND "QUANTITY">1) 4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") alter session set events '10053 trace name context forever,level 1'; OR alter session set events 'trace[SQL_Plan_Directive] disk highest'; select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id; ---------------------------------------------------------------+-----------------------------------+ | Id | Operation | 
Name | Rows | Bytes | Cost | Time | ---------------------------------------------------------------+-----------------------------------+ | 0 | SELECT STATEMENT | | | | 7 | | | 1 | HASH JOIN | | 4 | 128 | 7 | 00:00:01 | | 2 | NESTED LOOPS | | | | | | | 3 | NESTED LOOPS | | 4 | 128 | 7 | 00:00:01 | | 4 | STATISTICS COLLECTOR | | | | | | | 5 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 | 00:00:01 | | 6 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK| 1 | | 0 | | | 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | | 8 | TABLE ACCESS FULL | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | ---------------------------------------------------------------+-----------------------------------+ Predicate Information: ---------------------- 1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") 5 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1)) 6 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") ===================================== SPD: BEGIN context at statement level ===================================== Stmt: ******* UNPARSED QUERY IS ******* SELECT /*+ OPT_ESTIMATE (@"SEL$1" JOIN ("P"@"SEL$1" "O"@"SEL$1") ROWS=13.000000 ) OPT_ESTIMATE (@"SEL$1" TABLE "O"@"SEL$1" ROWS=13.000000 ) */ "P"."PRODUCT_NAME" "PRODUCT_NAME" FROM "OE"."ORDER_ITEMS" "O","OE"."PRODUCT_INFORMATION" "P" WHERE "O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1 AND "P"."PRODUCT_ID"="O"."PRODUCT_ID" Objects referenced in the statement PRODUCT_INFORMATION[P] 92194, type = 1 ORDER_ITEMS[O] 92197, type = 1 Objects in the hash table Hash table Object 92197, type = 1, ownerid = 6573730143572393221: No Dynamic Sampling Directives for the object Hash table Object 92194, type = 1, ownerid = 17822962561575639002: No Dynamic Sampling Directives for the object Return code in qosdInitDirCtx: ENBLD =================================== SPD: END context at statement level =================================== ======================================= SPD: BEGIN context at query block level ======================================= Query Block SEL$1 (#0) Return code in qosdSetupDirCtx4QB: NOCTX ===================================== SPD: END context at query block level ===================================== SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Inserted felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: qosdCreateFindingSingTab retCode = CREATED, fid = 2896834833840853267 SPD: qosdCreateDirCmp retCode = CREATED, fid = 2896834833840853267 SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SKIP_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = 
INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Modified felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 5618517328604016300 SPD: Modified felem, fid=5618517328604016300, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 1142802697078608149 SPD: Modified felem, fid=1142802697078608149, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 2, objcnt = 2, obItr = 0, objid = 92194, objtyp = 1, vecsize = 0, obItr = 1, objid = 92197, objtyp = 1, vecsize = 0, fid = 1437680122701058051 SPD: Modified felem, fid=1437680122701058051, ftype = 1, freason = 2, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO select * from table(dbms_xplan.display_cursor(format=>'report')) ; ????report????adaptive plan Adaptive plan: ------------- This cursor has an adaptive plan, but adaptive plans are enabled for reporting mode only.  The plan that would be executed if adaptive plans were enabled is displayed below. 
------------------------------------------------------------------------------------------ | Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------ |   0 | SELECT STATEMENT   |                     |       |       |     7 (100)|          | |*  1 |  HASH JOIN         |                     |     4 |   128 |     7   (0)| 00:00:01 | |*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |     4 |    48 |     3   (0)| 00:00:01 | |   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------ SQL> select SQL_ID,IS_RESOLVED_DYNAMIC_PLAN,sql_text from v$SQL WHERE SQL_TEXT like '%MALCEAN%' and sql_text not like '%like%'; SQL_ID IS -------------------------- -- SQL_TEXT -------------------------------------------------------------------------------- 6ydj1bn1bng17 Y select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id ???? explain plan for ????default plan, ??????optimizer???final plan,??V$SQL.IS_RESOLVED_DYNAMIC_PLAN???Y,????????????? DBA_SQL_PLAN_DIRECTIVES?????????????SQL PLAN DIRECTIVES, ???12c? ???MMON?????DML ???column usage??????????,????SMON??? MMON????SGA??PLAN DIRECTIVES??? ?????DBMS_SPD.flush_sql_plan_directive???? select directive_id,type,reason from DBA_SQL_PLAN_DIRECTIVES / DIRECTIVE_ID TYPE REASON ----------------------------------- -------------------------------- ----------------------------- 10321283028317893030 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE 4757086536465754886 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE 16085268038103121260 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE SQL> set pages 9999 SQL> set lines 300 SQL> col state format a5 SQL> col subobject_name format a11 SQL> col col_name format a11 SQL> col object_name format a13 SQL> select d.directive_id, o.object_type, o.object_name, o.subobject_name col_name, d.type, d.state, d.reason 2 from dba_sql_plan_directives d, dba_sql_plan_dir_objects o 3 where d.DIRECTIVE_ID=o.DIRECTIVE_ID 4 and o.object_name in ('ORDER_ITEMS') 5 order by d.directive_id; DIRECTIVE_ID OBJECT_TYPE OBJECT_NAME COL_NAME TYPE STATE REASON ------------ ------------ ------------- ----------- -------------------------------- ----- ------------------------------------- --- 1.8156E+19 COLUMN ORDER_ITEMS UNIT_PRICE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1.8156E+19 TABLE ORDER_ITEMS DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1.8156E+19 COLUMN ORDER_ITEMS QUANTITY DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE DBA_SQL_PLAN_DIRECTIVES????? _BASE_OPT_DIRECTIVE ? _BASE_OPT_FINDING SELECT d.dir_own#, d.dir_id, d.f_id, decode(type, 1, 'DYNAMIC_SAMPLING', 'UNKNOWN'), decode(state, 1, 'NEW', 2, 'MISSING_STATS', 3, 'HAS_STATS', 4, 'CANDIDATE', 5, 'PERMANENT', 6, 'DISABLED', 'UNKNOWN'), decode(bitand(flags, 1), 1, 'YES', 'NO'), cast(d.created as timestamp), cast(d.last_modified as timestamp), -- Please see QOSD_DAYS_TO_UPDATE and QOSD_PLUS_SECONDS for more details -- about 6.5 cast(d.last_used as timestamp) - NUMTODSINTERVAL(6.5, 'day') FROM sys.opt_directive$ d ??dbms_spd??? SQL PLAN DIRECTIVES, SQL PLAN DIRECTIVES???retention ???53?: Package: DBMS_SPD This package provides subprograms for managing Sql Plan Directives(SPD). SPD are objects generated automatically by Oracle server. 
For example, if server detects that the single table cardinality estimated by optimizer is off from the actual number of rows returned when accessing the table, it will automatically create a directive to do dynamic sampling for the table. When any Sql statement referencing the table is compiled, optimizer will perform dynamic sampling for the table to get more accurate estimate. Notes: DBMSL_SPD is a invoker-rights package. The invoker requires ADMINISTER SQL MANAGEMENT OBJECT privilege for executing most of the subprograms of this package. Also the subprograms commit the current transaction (if any), perform the operation and commit it again. DBA view dba_sql_plan_directives shows all the directives created in the system and the view dba_sql_plan_dir_objects displays the objects that are included in the directives. -- Default value for SPD_RETENTION_WEEKS SPD_RETENTION_WEEKS_DEFAULT CONSTANT varchar2(4) := '53'; | STATE : NEW : Newly created directive. | : MISSING_STATS : The directive objects do not | have relevant stats. | : HAS_STATS : The objects have stats. | : PERMANENT : A permanent directive. Server | evaluated effectiveness and these | directives are useful. | | AUTO_DROP : YES : Directive will be dropped | automatically if not | used for SPD_RETENTION_WEEKS. | This is the default behavior. | NO : Directive will not be dropped | automatically. Procedure: flush_sql_plan_directive This procedure allows manually flushing the Sql Plan directives that are automatically recorded in SGA memory while executing sql statements. The information recorded in SGA are periodically flushed by oracle background processes. This procedure just provides a way to flush the information manually. ????”_optimizer_dynamic_plans”(enable dynamic plans)????????,???TRUE??DYNAMIC PLAN? ???FALSE???????????? ????,Dynamic Plan????????????Nested Loop?Hash Join???case ,????????Nested loop???????????HASH JOIN,?HASH JOIN????????????????? ????????subplan?????,???? pass?? ?join method???,?????STATISTICS COLLECTOR???cardinality?,???????HASH JOIN?????Nested Loop,????????????subplan?????access path; ???????Sales??????????????????,????HASH JOIN,??SUBPLAN??customers?????????;?????Nested Loop,???????cust_id?????Range Scan+Access by Rowid? Cardinality feedback Cardinality feedback????????11.2????,????????re-optimization???;  ???????????,Cardinality feedback?????????????????????????? ???????????????????,?????????????????,??????????Cardinality feedback????????????? ????????????????????????? ??????????????Cardinality feedback ??: ????????,???????????,??????????,????????????????selectivity ??? ????????????: ??????,?????????????????????????????????,??????????????????? ????????????????????????????????????????,?????????????????????????? ?????????,???????????????,?????????? ??????????Cardinality ????,??????join Cardinality ????????? Cardinality feedback???????cursor?,?Cursor???aged out????? 
SELECT /*+ gather_plan_statistics */ product_name FROM order_items o, product_information p WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id Plan hash value: 1553478007 ---------------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem | ---------------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 13 |00:00:00.01 | 24 | 20 | | | | |* 1 | HASH JOIN | | 1 | 4 | 13 |00:00:00.01 | 24 | 20 | 2061K| 2061K| 429K (0)| |* 2 | TABLE ACCESS FULL| ORDER_ITEMS | 1 | 4 | 13 |00:00:00.01 | 7 | 6 | | | | | 3 | TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 1 | 288 |00:00:00.01 | 17 | 14 | | | | ---------------------------------------------------------------------------------------------------------------------------------------- SELECT /*+ gather_plan_statistics */ product_name FROM order_items o, product_information p WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id Plan hash value: 1553478007 ------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 13 |00:00:00.01 | 24 | | | | |* 1 | HASH JOIN | | 1 | 13 | 13 |00:00:00.01 | 24 | 2061K| 2061K| 413K (0)| |* 2 | TABLE ACCESS FULL| ORDER_ITEMS | 1 | 13 | 13 |00:00:00.01 | 7 | | | | | 3 | TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 288 | 288 |00:00:00.01 | 17 | | | | ------------------------------------------------------------------------------------------------------------------------------- Note ----- - statistics feedback used for this statement SQL> select count(*) from v$SQL where SQL_ID='cz0hg2zkvd10y'; COUNT(*) ---------- 2 SQL>select sql_ID,USE_FEEDBACK_STATS FROM V$SQL_SHARED_CURSOR where USE_FEEDBACK_STATS ='Y'; SQL_ID U ------------- - cz0hg2zkvd10y Y ????????Cardinality feedback????,???????????????????????????,????????????order_items???????? ????2??????plan hash value??(??????????),?????2????child cursor??????gather_plan_statistics???actual : A-ROWS  estimate :E-ROWS????????? Automatic Re-optimization ???dynamic plan, Re-optimization???????????????  ?  ??????????????? ????????????????????????????????  ???????????,??????????????, ???????????????????? ???????????  Re-optimization??, ????????????????????? Re-optimization????dynamic plan??????????  dynamic plan????????????????????, ???????????????????? ????,??????????join order ??????????????,?????????????join order????? ??????,????????Re-optimization, ??Re-optimization ??????????????????? ?Oracle database 12c?,join statistics?????????????????????,??????????????????????Re-optimization???????????adaptive cursor sharing????? ????????????????,???????????? ????? ???????statistics collectors ????????????????????Re-optimization??????2?????????????,???????????????? ??????????????Re-optimization?????,?????????????????????? ???v$SQL??????IS_REOPTIMIZABLE?????????????????????Re-optimization,??????????Re-optimization???,?????Re-optimization ,???????reporting????? 
IS_REOPTIMIZABLE VARCHAR2(1) This columns shows whether the next execution matching this child cursor will trigger a reoptimization. The values are:   Y: If the next execution will trigger a reoptimization R: If the child cursor contains reoptimization information, but will not trigger reoptimization because the cursor was compiled in reporting mode N: If the child cursor has no reoptimization information ??1: select plan_table_output from table (dbms_xplan.display_cursor('gwf99gfnm0t7g',NULL,'ALLSTATS LAST')); SQL_ID  gwf99gfnm0t7g, child number 0 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 1906736282 ------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation             | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | ------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT      |                     |      1 |        |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   1 |  NESTED LOOPS         |                     |      1 |      1 |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   2 |   MERGE JOIN CARTESIAN|                     |      1 |      4 |   9135 |00:00:00.02 |      34 |     15 |       |       |          | |*  3 |    TABLE ACCESS FULL  | PRODUCT_INFORMATION |      1 |      1 |     87 |00:00:00.01 |      33 |     14 |       |       |          | |   4 |    BUFFER SORT        |                     |     87 |    105 |   9135 |00:00:00.01 |       1 |      1 |  4096 |  4096 | 4096  (0)| |   5 |     INDEX FULL SCAN   | ORDER_PK            |      1 |    105 |    105 |00:00:00.01 |       1 |      1 |       |       |          | |*  6 |   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |   9135 |      1 |    269 |00:00:00.01 |    1302 |      3 |       |       |          | ------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID") SQL_ID  gwf99gfnm0t7g, child number 1 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 35479787 -------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation              | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | -------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT       |                     |      1 |        |    269 
|00:00:00.01 |      63 |      3 |       |       |          | |   1 |  NESTED LOOPS          |                     |      1 |    269 |    269 |00:00:00.01 |      63 |      3 |       |       |          | |*  2 |   HASH JOIN            |                     |      1 |    313 |    269 |00:00:00.01 |      42 |      3 |  1321K|  1321K| 1234K (0)| |*  3 |    TABLE ACCESS FULL   | PRODUCT_INFORMATION |      1 |     87 |     87 |00:00:00.01 |      16 |      0 |       |       |          | |   4 |    INDEX FAST FULL SCAN| ORDER_ITEMS_UK      |      1 |    665 |    665 |00:00:00.01 |      26 |      3 |       |       |          | |*  5 |   INDEX UNIQUE SCAN    | ORDER_PK            |    269 |      1 |    269 |00:00:00.01 |      21 |      0 |       |       |          | -------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    5 - access("O"."ORDER_ID"="ORDER_ID") Note -----    - statistics feedback used for this statement    SQL> select IS_REOPTIMIZABLE,child_number FROM V$SQL  A where A.SQL_ID='gwf99gfnm0t7g'; IS CHILD_NUMBER -- ------------ Y             0 N             1    1* select child_number,other_xml From v$SQL_PLAN  where SQL_ID='gwf99gfnm0t7g' and other_xml is not nul SQL> / CHILD_NUMBER OTHER_XML ------------ --------------------------------------------------------------------------------            1 <other_xml><info type="cardinality_feedback">yes</info><info type="db_version">1              2.1.0.1</info><info type="parse_schema"><![CDATA["OE"]]></info><info type="plan_              hash">35479787</info><info type="plan_hash_2">3382491761</info><outline_data><hi              nt><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint><hint><![CDATA[OPTIMIZER_FEATUR              ES_ENABLE('12.1.0.1')]]></hint><hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint><h              int><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></              hint><hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]>              </hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint><hint><![CDATA[FULL(@"SEL$F5BB7              4E1" "P"@"SEL$2")]]></hint><hint><![CDATA[INDEX_FFS(@"SEL$F5BB74E1" "O"@"SEL$2"              ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint><hint><![CDATA[I              NDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA[              LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$2" "O"@"SEL$1")]]></hint><hint><![C              DATA[USE_HASH(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint><hint><![CDATA[USE_NL(@"SEL$              F5BB74E1" "O"@"SEL$1")]]></hint></outline_data></other_xml>            0 <other_xml><info type="db_version">12.1.0.1</info><info type="parse_schema"><![C              DATA["OE"]]></info><info type="plan_hash">1906736282</info><info type="plan_hash              _2">2579473118</info><outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]>              </hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint><hint><![CD              ATA[DB_VERSION('12.1.0.1')]]></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><![CD              ATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint><hint><![CDATA[MERGE(@"SEL$2")]]></hi              nt><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]>     
         </hint><hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint><hint><![CDATA[              INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA              [INDEX(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PROD              UCT_ID"))]]></hint><hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$1              " "O"@"SEL$2")]]></hint><hint><![CDATA[USE_MERGE_CARTESIAN(@"SEL$F5BB74E1" "O"@"              SEL$1")]]></hint><hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint></o              utline_data></other_xml> ??2: SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 0 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 -------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | -------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | 14 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 8 | 29 |00:00:00.01 | 17 | 14 | -------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OWNER OBJECT_NAME COL_NAME OBJECT TYPE STATE REASON ----------------------- ----- ------------- ----------- ------ ---------------- ----- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; ELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 1 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 ----------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | ----------------------------------------------------------------------------------------- | 0 | SELECT 
STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 29 | 29 |00:00:00.01 | 17 | ----------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) Note ----- - cardinality feedback used for this statement SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' b74nw722wjvy3 1 select /*+g N ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' SELECT /*+gather_plan_statistics*/ CUST_EMAIL FROM CUSTOMERS WHERE CUST_STATE_PROVINCE='MA' AND COUNTRY_ID='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID 3tk6hj3nkcs2u, child number 0 ------------------------------------- Select /*+gather_plan_statistics*/ cust_email From customers Where cust_state_province='MA' And country_id='US' Plan hash value: 1683234692 ------------------------------------------------------------------------------- |Id | Operation | Name | Starts|E-Rows|A-Rows| A-Time |Buffers| ------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01| 16 | |*1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 2| 2 |00:00:00.01| 16 | ----------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='MA' AND "COUNTRY_ID"='US')) Note ----- - dynamic sampling used for this statement (level=2) - 1 Sql Plan Directive used for this statement EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OW OBJECT_NA COL_NAME OBJECT TYPE STATE REASON ------------------- -- --------- ---------- ------- --------------- ------------- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE
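As a follow-up to the directive output above, the snippet below is a rough sketch (not from the original article) of how those SQL plan directives could be managed with DBMS_SPD once they exist. The directive id is the one reported above, the calls use positional arguments, and the attribute and preference names ('AUTO_DROP', 'SPD_RETENTION_WEEKS') are taken from the 12.1 documentation — verify them on your release before relying on them.

-- Keep the directive created for SH.CUSTOMERS from being aged out automatically.
BEGIN
  DBMS_SPD.ALTER_SQL_PLAN_DIRECTIVE(1484026771529551585, 'AUTO_DROP', 'NO');
END;
/

-- Or drop it once it is no longer wanted.
BEGIN
  DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(1484026771529551585);
END;
/

-- Adjust how long unused directives are retained (in weeks; '53' is the default).
BEGIN
  DBMS_SPD.SET_PREFS('SPD_RETENTION_WEEKS', '53');
END;
/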

    Read the article

  • CodePlex Daily Summary for Tuesday, March 08, 2011

    CodePlex Daily Summary for Tuesday, March 08, 2011Popular ReleasesAjax Minifier: AjaxMin 4.14: Fixed issue with CSS3 @media and @page parsing. Added support for more properties in the MSBuild task.DotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0 ? Completely update DJME to DJME2, enhance user experience ,more beautiful and more interactively visit DJME project home to lean more about DJME http://www.dotnetage.com/sites/home/djme.html A new widget engine has came! Faster and easiler. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. BlogML supports added that allows you import/export your blog data to/from dotnetage publishi...Master Data Services Manager: stable 1.0.3: Update 2011-03-07 : bug fixes added external configuration File : configuration.config added TreeView Display of model (still in dev) http://img96.imageshack.us/img96/5067/screenshot073l.jpg added Connection Parameters (username, domain, password, stored encrypted in configuration file) http://img402.imageshack.us/img402/5350/screenshot072qc.jpgSharePoint Content Inventory: Release 1.1: Release 1.1Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.4 Beta: - Moved the core of the PopupMenu class to the new PopupMenuBase class. - Renamed the MenuTriggerElement class to MenuTriggerRelationship. - Renamed the ApplicationMenus property to MenuTriggers. - Renamed the _allowPinnedState property to AllowPinnedState. - Renamed the _restoreFocusOnClose property to RestoreFocusOnClose. - Renamed the SubmenuLaunchKey property to FlyoutKey. - Renamed the AutoMapTriggerElementToSelectableItem property to UseSelectedItemAsTriggerElement. - Renamed the AutoM...Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB and MongoDB. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder t...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added fullscreen for the popup and popupformIronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The releases contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.IIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization toolMinemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. 
Updated mcmap to support new biome format.CRM 2011 OData Query Designer: CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. This tool allows you to build OData queries by selecting filter criteria, select attributes and order by attributes. The tool also allows you to Execute the query and view the ATOM and JSON data returned. The look and feel of this component will improve and new functionality will be added in the near future so please provide feedback on your experience. Latest Update 8th March ...Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window Fixed a critical bug in the open file dialog Removed the resize button Some UI changes 3D camera movement is now more intuitive (Trackball rotation) When an error occurs on the clicker it will attempt to focus AutoLoLYAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742) Official Discussion Thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5 Rev. #4661 - Added "Copy" function to forum administration -- Now instead of having to manually re-enter all the access masks, etc, you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New Setting to Enable/Disable Last Unread posts links Rev. #4641 - Added Arabic Language t...Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010This is a bug fix release. Change logFixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed Fixed bug where "Export as Snippet" was failing in non-english locales Fixed bug where opening a new .snippet file would fail in non-english localesChiave File Encryption: Chiave 1.0: Final Relase for Chave 1.0 Stable: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support to Windows XP! Change Log from 0.9.2 to 1.0: ==================== Added: > Added Icon Overlay for Windows 7 Taskbar Icon. >Added Thumbnail Toolbar buttons to make the navigation easier...DirectQ: Release 1.8.7 (RC1): Release candidate 1 of 1.8.7GoogleTrail: TrailMap Beta 1: Trailmap beta 1 release Now we have updated custom map builder. Now we have complete gpx file editor. Now we have elevation data update service for any gpx file. (currently supports only google only).ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page’s images. 
This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. The binaries are also available via NuGet: AspNetSprites-Core AspNetSprites-WebFormsControl AspNetSprites-MvcAndRazorHelper It includes the foll...New ProjectsCaxangáV2: Ainda jogando Caxangá.. ou nãoCollection Membership Wizard for SCCM: This application assists SMS 2003 and SCCM 2007 administrators with the everyday task of adding lists of computers or users by name to collections in their hierarchy. The goal of this application is to be very easy to use and to have good performance.CoseaDirectos: Aplicación asp.net, silverlight, sql server 2008 para reclutamiento de personal DoctrineIgniter: Usage of Doctrine 2 ORM and Codeigniter 2 PHP platform.Field Finder: DB Administration tool. Easy search in the SQL server database structure. Provides predefined templates for SQL Scripts and VB.NET source code. Search in database Structure and database Data. First Project Of Skyline's Member: multipoint mouse on C#FloodWarn: A series of server and client apps for monitoring flood levels on the Snoqualmie River in King County, Washington.Fluent Json: Json generator and parser written in C#. Besides basic json support, this library enables you to fluently map your custom types to the json data format.Google Handy Translator: Handy dictionary which use Google Translator and also local database . It will activate by SHIFT+F10 and translate what we have in the clipboard.Grid Model: Extensions to the Task Parallel Library to support distributing tasks across multiple computers participating in a heterogeneous computing grid.HiShow: An ASP.NET website allows everybody too have an overview of many hitech-products: Images, price, and reviewIdeaBlade DevForce/Caliburn Application Framework: The IdeaBlade DevForce/Caliburn Application Framework makes it easy to get started with developing data driven Rich Internet Applications in Silverlight and WPF desktop applications with Caliburn Micro as the MVVM framework and DevForce 2010 as the data access layer. Implement DAO By IBatis and NHibernate: ????????,??IBatis?NHibernate???????,?????????,???????????。iTunes.Scrubber: iTunes.Scrubber is an Open Source library that allows people to easily update metadata for their iTunes libraries. It's developed in C#, and interfaces with various web services, including imdb, and thetvdbLaptop Rental System: This project is for Laptop Rental Software project. It will lasts for 2 weeks. Xuan ChienMathLib.NET: Aims to provide a fully managed implementation of core MATLAB(R) functions, designed to be used from dynamic languages such as IronPython and providing an API matching the MATLAB(R) API, to ease the transition from analysis to implementation.Mobile Application Development Framework: A general purpose Windows CE/Mobile Application Development FrameworkMSMSpec: MSMSpec autogenerates MSTest tests corresponding to MSpec tests. Useful where MSTest integration is desired / required / forced. Also enables using VS test tooling for MSpec tests. Requires VS2010 and .NET 4.0.Multi-touch GIS API for TableTops: This API facilitates the creation of multi-touch GIS applications for digital Tabletops. It is built on top of ESRI API for WPF 4.0 and it uses Windows 7 touch events. 
It also uses some gestures from the Gesture Toolkit.NewLineReplacer: Replace letter fast and easy in great textfilesOpenGLMaciejLis: Project is a game prototype created in C++ and OpenGLPlanetQuest: Application to poll the current extra-solar planet count at http://planetquest.jpl.nasa.gov/ Prime number exporter: Prime number exporter calculates prime numbers using the "Sieve of Eratosthenes" and exports a Textfile. QPAPrintLib: Print every document by its recommended programmRegistry Editor for Windows Mobile: A registry editor for Windows Mobile 6.x and 5.x based devices.Remote desktop on mobile phones: Mobile Remote Desktop enables you to connect to your computer from your mobile devices using bluetooth connectivity. Once connected, Mobile Remote Desktop gives you mouse and keyboard control of your computer from mobile.RestUpProxy: RestUpProxy is a .Net REST client designed to make using RESTful APIs a snap.SharePoint 2010 List Based 404 Handler: A SharePoint WSP that customises the 404 handler for a web application, allowing you to define how to handle missing page requests via a SharePoint list. This is the SharePoint 2010 version.SharePoint Content Inventory: SPContentInventory generates a complete content invetory for SharePoint 2007/2010 sites. The content inventory is exported as an Excel file providing information about all sites, lists and libraries.Shop: open source ecommerce solution for umbraco.SilverVision: Computer vision algorithms implementation in SilverlightSpCop: The aim of this project is to offer a utility similar to fxcop but for wsp packages. At the end it should contain enough rules to ensure good practices and allow automated audits or checks at build time for example.Syscable: Sistema para control de mensualidades para una empresa que proporcione servicios de television por cableSyscart: Aplicacion web para manejo de inventario en bodegas u otros establecimientos. Tranquility.Net (Wcf App Server): Allows developers to host multiple isolated Wcf services within a single Windows service. You'll no longer have to use IIS to host all your services. It's developed in C# .NET. Your services with a smile.USMC Knowledge for WP7: USMC Knowledge is an information application to provide active duty Marines, as well as those with an interest in the USMC with basic knowledge. It's developed in Silverlight for Windows Phone 7.Utility4Net: some base class such as xml,string,data,secerity,web,office... etc.. under Microsoft.NET Framework 4.0 by c# Part of the code r collected from the Internet WPF ImageUitls: WPF Image Utils is a set of image related applications that use WPF. Currently the project focuses on a picture viewerZen4Sync, Orchestration and Test Load platform for SQL Server Merge Replication: This project is all about providing a orchestration and test load platform able to validate any SQL Server Merge Replication based Architecture.Zimms: Collaboration Site for friends, a code depot, and scratch pad??tbl??????: ????tbl?????????。 ??tbl??????,??????????,?????、excel???。 ?????????????,????,????????!

    Read the article

  • Outlook 2011 Contact Import from CSV with Notes containing new lines / cr / lf

    - by Paul Hargreaves
I'm trying to import several thousand contacts into Outlook 2011 for Mac. Everything is working well except the Notes field, as I cannot figure out how to get new lines / carriage returns into it. There is no documentation for the exact format that Outlook supports. After searching the web and experimenting I have tried:
- Creating a single contact in Outlook with Notes containing several lines of text, exporting the contact to a CSV, deleting the contact in Outlook, then re-importing. All lines in Notes merge together :-/
- Following tips I found, such as wrapping fields that contain new lines in quotes, e.g. http://creativyst.com/Doc/Articles/CSV/CSV01.htm (search for line-break).
- Switching the CSV format from DOS to Unix, and experimenting with manually injected control characters such as ^M, etc.
I would include an example export/import, but unfortunately the line breaks involved do not survive well in a SU code block.
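For reference, the quoted form being attempted is roughly the RFC 4180 layout below — the Notes value is wrapped in double quotes and the line breaks live inside the quotes. The column names are only illustrative, and whether Outlook 2011 for Mac honours embedded line breaks at all is exactly the open question here.

Name,Email,Notes
"Jane Example","jane@example.com","First line of notes
second line of notes
third line"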

    Read the article

  • Networking with Windows 7

    - by Alix Axel
I have several computers connected to my home wireless network and I want to make use of some of the features of Windows 7 for home networks, but I can't seem to get them working: How do I keep files and folders in sync between specific computers? I'm not talking about Live Sync. How can I stream to Windows Media Player from another computer? I would appreciate it if someone could provide some links / solutions that address my needs. Thanks! To anyone who thinks this question is a duplicate and wants to close it, please bear in mind the following: I'm not looking for additional software. I know I can use Live Sync, Dropbox and so on, but I'm asking this: how do I configure Windows 7 to sync files within my home network - no Internet required! This has something to do with shared folders and offline files in Windows 7, but I can't get it to work. PS: Please merge with this question: http://superuser.com/questions/139763/networking-with-windows-7

    Read the article

  • SGE: invoking qmake raises "critical error: can't resolve group"

    - by Pierre
I'm new to SGE and I'm trying to run qmake on our very new cluster with the following simple Makefile:

merge.txt: job1.txt job2.txt job3.txt ...
	cat $^ > $@

job1.txt:
	sleep 1
	echo "Hello From " $@ > $@
	sleep 1

job2.txt:
	sleep 2
	echo "Hello From " $@ > $@
	sleep 2

job3.txt:
	(...)

The following command raises an error:

qmake -l arch=lx24-amd64 -cwd -v PATH -- -j 4
sleep 1 dynamic mode sleep 2 dynamic mode sleep 3 dynamic mode sleep 4 dynamic mode
critical error: can't resolve group
qmake: *** [job3.txt] Error 1
critical error: can't resolve group
qmake: *** [job2.txt] Error 1
critical error: can't resolve group
qmake: *** [job1.txt] Error 1
critical error: can't resolve group

What's wrong?

    Read the article

  • Samsung 530U3C clean system installation

    - by user1292810
I have bought a Samsung 530U3C notebook with i5-3317/12GB RAM/500GB HDD/24GB ExpressCache/Windows 8 preinstalled. I would like to install my own clean version of Win8, but I would like to preserve the recovery partition. I will probably sell this laptop in 1-1.5 years and I would like to be able to restore the settings to the factory ones. What is more, in the future I would like to install Ubuntu as well, but Windows goes first. At the moment the discs and partitions look like below (sorry for the non-English screen): http://i.stack.imgur.com/p4W6t.jpg
DISC 0:
500MB - recovery partition
300MB - system partition
442.22GB - Drive C: - Boot, Page File, Crash Dump, Primary partition
21.64GB - recovery partition
1.00GB - recovery partition
DISC 1:
9.00GB - primary partition
13.24GB - primary partition
Which partitions can I format and which of them should I preserve? Can I format and merge the partitions from that 24GB ExpressCache and install Windows on it?

    Read the article

  • Perforce Restore From Multiple Checkpoint Files?

    - by AJ
    Hi all, I am working with a very large (~11GB) checkpoint file and trying to do a -jr (journal restore) operation. About half way through the file, I'm hitting an entry which causes an error to occur. I'm unable to come up with a conventional way to print, edit, and save changes to the offending line. So right now I'm splitting the checkpoint into files of 500k lines each...up to 47 files and counting. My question is, once I have these separate files: Can I run journal restore on each one separately to check for errors? Once fixed, is it necessary to merge them back together again to do my full journal restore? Any other ideas on how to tackle this problem would be appreciated. Thanks in advance, -aj
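A hedged sketch of the split-and-patch approach described above, assuming a GNU userland (sed/split available); the line number, file names and substitution are placeholders, not values from the post:

# Inspect the offending line without opening the whole 11 GB file in an editor
sed -n '5123456p' checkpoint.ckp

# Edit that single line in place (keep a backup first)
cp checkpoint.ckp checkpoint.ckp.bak
sed -i '5123456s/bad-value/good-value/' checkpoint.ckp

# If splitting is still preferred, split on line boundaries...
split -l 500000 checkpoint.ckp part_

# ...and, if a single file turns out to be required, stitch the pieces back together
# (cat keeps the part_aa, part_ab, ... ordering) before the usual restore:
cat part_* > checkpoint.fixed.ckp
p4d -r $P4ROOT -jr checkpoint.fixed.ckp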

    Read the article

  • mac osX file recovery

    - by Daniel
I thought that all operating systems would merge folder content when folders are moved to the same location. Imagine my surprise when that didn't happen, and now I have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted" they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see whether they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; I'm not sure what the point of that is. I've used recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • Address Label Printing without Windows Address Book

    - by Jim Fell
In the past I've maintained my address book using the built-in Address Book utility that came with Windows. Once each year, I would import my Address Book (.WAB) file into Outlook 2003/XP. (I don't use Outlook for email.) Then I would use the Mail Merge feature in Word 2003/XP to make and print address labels on standard Avery label sheets to simplify addressing my Christmas cards. Since I'm now using Windows 7, and the familiar Address Book utility is no longer available, how can I print my address labels? I have both Windows Address Book (.WAB) and Comma-Separated Values (.CSV) files that contain my address book data. So, I guess I need to know two things: Which program or utility (preferably free) can I use to print my address labels? And how do I import my address data into that program? If it helps, I am already a user of Gmail and Google Drive. Thanks.

    Read the article

  • How to recover bitlocker encrypted partition that is now 'unallocated'/'free space'?

    - by Atishay Jain
My hard drive had 5 partitions (including one BitLocker-encrypted partition of some 4-5 GB). When I used Disk Management I could view 2 partitions (24.4 GB and 8.94 GB) shown in green and labeled as empty space. I wanted to merge them, so I used MiniTool Partition Wizard for the purpose. I don't know what that software did, but all I was left with was 2 partitions and lots of green free space. I recovered 2 partitions using EaseUS Partition Master, but the BitLocker-encrypted partition cannot be found by it (or by MiniTool Partition Recovery). Now Disk Management shows 2 free-space areas of 28.36 GB and 8.94 GB respectively. Here is a screenshot: http://s14.postimage.org/4tvij041t/Screen_Shot003.jpg Please tell me a way to recover the BitLocker-encrypted partition that is showing as free space in Disk Management. P.S. - It contains very important data.

    Read the article

  • Merging partitions and removing windows of one partition

    - by SmartLemon
I have two partitions on my laptop; I created a new one when installing Windows 8 Pro because Windows 7 wouldn't upgrade to it for some odd reason. The main partition, which is 631 GB, has Windows 7 installed on it, and the second partition is 49.9 GB and has Windows 8 installed on it. What I need to do is remove Windows 7 from the other one (yeah, it's dual booting), make it boot straight into Windows 8 without showing the dual-boot screen, and also merge the two drives together. The only problem is, I have no idea how to do this. Please don't use complete layman's terms; I am a software developer, so I know at least a bit about computers. Here's Disk Management so you can see how it's set out.

    Read the article

  • Software for mosaicing video frames into a panorama

    - by Eikern
I have some video footage I've shot using a dolly with the camera rotated 90 degrees to the right, which gives me a sideways tracking shot of a background. Does some kind of software exist with which I can create a single image from the video footage? The result I want is one single image of the entire shot. I guess I could export every Nth frame and use Photoshop (or any other type of panorama software) to merge the images together, but dedicated software would make this easier. Thanks.
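A minimal sketch of the "export every Nth frame" route mentioned above, assuming ffmpeg is available (the frame interval and file names are placeholders); the resulting stills can then be fed to any panorama/stitching tool:

# Keep one frame out of every 15 from the tracking shot
mkdir -p frames
ffmpeg -i dolly_shot.mov -vf "select=not(mod(n\,15))" -vsync vfr frames/frame_%04d.png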

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >