Search Results

Search found 318 results on 13 pages for 'quest'.


  • How to pass a PHP array parameter to a JavaScript function?

    - by Son of Man
    index.php <script type="text/javascript" src="javascript.js"> </script> <?php $movies = array("Bloodsport", "Kickboxer", "Cyborg", "Timecop", "Universal Soldier", "In Hell", "The Quest"); ?> <input type="submit" value="Test Javascript" onclick="showMovies(<?php echo $movies; ?>);" /> javascript.js function showMovies(movies) { alert(movies.length); return false; } I am new to programming, so I'm having a hard time fixing this one, which is obviously simple for you guys. When I hit the submit button it says that the array size is 1, when I think it should be 7. How could this be? Please tell me what to do. Thanks in advance.
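
    Why this fails, and a minimal hedged fix: echo on a PHP array prints the literal word "Array", so the handler ends up calling showMovies(Array), i.e. the JavaScript Array constructor, whose .length property is 1, which is exactly the alert you see. Encoding the array as JSON (json_encode, available since PHP 5.2) emits a real JavaScript array literal instead; note the single-quoted attribute, since json_encode produces double quotes:

        <input type="submit" value="Test Javascript"
               onclick='showMovies(<?php echo json_encode($movies); ?>);' />

    With that change, alert(movies.length) reports 7.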

    Read the article

  • Linux: How to print all the files with the same prefix after searching for them?

    - by Alyx
    I need to search through a directory which contains many subdirectories, each of which contains files. The files are named like question1234_01, where 1234 are random digits and the suffix _01 is the number of messages that share the prefix, meaning they are part of the same continuing thread. find . -name 'quest*' | cut -d_ -f1 | awk '{print $1}' | uniq -c | sort -n example output: 1 quest1234 10 quest1523 This searches for all the files, then sorts them in order. What I want to do is print all the files which have the most occurrences; in my example, the one with 10 matches. So it should only output quest1523_01 - 11
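
    A hedged refinement of the pipeline above, assuming names like quest1234_01: adding a sort before uniq -c makes the counts reliable even when matching files are not adjacent in find's output, and a second find then lists every file sharing the winning prefix.

        # prefix ends up as a path like ./subdir/quest1523 (highest count wins)
        prefix=$(find . -name 'quest*' | cut -d_ -f1 | sort | uniq -c | sort -n | tail -n 1 | awk '{print $2}')
        # print all messages in that thread, e.g. quest1523_01 .. quest1523_11
        find . -path "${prefix}_*"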

    Read the article

  • How to keep a floating div centered on window resize (jQuery/CSS)

    - by Jimbo
    Is there a way (without binding to the window.resize event) to force a floating DIV to re-center itself when the browser window is resized? To help explain, I imagine the pseudocode would look something like: div.left = 50% - (div.width / 2) div.top = 50% - (div.height / 2) UPDATE My query having been answered below, I wanted to post the final outcome of my quest - a jQuery extension method allowing you to center any block element - hope it helps someone else too. jQuery.fn.center = function() { var container = $(window); var top = -this.height() / 2; var left = -this.width() / 2; return this.css('position', 'absolute').css({ 'margin-left': left + 'px', 'margin-top': top + 'px', 'left': '50%', 'top': '50%' }); } Usage: $('#mydiv').center();

    Read the article

  • Insert multiple attributes into an HTML tag using JavaScript, not between tags

    - by WilliamK
    From a previous quest we asked about inserting a new attribute into an HTML tag, and the code below does the job nicely, but how should it be coded to add multiple attributes, for example changing... <body bgcolor="#DDDDDD"> to... <body bgcolor="#DDDDDD" topmargin="0" leftmargin="0"> The code that works for a single attribute is... document.getElementsByTagName("body")[0].setAttribute("id", "something"); How to modify this for inserting multiple attributes?
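
    A hedged sketch: setAttribute sets a single attribute per call, so the straightforward answer is one call per attribute; the small helper below (setAttrs is a hypothetical name, not a DOM API) just loops over name/value pairs.

        // hypothetical helper: apply every name/value pair to an element
        function setAttrs(el, attrs) {
          for (var name in attrs) {
            if (attrs.hasOwnProperty(name)) {
              el.setAttribute(name, attrs[name]);
            }
          }
        }
        setAttrs(document.getElementsByTagName("body")[0],
                 { bgcolor: "#DDDDDD", topmargin: "0", leftmargin: "0" });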

    Read the article

  • Game Design Dilemma

    - by Chris Williams
    I'm working on a 2d tilemapped RPG. I've actually made a fair amount of progress, but I'm at a point where I need to make a UI decision. I have the overland world completely mapped out, and I have several towns and special areas. I'm on the fence about how to integrate the two. Scenario 1: I have one ginormous map, where everything is the same scale. This means you can walk in and out of towns without having to load or wait and transition in any way. With everything the same scale, movement costs the same no matter where you are (in terms of time/turns/energy/hunger/whatever/etc...)  The potential downside is that it could take quite a long time to get anywhere on foot. Scenario 2: I have an overland map, a set of town maps, overland tactical maps, dungeon maps & special area maps. The overland map is at a different scale than the other maps. This means that time/turns/energy/hunger/whatever/etc is calculated at a different rate than on the other maps, which have a 1:1 scale. When entering a town, dungeon or special area, or having a random encounter, you would effectively zoom in from the overland scale to the tactical scale. When you are done with combat, or exit a dungeon or town, it would zoom back out to the overland map. The downside is that at the zoomed-out scale, the overland map isn't all that big (comparatively) and you can traverse it fairly quickly (in real time, not game-world time). Options: 1) Go with Scenario 1, as is. 2) Go with Scenario 1 and introduce a slightly speedier version of overland travel, such as a horse. 3) Go with Scenario 1 and introduce "instant" travel, via portals or some kind of "click the big map" mechanism. This would only work with places you've already been, or somehow unlocked (perhaps via a quest.) 4) Go with Scenario 2, as is.   Thoughts, opinions, suggestions?  Feedback appreciated.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest for a fast-performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, will lead you down the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem found -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Verify what the code does along the way and whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code, and thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform in your application the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the parts of the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships a profiler. For example for SQL Server, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can also take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch on server-side paging on the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g.
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does. Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run (see the sketch at the end of this article). Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
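
To make the .ToList() item above concrete, here is a hedged C# sketch; it assumes only what the article itself shows, an O/R mapper exposing metaData.Customers as IQueryable<Customer>.

    // Runs the filter in memory: .ToList() fetches ALL customers first,
    // because the rest of the query is composed over IEnumerable<T>.
    var inMemory = from c in metaData.Customers.ToList()
                   where c.Country == "Norway"
                   select c;

    // Runs the filter in the database: the query stays IQueryable<T>,
    // so the provider translates the WHERE clause into SQL.
    var inDatabase = from c in metaData.Customers
                     where c.Country == "Norway"
                     select c;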

    Read the article

  • Java Spotlight Episode 139: Mark Heckler and José Pereda on JES-based Energy Monitoring @MkHeck @JPeredaDnr

    - by Roger Brinkley
    Interview with Mark Heckler and José Pereda on using JavaSE Embedded with the Java Embedded Suite on a Raspberry Pi along with a JavaFX client to monitor an energy production system, and their JavaOne Tutorial - Java Embedded EXTREME MASHUPS: Building self-powering sensor nets for the IoT. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News Java Virtual Developer Day Session Videos Available JavaFX Maven Plugin 2.0 Released JavaFX Scene Builder 1.1 build b28 FXForm 2 release 0.2.2 OpenJDK8/Zero cross compile build for Foundation model HSAIL-based GPU offload: the Quest for Java Performance Begins Progress on Moving to Gradle Java EE 7 Launch Keynote Replay Java EE 7 Technical Breakouts Replay Java EE 7 support in NetBeans 7.3.1 Java EE 7 support in Eclipse 4.3 Java Magazine - May/June Events Jul 16-19, Uberconf, Denver, USA Jul 22-24, JavaOne Shanghai, China Jul 29-31, JVM Language Summit, Santa Clara Sep 11-12, JavaZone, Oslo, Norway Sep 19-20, Strange Loop, St. Louis Sep 22-26, JavaOne San Francisco 2013, USA Feature Interview Mark Heckler is an Oracle Corporation Java/Middleware/Core Tech Engineer with development experience in numerous environments. He has worked for and with key players in the manufacturing, emerging markets, retail, medical, telecom, and financial industries to develop and deliver critical capabilities on time and on budget. Currently, he works primarily with large government customers using Java throughout the stack and across the enterprise. He also participates in open-source development at every opportunity, being a JFXtras project committer and developer of DialogFX, MonologFX, and various other projects. When Mark isn't working with Java, he enjoys writing about his experiences at the Java Jungle website (https://blogs.oracle.com/javajungle/) and on Twitter (@MkHeck). José Pereda is a Structural Engineer who has been working in the School of Engineers at the University of Valladolid in Spain for more than 15 years; his passion is applying programming to solve real problems. Being involved with Java since 1999, José shares his time between JavaFX and the Embedded world, developing commercial applications and open source projects (https://github.com/jperedadnr), and blogging (http://jperedadnr.blogspot.com.es/) or tweeting (@JPeredaDnr) about both. What’s Cool AquaFX 0.1 - Mac OS X skin for JavaFX by Claudine Zillmann DromblerFX adds a docking framework Part 2 of Gerrit’s taming the Nashorn for writing JavaFX apps in JavaScript Tool from mihosoft called JSelect for quickly switching JDKs Apache Maven Javadoc Plugin 2.9.1 Released Proposal: Java Concurrency Stress tests (jcstress) Slide-free Code-driven session at SV JUG JavaOne approvals/rejects gone out

    Read the article

  • Global Cache CR Requested But Current Block Received

    - by Liu Maclean
    ????????«MINSCN?Cache Fusion Read Consistent» ????,???????????? ??????????????????: SQL> select * from V$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select count(*) from gv$instance; COUNT(*) ---------- 2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com ?11gR2 2??RAC??????????status???XG,????Xcurrent block???INSTANCE 2?hold?,?????INSTANCE 1?????????,?????: SQL> select * from test; ID ---------- 1 2 SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from test; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 89233 1 89233 1 SQL> alter system flush buffer_cache; System altered. INSTANCE 1 Session A: SQL> update test set id=id+1 where id=1; 1 row updated. INSTANCE 1 Session B: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1755287 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump gc_elements 255; Statement processed. SQL> oradebug tracefile_name; /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_19111.trc GLOBAL CACHE ELEMENT DUMP (address: 0xa4ff3080): id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233) lock: X rls: 0x0 acq: 0x0 latch: 3 flags: 0x20 fair: 0 recovery: 0 fpin: 'kdswh11: kdst_fetch' bscn: 0x0.146e20 bctx: (nil) write: 0 scan: 0x0 lcp: (nil) lnk: [NULL] lch: [0xa9f6a6f8,0xa9f6a6f8] seq: 32 hist: 58 145:0 118 66 144:0 192 352 197 48 121 113 424 180 58 LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT: flg: 0x02000001 lflg: 0x1 state: XCURRENT tsn: 0 tsh: 2 addr: 0xa9f6a5c8 obj: 76896 cls: DATA bscn: 0x0.1ac898 BH (0xa9f6a5c8) file#: 1 rdba: 0x00415c91 (1/89233) class: 1 ba: 0xa9e56000 set: 5 pool: 3 bsz: 8192 bsi: 0 sflg: 3 pwc: 0,15 dbwrid: 0 obj: 76896 objn: 76896 tsn: 0 afn: 1 hint: f hash: [0x91f4e970,0xbae9d5b8] lru: [0x91f58848,0xa9f6a828] lru-flags: debug_dump obj-flags: object_ckpt_list ckptq: [0x9df6d1d8,0xa9f6a740] fileq: [0xa2ece670,0xbdf4ed68] objq: [0xb4964e00,0xb4964e00] objaq: [0xb4964de0,0xb4964de0] st: XCURRENT md: NULL fpin: 'kdswh11: kdst_fetch' tch: 2 le: 0xa4ff3080 flags: buffer_dirty redo_since_read LRBA: [0x19.5671.0] LSCN: [0x0.1ac898] HSCN: [0x0.1ac898] HSUB: [1] buffer tsn: 0 rdba: 0x00415c91 (1/89233) scn: 0x0000.001ac898 seq: 0x01 flg: 0x00 tail: 0xc8980601 frmt: 0x02 chkval: 0x0000 type: 0x06=trans data ??????block: (1/89233)?GLOBAL CACHE ELEMENT DUMP?LOCK????X ??XG , ??????Current Block????Instance??modify???,????????????? ????Instance 2 ????: Instance 2 Session C: SQL> update test set id=id+1 where id=2; 1 row updated. Instance 2 Session D: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1756658 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump gc_elements 255; Statement processed. 
SQL> oradebug tracefile_name; /s01/orabase/diag/rdbms/vprod/VPROD2/trace/VPROD2_ora_13038.trc GLOBAL CACHE ELEMENT DUMP (address: 0x89fb25a0): id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233) lock: XG rls: 0x0 acq: 0x0 latch: 3 flags: 0x20 fair: 0 recovery: 0 fpin: 'kduwh01: kdusru' bscn: 0x0.1acdf3 bctx: (nil) write: 0 scan: 0x0 lcp: (nil) lnk: [NULL] lch: [0x96f4cf80,0x96f4cf80] seq: 61 hist: 324 21 143:0 19 16 352 329 144:6 14 7 352 197 LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT: flg: 0x0a000001 state: XCURRENT tsn: 0 tsh: 1 addr: 0x96f4ce50 obj: 76896 cls: DATA bscn: 0x0.1acdf6 BH (0x96f4ce50) file#: 1 rdba: 0x00415c91 (1/89233) class: 1 ba: 0x96bd4000 set: 5 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15 dbwrid: 0 obj: 76896 objn: 76896 tsn: 0 afn: 1 hint: f hash: [0x96ee1fe8,0xbae9d5b8] lru: [0x96f4d0b0,0x96f4cdc0] obj-flags: object_ckpt_list ckptq: [0xbdf519b8,0x96f4d5a8] fileq: [0xbdf519d8,0xbdf519d8] objq: [0xb4a47b90,0xb4a47b90] objaq: [0x96f4d0e8,0xb4a47b70] st: XCURRENT md: NULL fpin: 'kduwh01: kdusru' tch: 1 le: 0x89fb25a0 flags: buffer_dirty redo_since_read remote_transfered LRBA: [0x11.9e18.0] LSCN: [0x0.1acdf6] HSCN: [0x0.1acdf6] HSUB: [1] buffer tsn: 0 rdba: 0x00415c91 (1/89233) scn: 0x0000.001acdf6 seq: 0x01 flg: 0x00 tail: 0xcdf60601 frmt: 0x02 chkval: 0x0000 type: 0x06=trans data GCS CLIENT 0x89fb2618,6 resp[(nil),0x15c91.1] pkey 76896.0 grant 2 cvt 0 mdrole 0x42 st 0x100 lst 0x20 GRANTQ rl G0 master 1 owner 2 sid 0 remote[(nil),0] hist 0x94121c601163423c history 0x3c.0x4.0xd.0xb.0x1.0xc.0x7.0x9.0x14.0x1. cflag 0x0 sender 1 flags 0x0 replay# 0 abast (nil).x0.1 dbmap (nil) disk: 0x0000.00000000 write request: 0x0000.00000000 pi scn: 0x0000.00000000 sq[(nil),(nil)] msgseq 0x1 updseq 0x0 reqids[6,0,0] infop (nil) lockseq x2b8 pkey 76896.0 hv 93 [stat 0x0, 1->1, wm 32768, RMno 0, reminc 18, dom 0] kjga st 0x4, step 0.0.0, cinc 20, rmno 6, flags 0x0 lb 0, hb 0, myb 15250, drmb 15250, apifrz 0 ?Instance 2??????block: (1/89233)? GLOBAL CACHE ELEMENT Lock Convert?lock: XG ????GC_ELEMENTS DUMP???XCUR Cache Fusion?,???????X$ VIEW,??? X$LE X$KJBR X$KJBL, ???X$ VIEW???????????????????: INSTANCE 2 Session D: SELECT * FROM x$le WHERE le_addr IN (SELECT le_addr FROM x$bh WHERE obj IN (SELECT data_object_id FROM dba_objects WHERE owner = 'SYS' AND object_name = 'TEST') AND class = 1 AND state != 3); ADDR INDX INST_ID LE_ADDR LE_ID1 LE_ID2 ---------------- ---------- ---------- ---------------- ---------- ---------- LE_RLS LE_ACQ LE_FLAGS LE_MODE LE_WRITE LE_LOCAL LE_RECOVERY ---------- ---------- ---------- ---------- ---------- ---------- ----------- LE_BLKS LE_TIME LE_KJBL ---------- ---------- ---------------- 00007F94CA14CF60 7003 2 0000000089FB25A0 89233 1 0 0 32 2 0 1 0 1 0 0000000089FB2618 PCM Resource NAME?[ID1][ID2],[BL]???, ID1?ID2 ??blockno? fileno????, ??????????GC_elements dump?? id1: 0x15c91 id2: 0×1 pkey: OBJ#76896 block: (1/89233)?? ,?  kjblname ? 
kjbrname ??”[0x15c91][0x1],[BL]” ??: INSTANCE 2 Session D: SQL> set linesize 80 pagesize 1400 SQL> SELECT * 2 FROM x$kjbl l 3 WHERE l.kjblname LIKE '%[0x15c91][0x1],[BL]%'; ADDR INDX INST_ID KJBLLOCKP KJBLGRANT KJBLREQUE ---------------- ---------- ---------- ---------------- --------- --------- KJBLROLE KJBLRESP KJBLNAME ---------- ---------------- ------------------------------ KJBLNAME2 KJBLQUEUE ------------------------------ ---------- KJBLLOCKST KJBLWRITING ---------------------------------------------------------------- ----------- KJBLREQWRITE KJBLOWNER KJBLMASTER KJBLBLOCKED KJBLBLOCKER KJBLSID KJBLRDOMID ------------ ---------- ---------- ----------- ----------- ---------- ---------- KJBLPKEY ---------- 00007F94CA22A288 451 2 0000000089FB2618 KJUSEREX KJUSERNL 0 00 [0x15c91][0x1],[BL][ext 0x0,0x 89233,1,BL 0 GRANTED 0 0 1 0 0 0 0 0 76896 SQL> SELECT r.* FROM x$kjbr r WHERE r.kjbrname LIKE '%[0x15c91][0x1],[BL]%'; no rows selected Instance 1 session B: SQL> SELECT r.* FROM x$kjbr r WHERE r.kjbrname LIKE '%[0x15c91][0x1],[BL]%'; ADDR INDX INST_ID KJBRRESP KJBRGRANT KJBRNCVL ---------------- ---------- ---------- ---------------- --------- --------- KJBRROLE KJBRNAME KJBRMASTER KJBRGRANTQ ---------- ------------------------------ ---------- ---------------- KJBRCVTQ KJBRWRITER KJBRSID KJBRRDOMID KJBRPKEY ---------------- ---------------- ---------- ---------- ---------- 00007F801ACA68F8 1355 1 00000000B5A62AE0 KJUSEREX KJUSERNL 0 [0x15c91][0x1],[BL][ext 0x0,0x 0 00000000B48BB330 00 00 0 0 76896 ??????Instance 1???block: (1/89233),??????Instance 2 build cr block ????Instance 1, ?????????? ????? Instance 1? Foreground Process ? Instance 2?LMS??????RAC  TRACE: Instance 2: [oracle@vrh2 ~]$ ps -ef|grep ora_lms|grep -v grep oracle 23364 1 0 Apr29 ? 00:33:15 ora_lms0_VPROD2 SQL> oradebug setospid 23364 Oracle pid: 13, Unix process pid: 23364, image: [email protected] (LMS0) SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high; Statement processed. SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/vprod/VPROD2/trace/VPROD2_lms0_23364.trc Instance 1 session B : SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1756658 3 1756661 3 1755287 Instance 1 session A : SQL> alter session set events '10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high'; Session altered. SQL> select * from test; ID ---------- 2 2 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1761520 ?x$BH?????,???????Instance 1???build??CR block,????? 
TRACE ??: Instance 1 foreground Process: PARSING IN CURSOR #140336527348792 len=18 dep=0 uid=0 oct=3 lid=0 tim=1335939136125254 hv=1689401402 ad='b1a4c828' sqlid='c99yw1xkb4f1u' select * from test END OF STMT PARSE #140336527348792:c=2999,e=2860,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1357081020,tim=1335939136125253 EXEC #140336527348792:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939136125373 WAIT #140336527348792: nam='SQL*Net message to client' ela= 6 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335939136125420 *** 2012-05-02 02:12:16.125 kclscrs: req=0 block=1/89233 2012-05-02 02:12:16.125574 : kjbcro[0x15c91.1 76896.0][4] *** 2012-05-02 02:12:16.125 kclscrs: req=0 typ=nowait-abort *** 2012-05-02 02:12:16.125 kclscrs: bid=1:3:1:0:f:1e:0:0:10:0:0:0:1:2:4:1:20:0:0:0:c3:49:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:1c:0:4d:26:a3:52:0:0:0:0:c7:c:ca:62:c3:49:0:0:0:0:1:0:14:8e:47:76:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:99:ed:0:0:0:0:0:0:10:0:0:0 2012-05-02 02:12:16.125718 : kjbcro[0x15c91.1 76896.0][4] 2012-05-02 02:12:16.125751 : GSIPC:GMBQ: buff 0xba0ee018, queue 0xbb79a7b8, pool 0x60013fa0, freeq 0, nxt 0xbb79a7b8, prv 0xbb79a7b8 2012-05-02 02:12:16.125780 : kjbsentscn[0x0.1ae0f0][to 2] 2012-05-02 02:12:16.125806 : GSIPC:SENDM: send msg 0xba0ee088 dest x20001 seq 177740 type 36 tkts xff0000 mlen x1680198 2012-05-02 02:12:16.125918 : kjbmscr(0x15c91.1)reqid=0x8(req 0xa4ff30f8)(rinst 1)hldr 2(infosz 200)(lseq x2b8) 2012-05-02 02:12:16.126959 : GSIPC:KSXPCB: msg 0xba0ee088 status 30, type 36, dest 2, rcvr 1 *** 2012-05-02 02:12:16.127 kclwcrs: wait=0 tm=1233 *** 2012-05-02 02:12:16.127 kclwcrs: got 1 blocks from ksxprcv WAIT #140336527348792: nam='gc cr block 2-way' ela= 1233 p1=1 p2=89233 p3=1 obj#=76896 tim=1335939136127199 2012-05-02 02:12:16.127272 : kjbcrcomplete[0x15c91.1 76896.0][0] 2012-05-02 02:12:16.127309 : kjbrcvdscn[0x0.1ae0f0][from 2][idx 2012-05-02 02:12:16.127329 : kjbrcvdscn[no bscn <= rscn 0x0.1ae0f0][from 2] ???? kjbcro[0x15c91.1 76896.0][4] kjbsentscn[0x0.1ae0f0][to 2] ?Instance 2??SCN=1ae0f0=1761520? block: (1/89233),???’gc cr block 2-way’ ??,?????????CR block? 
Instance 2 LMS TRACE 2012-05-02 02:12:15.634057 : GSIPC:RCVD: ksxp msg 0x7f16e1598588 sndr 1 seq 0.177740 type 36 tkts 0 2012-05-02 02:12:15.634094 : GSIPC:RCVD: watq msg 0x7f16e1598588 sndr 1, seq 177740, type 36, tkts 0 2012-05-02 02:12:15.634108 : GSIPC:TKT: collect msg 0x7f16e1598588 from 1 for rcvr -1, tickets 0 2012-05-02 02:12:15.634162 : kjbrcvdscn[0x0.1ae0f0][from 1][idx 2012-05-02 02:12:15.634186 : kjbrcvdscn[no bscn1, wm 32768, RMno 0, reminc 18, dom 0] kjga st 0x4, step 0.0.0, cinc 20, rmno 6, flags 0x0 lb 0, hb 0, myb 15250, drmb 15250, apifrz 0 GCS CLIENT END 2012-05-02 02:12:15.635211 : kjbdowncvt[0x15c91.1 76896.0][1][options x0] 2012-05-02 02:12:15.635230 : GSIPC:AMBUF: rcv buff 0x7f16e1c56420, pool rcvbuf, rqlen 1103 2012-05-02 02:12:15.635308 : GSIPC:GPBMSG: new bmsg 0x7f16e1c56490 mb 0x7f16e1c56420 msg 0x7f16e1c564b0 mlen 152 dest x101 flushsz -1 2012-05-02 02:12:15.635334 : kjbmslset(0x15c91.1)) seq 0x4 reqid=0x6 (shadow 0xb48bb330.xb)(rsn 2)(mas@1) 2012-05-02 02:12:15.635355 : GSIPC:SPBMSG: send bmsg 0x7f16e1c56490 blen 184 msg 0x7f16e1c564b0 mtype 33 attr|dest x30101 bsz|fsz x1ffff 2012-05-02 02:12:15.635377 : GSIPC:SNDQ: enq msg 0x7f16e1c56490, type 65521 seq 118669, inst 1, receiver 1, queued 1 *** 2012-05-02 02:12:15.635 kclccctx: cleanup copy 0x7f16e1d94798 2012-05-02 02:12:15.635479 : [kjmpmsgi:compl][type 36][msg 0x7f16e1598588][seq 177740.0][qtime 0][ptime 1257] 2012-05-02 02:12:15.635511 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 1, dcx 0xbc516778, inst 1, rcvr 1 qlen 0 1 2012-05-02 02:12:15.635536 : GSIPC:BSEND: no batch1 msg 0x7f16e1c56490 type 65521 len 184 dest (1:1) 2012-05-02 02:12:15.635557 : kjbsentscn[0x0.1ae0f1][to 1] 2012-05-02 02:12:15.635578 : GSIPC:SENDM: send msg 0x7f16e1c56490 dest x10001 seq 118669 type 65521 tkts x10002 mlen xb800e8 WAIT #0: nam='gcs remote message' ela= 180 waittime=1 poll=0 event=0 obj#=0 tim=1335939135635819 2012-05-02 02:12:15.635853 : GSIPC:RCVD: ksxp msg 0x7f16e167e0b0 sndr 1 seq 0.177741 type 32 tkts 0 2012-05-02 02:12:15.635875 : GSIPC:RCVD: watq msg 0x7f16e167e0b0 sndr 1, seq 177741, type 32, tkts 0 2012-05-02 02:12:15.636012 : GSIPC:TKT: collect msg 0x7f16e167e0b0 from 1 for rcvr -1, tickets 0 2012-05-02 02:12:15.636040 : kjbrcvdscn[0x0.1ae0f1][from 1][idx 2012-05-02 02:12:15.636060 : kjbrcvdscn[no bscn <= rscn 0x0.1ae0f1][from 1] 2012-05-02 02:12:15.636082 : GSIPC:TKT: dest (1:1) rtkt not acked 1  unassigned bufs 0  tkts 0  newbufs 0 2012-05-02 02:12:15.636102 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:12:15.636125 : [kjmxmpm][type 32][seq 0.177741][msg 0x7f16e167e0b0][from 1] 2012-05-02 02:12:15.636146 : kjbmpocr(0xb0.6)seq 0x1,reqid=0x23a,(client 0x9fff7b58,0x1)(from 1)(lseq xdf0) 2????LMS????????? ??gcs remote message GSIPC ????SCN=[0x0.1ae0f0] block=1/89233???,??BAST kjbmpbast(0x15c91.1),?? block=1/89233??????? ??fairness??(?11.2.0.3???_fairness_threshold=2),?current block?KCL: F156: fairness downconvert,?Xcurrent DownConvert? Scurrent: Instance 2: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 2 0 3 1756658 ??Instance 2 LMS ?cr block??? kjbmslset(0x15c91.1)) ????SEND QUEUE GSIPC:SNDQ: enq msg 0x7f16e1c56490? ???????Instance 1???? block: (1/89233)??? 
??????: Instance 2: SQL> select CURRENT_RESULTS,LIGHT_WORKS from v$cr_block_server; CURRENT_RESULTS LIGHT_WORKS --------------- ----------- 29273 437 Instance 1 session A: SQL> SQL> select * from test; ID ---------- 2 2 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1761942 3 1761932 1 0 3 1761520 Instance 2: SQL> select CURRENT_RESULTS,LIGHT_WORKS from v$cr_block_server; CURRENT_RESULTS LIGHT_WORKS --------------- ----------- 29274 437 select * from test END OF STMT PARSE #140336529675592:c=0,e=337,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939668940051 EXEC #140336529675592:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939668940204 WAIT #140336529675592: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335939668940348 *** 2012-05-02 02:21:08.940 kclscrs: req=0 block=1/89233 2012-05-02 02:21:08.940676 : kjbcro[0x15c91.1 76896.0][5] *** 2012-05-02 02:21:08.940 kclscrs: req=0 typ=nowait-abort *** 2012-05-02 02:21:08.940 kclscrs: bid=1:3:1:0:f:21:0:0:10:0:0:0:1:2:4:1:20:0:0:0:c3:49:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:1f:0:4d:26:a3:52:0:0:0:0:c7:c:ca:62:c3:49:0:0:0:0:1:0:17:8e:47:76:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:99:ed:0:0:0:0:0:0:10:0:0:0 2012-05-02 02:21:08.940799 : kjbcro[0x15c91.1 76896.0][5] 2012-05-02 02:21:08.940833 : GSIPC:GMBQ: buff 0xba0ee018, queue 0xbb79a7b8, pool 0x60013fa0, freeq 0, nxt 0xbb79a7b8, prv 0xbb79a7b8 2012-05-02 02:21:08.940859 : kjbsentscn[0x0.1ae28c][to 2] 2012-05-02 02:21:08.940870 : GSIPC:SENDM: send msg 0xba0ee088 dest x20001 seq 177810 type 36 tkts xff0000 mlen x1680198 2012-05-02 02:21:08.940976 : kjbmscr(0x15c91.1)reqid=0xa(req 0xa4ff30f8)(rinst 1)hldr 2(infosz 200)(lseq x2b8) 2012-05-02 02:21:08.941314 : GSIPC:KSXPCB: msg 0xba0ee088 status 30, type 36, dest 2, rcvr 1 *** 2012-05-02 02:21:08.941 kclwcrs: wait=0 tm=707 *** 2012-05-02 02:21:08.941 kclwcrs: got 1 blocks from ksxprcv 2012-05-02 02:21:08.941818 : kjbassume[0x15c91.1][sender 2][mymode x1][myrole x0][srole x0][flgs x0][spiscn 0x0.0][swscn 0x0.0] 2012-05-02 02:21:08.941852 : kjbrcvdscn[0x0.1ae28d][from 2][idx 2012-05-02 02:21:08.941871 : kjbrcvdscn[no bscn ??????????????SCN=[0x0.1ae28c]=1761932 Version?CR block, ????receive????Xcurrent Block??SCN=1ae28d=1761933,Instance 1???Xcurrent Block???build????????SCN=1761932?CR BLOCK, ????????Current block,?????????'gc current block 2-way'? 
?????????????request current block,?????kjbcro;?????Instance 2?LMS???????Current Block: Instance 2 LMS trace: 2012-05-02 02:21:08.448743 : GSIPC:RCVD: ksxp msg 0x7f16e14a4398 sndr 1 seq 0.177810 type 36 tkts 0 2012-05-02 02:21:08.448778 : GSIPC:RCVD: watq msg 0x7f16e14a4398 sndr 1, seq 177810, type 36, tkts 0 2012-05-02 02:21:08.448798 : GSIPC:TKT: collect msg 0x7f16e14a4398 from 1 for rcvr -1, tickets 0 2012-05-02 02:21:08.448816 : kjbrcvdscn[0x0.1ae28c][from 1][idx 2012-05-02 02:21:08.448834 : kjbrcvdscn[no bscn <= rscn 0x0.1ae28c][from 1] 2012-05-02 02:21:08.448857 : GSIPC:TKT: dest (1:1) rtkt not acked 2  unassigned bufs 0  tkts 0  newbufs 0 2012-05-02 02:21:08.448875 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:21:08.448970 : [kjmxmpm][type 36][seq 0.177810][msg 0x7f16e14a4398][from 1] 2012-05-02 02:21:08.448993 : kjbmpbast(0x15c91.1) reqid=0x6 (req 0xa4ff30f8)(reqinst 1)(reqid 10)(flags x0) *** 2012-05-02 02:21:08.449 kclcrrf: req=48054 block=1/89233 *** 2012-05-02 02:21:08.449 kcl_compress_block: compressed: 6 free space: 7680 2012-05-02 02:21:08.449085 : kjbsentscn[0x0.1ae28d][to 1] 2012-05-02 02:21:08.449142 : kjbdeliver[to 1][0xa4ff30f8][10][current 1] 2012-05-02 02:21:08.449164 : kjbmssch(reqlock 0xa4ff30f8,10)(to 1)(bsz 344) 2012-05-02 02:21:08.449183 : GSIPC:AMBUF: rcv buff 0x7f16e18bcec8, pool rcvbuf, rqlen 1102 *** 2012-05-02 02:21:08.449 kclccctx: cleanup copy 0x7f16e1d94838 *** 2012-05-02 02:21:08.449 kcltouched: touch seconds 3271 *** 2012-05-02 02:21:08.449 kclgrantlk: req=48054 2012-05-02 02:21:08.449347 : [kjmpmsgi:compl][type 36][msg 0x7f16e14a4398][seq 177810.0][qtime 0][ptime 1119] WAIT #0: nam='gcs remote message' ela= 568 waittime=1 poll=0 event=0 obj#=0 tim=1335939668449962 2012-05-02 02:21:08.450001 : GSIPC:RCVD: ksxp msg 0x7f16e1bb22a0 sndr 1 seq 0.177811 type 32 tkts 0 2012-05-02 02:21:08.450024 : GSIPC:RCVD: watq msg 0x7f16e1bb22a0 sndr 1, seq 177811, type 32, tkts 0 2012-05-02 02:21:08.450043 : GSIPC:TKT: collect msg 0x7f16e1bb22a0 from 1 for rcvr -1, tickets 0 2012-05-02 02:21:08.450060 : kjbrcvdscn[0x0.1ae28e][from 1][idx 2012-05-02 02:21:08.450078 : kjbrcvdscn[no bscn <= rscn 0x0.1ae28e][from 1] 2012-05-02 02:21:08.450097 : GSIPC:TKT: dest (1:1) rtkt not acked 3  unassigned bufs 0  tkts 0  newbufs 0 2012-05-02 02:21:08.450116 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:21:08.450136 : [kjmxmpm][type 32][seq 0.177811][msg 0x7f16e1bb22a0][from 1] 2012-05-02 02:21:08.450155 : kjbmpocr(0xb0.6)seq 0x1,reqid=0x23e,(client 0x9fff7b58,0x1)(from 1)(lseq xdf4) ???Instance 2??LMS???,???build cr block,??????Instance 1?????Current Block??????Instance 2??v$cr_block_server??????LIGHT_WORKS?????current block transfer??????,??????? CR server? Light Work Rule(Light Work Rule?8i Cr Server?????????,?Remote LMS?? build CR????????,resource holder?LMS???????block,????CR build If creating the consistent read version block involves too much work (such as reading blocks from disk), then the holder sends the block to the requestor, and the requestor completes the CR fabrication. The holder maintains a fairness counter of CR requests. After the fairness threshold is reached, the holder downgrades it to lock mode.)? ??????? CR Request ????Current Block?? ???:??????class?block,CR server??????? ??undo block?? undo header block?CR quest, LMS????Current Block, ????? ???? ??????? block cleanout? CR  Version??????? ???????? data blocks, ??????? CR quest  & CR received?(???????Light Work Rule,LMS"??"), ??Current Block??DownConvert???S lock,??LMS???????ship??current version?block? 
??????? , ?????? ,???????DownConvert?????”_fairness_threshold“???200,????Xcurrent Block?????Scurrent, ????LMS?????Current Version?Data Block: SQL> show parameter fair NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ _fairness_threshold integer 200 Instance 1: SQL> update test set id=id+1 where id=4; 1 row updated. Instance 2: SQL> update test set id=id+1 where id=2; 1 row updated. SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1838166 ?Instance 1? ????,? ??instance 2? v$cr_block_server?? instance 1 SQL> select * from test; ID ---------- 10 3 instance 2: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1883707 8 0 SQL> select * from test; ID ---------- 10 3 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1883707 8 0 ................... SQL> / STATE CR_SCN_BAS ---------- ---------- 2 0 3 1883707 3 1883695 repeat cr request on Instance 1 SQL> / STATE CR_SCN_BAS ---------- ---------- 8 0 3 1883707 3 1883695 ??????_fairness_threshold????????,?????200 ????????CR serve??Downgrade?lock, ????data block? CR Request????Receive? Current Block?
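
A hedged aid for reading the x$bh output above: the STATE values seen in this post follow the commonly cited mapping 0=FREE, 1=XCUR, 2=SCUR, 3=CR, 8=PI; x$ structures are undocumented, so verify on your own version. A summary such as the following makes the Xcurrent-to-Scurrent downconvert visible at a glance:

    -- summarize the cached copies of block 1/89233 by decoded state
    SELECT DECODE(state, 0,'FREE', 1,'XCUR', 2,'SCUR', 3,'CR', 8,'PI', TO_CHAR(state)) AS buffer_state,
           COUNT(*) AS buffers
    FROM x$bh
    WHERE file# = 1 AND dbablk = 89233 AND state != 0
    GROUP BY state;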

    Read the article

  • Paying by Cash

    - by David Dorf
    I'll grant you paying by cash in the context of stores isn't particularly interesting, but in my quest to try new payment methods I decided to pay by cash at an online store. Using a credit card means I have to hoist myself off the couch, find the card, and enter all those digits. Google Checkout certainly makes that task easier by storing my credit card information, but what happens to all those people that don't have a credit card? What about the ones that are afraid to use credit cards over the internet? There are three main options for cash payment, not all of which are accepted by every merchant. The most popular is PayPal. The issue I have with them is that returns and disputes have to be handled with PayPal, not the merchant. I once used PayPal at a shady online store and lost my money. Yeah, my bad, but they wouldn't help me at all. PayPal was purchased by eBay in 2002. BillMeLater is best for larger purchases, because at checkout they actually run a credit check to make sure you're creditworthy. Assuming you are, they pay the merchant on your behalf and mail you a bill, which you'd better pay quickly or interest will start to accrue. That's nice for the merchant because they get paid right away, and I presume there are no charge-backs. BillMeLater was purchased by eBay in 2008. Last night I tried eBillMe for the first time. After checkout, they send you a bill via email and expect you to pay either via online banking (they provide the instructions to set everything up) or at walk-in locations across the US (typically banks). The process was quick and easy. The merchant doesn't ship the product until the bill is paid, so there's a day or two of delay. For the merchant there are no charge-backs, and the fees are less than credit cards'. For the shopper, they provide buyer protection similar to that offered by credit cards, and 1% cashback on purchases. Once the online bill-pay is set up, it's easy to reuse in the future. Seems like a win-win for merchants and shoppers.

    Read the article

  • On Writing Blogs

    - by Tony Davis
    Why are so many blogs about IT so difficult to read? Over at SQLServerCentral.com, we do a special subscription-only newsletter called Database Weekly. Every other week, it is my turn to look through all the blogs, news and events that might be of relevance to people working with databases. We provide the title, with the link, and a short abstract of what you can expect to read. It is a popular service with close to a million subscribers. You might think that this is a happy and fascinating task. Sometimes, yes. If a blog comes to the point quickly, and says something both interesting and original, then it has our immediate attention. If it backs up what it says with supporting material, then it is more-or-less home and dry, featured in DBW's list. If it also takes trouble over the formatting and presentation, maybe with an illustration or two and any code well-formatted, then we are agog with joy and it is marked as a must-visit destination in our blog roll. More often, however, a task that should be fun becomes a routine chore, and the effort of trawling so many badly-written blogs is enough to make any conscientious Health & Safety officer whistle through their teeth at the risk to the editor's spiritual and psychological well-being. And yet, frustratingly, most blogs could be improved very easily. There is, I believe, a simple formula for a successful blog. First, choose a single topic that is reasonably fresh and interesting. Second, get to the point quickly; explain in the first paragraph exactly what the blog is about, and then stay on topic. In writing the first paragraph, you must picture yourself as a pilot, hearing the smooth roar of the engines as your plane gracefully takes to the air. Too often, however, the accompanying sound is that of the engine stuttering before the plane veers off the runway into a field, and a wheel falls off. The author meanders around the topic without getting to the point, and takes frequent off-radar diversions to talk about themselves, or the weather, or which friends have recently tagged them. This might work if you're J.D. Salinger, or James Joyce, but it doesn't help a technical blog. Sometimes, the writing is so convoluted that we are entirely defeated in our quest to shoehorn its meaning into a simple summary sentence. Finally, write simply, in plain English, and in a conversational way such that you can read it out loud, and sound natural. That's it! If you could also avoid any references to The Matrix then this is a bonus but is purely personal preference. Cheers, Tony.

    Read the article

• Book Review: Inside Windows Communication Foundation by Justin Smith

    - by Sam Abraham
In gearing up for a new major project, I have taken it upon myself to research and review various aspects of our Microsoft stack of choice, seeking new creative ways to leverage in our upcoming state-of-the-art solution, projected to position us ahead of the competition. While I am a big supporter of search engines and online articles as a quick and usually reliable source of information, I have opted in my investigative quest to actually "hit the books". I have also made it a habit to provide quick reviews of material I go over, hoping this can be of help to someone looking for references others have had success with. I started a few months ago by investigating better ways of implementing, profiling and troubleshooting SQL Server 2008. My reference of choice was Itzik Ben-Gan et al's "Inside Microsoft SQL Server 2008" series. While it has been a month since my last book review, that by no means meant I had been sitting idle. It has been pretty challenging to balance research with the continuous flow of projects and deadlines, all while balancing that with my family duties which, of course, always come first. In this post, I will be providing a quick review of my latest reading: Inside Windows Communication Foundation by Justin Smith. This book has been on my reading list for a very long time and I am proud to have finally tackled it. Justin's book presents great coverage of WCF internals. His simple, concise and well-worded style has simplified the relatively complex internals of WCF and made them comprehensible. Justin opted to organize the book into three parts: an introduction to WCF, coverage of the Channel Layer, and a look at WCF internals at the ServiceModel layer. Part I introduces the concepts and makes the case for WCF while covering a simplified version of WCF's message patterns, endpoints and contracts. In Part II, Justin provides thorough coverage of the internals of Messages, Channels and Channel Managers. Part III concludes this nice read with coverage of Bindings, Contracts, Dispatchers and Clients. While one would not likely need to extend WCF at that low level of the API, an understanding of the inner workings of WCF is a must to avoid pitfalls mainly caused by misinformation or erroneous assumptions. Problems can quickly arise in high-traffic hosted solutions, but most can be easily avoided with some minimal time investment and education. My next goal is to take a closer look at WCF from the programmer's API perspective now that I have acquired a better understanding of its inner workings. Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group. Stay tuned for more… All the best, --Sam

    Read the article

• DBCC MEMUSAGE in 2005/8?

    - by steveh99999
I used to like using the undocumented command DBCC MEMUSAGE in SQL 2000 to see which tables were using space in the SQL data cache. In SQL 2005, this command is no longer present. Instead, a DMV – sys.dm_os_buffer_descriptors – can be used to display data cache contents, but this doesn't quite give you the same output as DBCC MEMUSAGE. I'm also aware that you can use Quest's Spotlight tool to view a summary of data cache contents. Using this post by Umachandar Jayachandran of Microsoft, I was able to create the following equivalent for SQL 2005/8. I've wrapped Umachandar's original query in a CTE to produce summary information:

;WITH memusage_CTE AS
(
    SELECT bd.database_id, bd.file_id, bd.page_id, bd.page_type,
           COALESCE(p1.object_id, p2.object_id) AS object_id,
           COALESCE(p1.index_id, p2.index_id) AS index_id,
           bd.row_count, bd.free_space_in_bytes,
           CONVERT(TINYINT, bd.is_modified) AS 'DirtyPage'
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.allocation_units AS au
      ON au.allocation_unit_id = bd.allocation_unit_id
    OUTER APPLY
    (
        SELECT TOP(1) p.object_id, p.index_id
        FROM sys.partitions AS p
        WHERE p.hobt_id = au.container_id AND au.type IN (1, 3)
    ) AS p1
    OUTER APPLY
    (
        SELECT TOP(1) p.object_id, p.index_id
        FROM sys.partitions AS p
        WHERE p.partition_id = au.container_id AND au.type = 2
    ) AS p2
    WHERE bd.database_id = DB_ID()
      AND bd.page_type IN ('DATA_PAGE', 'INDEX_PAGE')
)
SELECT TOP 20
       DB_NAME(database_id) AS 'Database',
       OBJECT_NAME(object_id, database_id) AS 'Table Name',
       index_id,
       COUNT(*) AS 'Pages in Cache',
       SUM(dirtyPage) AS 'Dirty Pages'
FROM memusage_CTE
GROUP BY database_id, object_id, index_id
ORDER BY COUNT(*) DESC

I'm not 100% happy with the results of the above query, however. I've noticed that on a busy BizTalk MessageBox database it will return information on pages that contain GHOST rows, i.e. where data has already been deleted but has yet to be cleaned up by a background process. I need to investigate further why the cache on this server apparently contains so much GHOST data… For more information on the background ghost cleanup process, see this article by Paul Randal. However, I think the results of this query should still be of interest to a DBA. I have another post to come shortly regarding an example I encountered where this information proved useful to me… I notice that in SQL 2008, sys.dm_os_buffer_descriptors gained an extra column – numa_node – and I'm interested to see how it is populated and how useful it can be on a NUMA-enabled system. I'm assuming that in theory you could use this column to help analyse how your tables are spread across a NUMA-enabled data cache?
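On the numa_node question, since the column is just another attribute of each buffer descriptor, a first look could be as simple as grouping on it. A rough sketch, assuming SQL 2008 or later (I haven't verified how the values are distributed on a real NUMA box, so treat the output with care):

-- Rough look at how the current database's cached pages spread across NUMA nodes
SELECT numa_node, COUNT(*) AS pages_in_cache
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID()
GROUP BY numa_node
ORDER BY numa_node;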

    Read the article

  • Manage Your Favorite Social Accounts in Chrome and Iron with Seesmic

    - by Asian Angel
Are you looking for a way to manage your Twitter, Facebook, Google Buzz, LinkedIn, and Foursquare accounts all in one place? Using the Seesmic Web App for Chrome and Iron you can access your favorite accounts and manage them in a single, simple-to-use interface. A feature that we loved from the start was the ability to access Twitter without creating a special Seesmic account. And in these days of multiple accounts who needs another one to complicate things up? All that you need to do is to sign in with your user name/e-mail along with your password. You do have to authorize access for Seesmic to connect with your account but the whole process (login & authorization) is handled in a single window instance. Now on to a quick look at some of the UI features… The sidebar allows you to add additional columns to the main interface, set your favorite location for Trends, and tie in additional social services as desired. You can also access additional options and controls in the upper right corner. When you are ready to start tweeting click in the blank at the top and enter your text, etc. in the convenient drop-down window that appears. Another nice perk is the ability to switch to a black and grey theme if the white is too bright for your needs. The Seesmic web app provides a simple-to-use, highly efficient way to manage your Twitter account and other favorite social services in a single tab interface. Seesmic [Chrome Web Store]

    Read the article

  • European e-government Action Plan all about interoperability

    - by trond-arne.undheim
Yesterday, the European Commission released its European eGovernment Action Plan for 2011-2015. The plan includes measures on providing deeper user empowerment, enhancing the Internal Market, improving the efficiency and effectiveness of public administrations, and putting in place pre-conditions for developing e-government. The Good - Defines interoperability very clearly. Calls interoperability "a pre-condition for cross-border eGovernment services" (a very strong formulation) and says interoperability "is supported by open specifications". - Uses the terminology "open specifications" which, let's face it, is pretty close to "open standards", the term the rest of the world would use. - Confirms that Member States are fully committed to the political priorities of the Malmö Declaration (which was all about open standards), including the very strong action: by 2013, all Member States will have incorporated the political priorities of the Malmö Declaration in their national strategies. Such tight Action Plan integration between Commission and Member State priorities has seldom been attempted before, particularly not in a field where European legal competence is virtually non-existent. What we see now is the subtle force of soft power rather than the rough force of regulation. In this case, it is the Member States who want Europe to take the lead. Very refreshing! Some quotes that show the commitment to interoperability and open specifications: "The emergence of innovative technologies such as 'service-oriented architectures' (SOA), or 'clouds' of services, together with more open specifications which allow for greater sharing, re-use and interoperability reinforce the ability of ICT to play a key role in this quest for efficiency in the public sector." (p.4) "Interoperability is supported through open specifications" (p.13) Section 2.4.1, "Open Specifications and Interoperability" (p.13), is dedicated entirely to this important topic (open specifications and interoperability are nearly 100% interrelated): "Interoperability is the ability of systems and machines to exchange, process and correctly interpret information. It is more than just a technical challenge, as it also involves legal, organisational and semantic aspects of handling data" (p.13) "standards and open platforms offer opportunities for more cost-effective use of resources and delivery of services" (p.13). The Bad - Shies away from defining open standards, or even open specifications, the EU's preferred term for the key enabler of interoperability. Verdict: 90/100, a very respectable score.

    Read the article

  • Coding different states in Adventure Games

    - by Cardin
I'm planning out an adventure game, and can't figure out the right way to implement the behaviour of a level depending on the state of story progression. My single-player game features a huge world where the player has to interact with people in a town at various points in the game. However, depending on story progression, different things would be presented to the player; for example, the Guild Leader will change locations from the town square to various locations around the city; doors would only unlock at certain times of the day after finishing a particular routine; different cut-scene/trigger events happen only after a particular milestone has been reached. I naively thought of using a switch{} statement initially to decide what the NPC should say or where he can be found, and making quest objectives interactable only after checking a global game_state variable's condition. But I realised I would quickly run into a lot of different game states and switch-cases in order to change the behaviour of an object. That switch statement would also be massively hard to debug, and I guess it might also be hard to use in a level editor. So I thought, instead of having a single object with multiple states, maybe I should have multiple instances of the same object, each with a single state. That way, if I use something like a level editor, I can put an instance of the NPC at all the different locations he could ever appear at, and also an instance for each conversation state he has. But that means there'll be a lot of inactive, invisible game objects floating around the level, which might be trouble for memory, or simply hard to see in a level editor; I don't know. Or, simply, make an identical but separate level for each game state. This feels like the cleanest and most bug-free way to do things, but it also feels like massive manual work making sure each version of the level really is identical to the others. All my methods feel so inefficient, so to recap my question: is there a better or standardised way to implement the behaviour of a level depending on the state of story progression? PS: I don't have a level editor yet - thinking of using something like the JME SDK or making my own.
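One commonly suggested pattern here (offered as a sketch, not the one standard way) is to keep story progression as a set of named flags and make each NPC's placement and dialogue data-driven: an ordered list of possible states, each guarded by a condition on those flags, where the first matching state wins. All class, method and flag names below are hypothetical, in plain Java:

import java.util.*;
import java.util.function.Predicate;

// Story progression as a set of named flags, e.g. "bandits_defeated"
class StoryState {
    private final Set<String> flags = new HashSet<>();
    void set(String flag) { flags.add(flag); }
    boolean has(String flag) { return flags.contains(flag); }
}

// One possible configuration of an NPC: where he stands and what he says
class NpcState {
    final Predicate<StoryState> condition;
    final String location;
    final String dialogue;
    NpcState(Predicate<StoryState> condition, String location, String dialogue) {
        this.condition = condition;
        this.location = location;
        this.dialogue = dialogue;
    }
}

class Npc {
    // Ordered: the first state whose condition holds wins
    private final List<NpcState> states = new ArrayList<>();
    void addState(NpcState s) { states.add(s); }
    NpcState current(StoryState story) {
        for (NpcState s : states) {
            if (s.condition.test(story)) return s;
        }
        throw new IllegalStateException("NPC needs a default (always-true) state");
    }
}

public class QuestStateDemo {
    public static void main(String[] args) {
        StoryState story = new StoryState();
        Npc guildLeader = new Npc();
        guildLeader.addState(new NpcState(
            s -> s.has("bandits_defeated"), "docks", "You saved us all!"));
        guildLeader.addState(new NpcState(
            s -> true, "town_square", "Bandits roam the roads..."));  // default
        System.out.println(guildLeader.current(story).location); // town_square
        story.set("bandits_defeated");
        System.out.println(guildLeader.current(story).location); // docks
    }
}

Because the guard conditions and locations are plain data, the same table could live in level-editor files rather than code, which avoids both the giant switch and the duplicated levels.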

    Read the article

  • Silverlight Cream for November 26, 2011 -- #1175

    - by Dave Campbell
In this Issue: Michael Washington, Manas Patnaik, Jeff Blankenburg, Doug Mair, Jon Galloway, Richard Bartholomew, Peter Bromberg, Joel Reyes, Zeben Chen, Navneet Gupta, and Cathy Sullivan.

Above the Fold:
Silverlight: "Using ASP.NET PageMethods With Silverlight" - Peter Bromberg
WP7: "Leveraging Background Services and Agents in Windows Phone 7 (Mango)" - Jon Galloway
Metro/WinRT/Windows8: "Debugging Contracts using Windows Simulator" - Cathy Sullivan
LightSwitch: "LightSwitch: It Is About The Money (It Is Always About The Money)" - Michael Washington

Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.

From SilverlightCream.com:

LightSwitch: It Is About The Money (It Is Always About The Money) - Michael Washington has a very nice post up about LightSwitch apps in general and his opinion about their future use... based on what he and I have been up to, I tend to agree on all counts!

Accessing Controls from DataGrid ColumnHeader – Silverlight - Manas Patnaik's latest post is about using the VisualTreeHelper class to iterate through the visual tree to find the controls you need, including sample code.

31 Days of Mango | Day #18: Using Sample Data - Jeff Blankenburg's Day 18 in his 31-Day Mango quest is on sample data using Expression Blend; he begins with great links to his other Blend posts, followed by a nice sample-data tutorial and source.

31 Days of Mango | Day #19: Tilt Effects - Doug Mair returns to the reins of Jeff's 31-Days series with number 19, which is all about Tilt Effects, as seen in the Phone application when you select a user. Doug shows how to add this effect to your app.

Leveraging Background Services and Agents in Windows Phone 7 (Mango) - Jon Galloway has a WP7 post up discussing Background Services and how they all fit together. He has a great diagram of that as an overview, then a really nice discussion of each, followed up by his slides from DevConnections, and code.

Netflix on Windows 8 - This one isn't C#/XAML, but Richard Bartholomew has a Netflix on Windows 8 app running that bears notice.

Using ASP.NET PageMethods With Silverlight - Peter Bromberg has a post up demonstrating calling PageMethods from a Silverlight app using the ScriptManager control.

AWESOME Windows Phone Power Tool - Joel Reyes announced the release of a full-featured tool for side-loading apps to your WP7 device, available at CodePlex.

Microsoft Windows Simulator Rotation and Resolution Emulation - Zeben Chen discusses the Windows 8 Simulator a bit deeper with this code-laden post showing how to look at rotation- and orientation-aware apps and resolution.

First look at Windows Simulator - Navneet Gupta has a great intro post on using the simulator in VS2011 for Windows 8 apps. Four things you really need this for: Touch Emulation, Rotation, Different target resolutions, and Contracts.

Debugging Contracts using Windows Simulator - Cathy Sullivan shows how to debug W8 Contracts in VS2011... why, you ask? Because when you hit one in the debugger the target app disappears... but enter the simulator... check it out.

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find that most of the questions I got were from folks who wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production. My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won. During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a meta-data dictionary, and, worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool. I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards" (see the small example below), so it's just easier to grab a field and place it on the table you want to point to. There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one) and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but we changed it so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.
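For anyone puzzled by the "backwards" remark: in DDL, the foreign key is declared on the referencing (child) table and points at the referenced (parent) table, which is the reverse of how many people sketch the relationship. A minimal T-SQL illustration (the table and column names here are invented for the sketch):

CREATE TABLE dbo.Customer
(
    CustomerID int NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.SalesOrder
(
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL,
    -- The constraint lives on the child table and points back to the parent
    CONSTRAINT FK_SalesOrder_Customer
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID)
);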

    Read the article

  • Repository and Ticket management in a Windows Environment

    - by saifkhan
I've been using Axosoft's bug tracking application for a while; although an excellent piece of software, I had some issues with it:
· It was SLOOOW (both desktop and web). I don't care what Axosoft says, I tried multiple servers etc. I've been long enough in this field to tell you when something is not right with an app.
· The cost! It's not feasible for a small team.
I must say, though, that they have some nice features which are not commonly found in other bug tracking software. I won't list any here; I would prefer you download and try their app and see for yourself. In my quest to find a replacement, I tried a few. The successor had to satisfy the following:
· A 99.99% Windows environment.
· Bug tracking.
· Ticket management (power users and project managers can open tickets on projects).
· Repository (I decided to merge bug tracking and repository to get my team to be more productive).
· Unlimited users.
· Cost.
Being the head of IT security for the firm I work for, making the decision to move data offsite was a hard one, but it turned out to be one I am not regretting so far. My choice was down to Atlassian JIRA and codebaseHQ. I ended up going with the latter… (I still love GreenHopper from Atlassian… it's freaking cool!) codebaseHQ is nice and simple and has all the features I needed. I've been using them for a few months now and am very happy. Their pricing… well, see for yourself. I was also able to get our SVN data over to codebaseHQ… (Yes, SVN! I don't go near the Visual SourceSafe thing… it's not that safe (pardon the pun). I am hearing some nice things about TFS 2010.) We use VisualSVN to access repositories. …so if you are a Windows developer (or team), codebaseHQ is worth checking out!

    Read the article

  • Data Structures usage and motivational aspects

    - by Aubergine
Throughout my long student life I always wondered why there are so many data structures, yet so little apparent use for most of them. That opinion didn't really change when I got a job. We have brilliant books on what they are and their complexities, but I never encounter resources which actually give a good hint of practical usage. I perfectly understand that I should look at the problem, analyse the required operations, and look for a data structure that does them efficiently. However, in practice I never do that - not because of human laziness syndrome, but because when it comes to work I acknowledge time priority over self-development. Over time I thought that when I became a better developer I would automatically use more of them - that didn't happen at all, or maybe I just didn't notice. Then I found that my colleagues are usually in the same boat as me - knowing more or less some three data structures, being totally happy about it, and refusing to discuss the matter further with me, returning instead to conversations about 'cool new languages', 'libraries that do the job for you' and the joy of working under scrumban, etc. I am stuck with ArrayLists, Arrays and SortedMap, which no matter what I do always suffice, or which I tweak until they are capable of fulfilling my task. Yes, it might be inefficient, but do we really have to care, when Intel increases performance year after year whether or not we improve our skills? Does a new Xeon or IBM machine really care what we use? What if I like to build things, but am not particularly excited whether it is n log(n) or just n? Over twenty years the processing power has increased enormously - does that give us the freedom not to be critical about which structure to use? On top of that, new, more optimized languages appear which support multiple cores more efficiently. To be more specific: I would like to find motivational material on complex real areas/cases of effective usage of data structures, and I would be really grateful if you could provide relevant resources. There is a similar question, but in the end the links again mostly describe the structures or give dumb examples (vehicles, students or holy grail quests - yes, very relevant), and people keep referring to "the scenario decides the data structure to use". I want to know these complex scenarios so I can identify similarities to my own scenario and then apply them - the complex scenarios where it really matters, and not necessarily in a quantitative sense. It seems that the only concern around data structures is efficiency and nothing else; there seems to be no particular convenience for the developer in using one over another. (Only when I found scientific resources on why exactly simple carbohydrates are evil did I stop eating sugar and candies completely, replacing them with less harmful fruits - I hope you can see the analogy.)
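As a toy illustration of the "n log(n) or just n" point biting even on fast hardware - a rough sketch in Java, not a rigorous benchmark (JVM warm-up and timing noise are ignored here):

import java.util.*;

public class ContainsDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

        long t0 = System.nanoTime();
        list.contains(n - 1);          // O(n): scans all million elements
        long t1 = System.nanoTime();
        set.contains(n - 1);           // expected O(1): a single hash lookup
        long t2 = System.nanoTime();

        System.out.printf("list: %d us, set: %d us%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000);
    }
}

The gap between the two lookups grows linearly with n; a faster processor only shaves a constant factor off it, which is exactly the kind of scenario where the choice of structure still matters.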

    Read the article

  • Contract Lifecycle Management for Public Sector

    - by jeffrey.waterman
One thing Oracle never seems to get enough credit for is its consistent quest to improve its products, even ones as established as its back-office solutions. Here is another example of one of the latest improvements: Contract Lifecycle Management for Public Sector, or CLM, the latest EBS module geared specifically for the Federal acquisition community. Our existing customers have been asking Oracle for years to upgrade its Advanced Procurement Suite to meet the complex procurement processes of the Federal Government. You asked; we listened. Oracle, with direct input from Federal agencies, subject matter experts, integration partners, and the Federal acquisition community, has expanded and deepened its procurement suite to meet the unique demands of the Federal acquisition community. New benefits/features include:
· Contract Line Item/Sub-Line Item (CLIN/SLIN) structures
· Configurable Document Numbering
· Complex Pricing
· Contract Types (as per FAR Part 16)
· Option lines and exercising of options
· Incremental Funding capability
· Support for multiple document types (delivery orders, BPA call orders, awards, agreements, IDIQ contracts)
· Requisition lines to fund modifications
· Workload assignment and milestones
· Contract Action Reporting to FPDS-NG
I've been conducting many tests over the past few months and have been quite impressed with the depth of features and the seamless integration to Federal Financials, specifically the funds control within the financials. Again, thank you for reading. If you have suggestions for future posts, please leave them in the comments section and I'll take it from there.

    Read the article

• How to discriminate between two nodes with identical frequencies in a Huffman tree?

    - by Omega
Still on my quest to compress/decompress files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment. From the Wikipedia page, I quote: Create a leaf node for each symbol and add it to the priority queue. While there is more than one node in the queue: Remove the two nodes of highest priority (lowest probability) from the queue Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. Add the new node to the queue. The remaining node is the root node and the tree is complete. Now, emphasis: Remove the two nodes of highest priority (lowest probability) from the queue Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. So I have to take the two nodes with the lowest frequency. What if there are multiple nodes with the same lowest frequency? How do I decide which ones to use? The reason I ask is that Wikipedia has this image: And I wanted to see if my Huffman tree was the same. I created a file with the following content: aaaaeeee nnttmmiihhssfffouxprl And this was the result: Doesn't look so bad. But there clearly are some differences when multiple nodes have the same frequency. My questions are the following: What is Wikipedia's image doing to discriminate between nodes with the same frequency? Is my tree wrong? (Is Wikipedia's image method the one and only answer?) I guess there is one specific and strict way to do this, because for our school assignment, files that have been compressed by my program must be decompressable by my classmates' programs - so there must be a "standard" or "unique" way to do it. But I'm a bit lost on that. My code is rather straightforward. It literally just follows Wikipedia's listed steps. The way my code extracts the two nodes with the lowest frequency from the queue is to iterate over all nodes, and if the current node has a lower frequency than either of the two "smallest" known nodes so far, it replaces the higher one. Just like that.
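For what it's worth, any tie-break yields an optimal code - the trees differ in shape, not in total encoded length - so the usual fixes are either to agree on a deterministic tie-break rule, or to sidestep tree shape entirely with canonical Huffman codes, where only the code lengths are exchanged. A hedged sketch of the first option in Java (all class and field names here are invented for illustration): tag each node with a creation sequence number so that equal frequencies resolve the same way on every run.

import java.util.*;

class HuffNode {
    final int freq;
    final long order;          // creation sequence number: breaks frequency ties
    final HuffNode left, right;
    HuffNode(int freq, long order, HuffNode left, HuffNode right) {
        this.freq = freq; this.order = order;
        this.left = left; this.right = right;
    }
}

class TreeBuilder {
    private long counter = 0;

    HuffNode leaf(int freq) { return new HuffNode(freq, counter++, null, null); }

    HuffNode build(List<HuffNode> leaves) {
        // Equal frequencies fall back to creation order, so two runs over
        // the same input always produce the same tree shape.
        PriorityQueue<HuffNode> pq = new PriorityQueue<>(
                Comparator.comparingInt((HuffNode n) -> n.freq)
                          .thenComparingLong(n -> n.order));
        pq.addAll(leaves);
        while (pq.size() > 1) {
            HuffNode a = pq.poll(), b = pq.poll();
            pq.add(new HuffNode(a.freq + b.freq, counter++, a, b));
        }
        return pq.poll();
    }
}

For classmates' programs to interoperate, though, the whole class would still have to agree on the same rule (or on canonical codes); a sketch like the one above only makes one program consistent with itself.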

    Read the article

  • OPN Specialized Partner Activities at Collaborate 2012

    - by Get_Specialized!
If you're a Partner planning to attend the Collaborate 2012 event, April 22-26th in Las Vegas, the Oracle Partner Network (OPN) team members attending would welcome meeting you onsite. Whether you are interested in becoming a new Partner, or you are a long-standing Partner seeking an update on OPN programs or Partner Specialization, we welcome meeting with you 1-on-1. In fact, we might drop by your booth or session to further recognize you for your OPN Specialization accomplishments! If you are also participating in social media while at the event, let us know that as well. In addition, we are seeking to meet Partners, while at Collaborate 2012, who may be interested in speaking at Oracle OpenWorld about their OPN Specialization program accomplishments and customer successes. Understanding that Partners can be busy staffing their own booths, we welcome meeting you when the exhibit hall is closed. Or, if you want a break away from your booth, we are glad to meet on the exhibit hall floor at the Oracle Validated Integration Lounge - OAUG & Quest member Booth 1679. To learn more or to schedule a meeting onsite, contact us.

    Read the article

  • Meet This Year's Most Impressive WebCenter Customer Projects

    - by Michael Snow
Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects
Oracle OpenWorld Session – Tuesday Oct. 2, 2012: Moscone West, Room 3001 at 11:45AM
This year, the Oracle Excellence Awards had an amazing number of nominations. Each group at Oracle had the challenge of selecting the most innovative and game-changing nominations for their winners. The Fusion Middleware Innovation Awards, jointly sponsored by Oracle, OAUG, QUEST, ODTUG, IOUG, AUSOUG and UKOUG, honor organizations using Oracle Fusion Middleware to deliver unique business value. This year, the awards will recognize customers across eight distinct categories:
· Oracle Exalogic
· Cloud Application Foundation
· Service Integration (SOA) and BPM
· WebCenter
· Identity Management
· Data Integration
· Application Development Framework and Fusion Development
· Business Analytics (BI, EPM and Exalytics)
The nominations included the pioneers in our customer base using these solutions in innovative ways to achieve significant business value. Tune in this afternoon for a listing of the WebCenter winners.

    Read the article

  • The Birth of SSAS Compare

    - by Red Gate Software BI Tools Team
Noemi Moreno, Red Gate Business Intelligence Specialist

Software vendors – even Microsoft – tend to forget about the needs of business intelligence developers. We are a rare and rather invisible species. For example, BIDS remained in VS 2008 until SQL Server 2012. It took until this release before we got something as simple as an “undo” function. Before I joined Red Gate as a BI specialist, I worked on SQL Development. I’ll never forget the time I discovered Red Gate’s SQL Compare tool and how it reduced the task of preparing a database release from a couple of days to ten minutes. When I moved to SSAS, MDX and cubes, I became frustrated with the deployment process because I couldn’t find a tool that made Cube releases as easy as they are with SQL Compare. This became my quest. I pitched the idea to a few people in Red Gate’s regular Down Tools Week, when everyone puts down their day-to-day tasks and works on their own projects. My task was to reason with a roomful of cynical developers, hardened to the blandishments of project managers, for help to develop a tool that would compare two different SSAS databases and create the script to process only the objects that needed processing, thereby reducing release time to only a few minutes. I walked to the podium and gave them the full story of the distressed BI specialists, doomed to spend tedious hours preparing deployment scripts. A few developers recovered from their torpor to cast a languid eye at my presentation. It wasn’t enough. In a sudden impulse, I blurted out a promise to perform a flamenco dance for just the team if the tool was able to successfully compare two SSAS databases and generate a script by the end of the week. I was lucky enough that some of them believed me and jumped in: David Pond (Dev), Matt Burton (Dev), Tilman Bregler (Dev), Shobana Sekar (Test), Ruchija Raj (Test), Nick Sutherland (Product Manager) and Irma Tanovic (BI). They didn’t know that Irma and I would be away on a conference in Amsterdam and would leave them without our support. But to my surprise, they had a working tool by the time we came back – basic, and with a few bugs, but a working tool nonetheless! Seeing it compare a very basic SSAS database, detect the changes and generate the scripts was amazing! Something that normally takes half a day was done in under a minute. Since then, a few months have passed and a BI Tools team has been created at Red Gate to work full time on BI tools for BI developers, starting with SSAS Compare. How cool is that? So download the free beta and give us your feedback. And the flamenco? I still need to deliver that. Tilman reminds me every day! I need to get the full flamenco costume.

    Read the article

• So, I thought I wanted to learn frontend/web development and break out of my comfort zone...

    - by ripper234
I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSQL, caching - I feel very much at ease around these platforms/concepts. In the past few years, I started to taste end-to-end web programming, and recently I decided to take a job offer in a frontend team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all-around developer". Problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming that are missing in frontend work: More interesting problems - when I compare designing a server that handles massive data to adding another form to a page or changing the validation logic, I find the former a lot more interesting. Refactoring, refactoring, refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in JavaScript. IntelliSense and navigation - I hate looking at a bunch of JS code without instantly being able to know what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies... life is sweet. Auto-completion - just hit Ctrl-Space on an object to see what you can do with it. Easier to test - with almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual - fire up that browser and verify the site didn't break. I miss that feeling of "a green CI means everything is well with the world." Now, I've only seriously practiced frontend development for about two months, so this might seem premature... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning is that while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later in my career. Any suggestions or tips? Do you think I should give frontend programming "a proper chance" of at least six to twelve months before calling it quits? Could all my pains be growing pains that will magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more "backend stuff" later on, that it's worth gritting my teeth and continuing with my learning?

    Read the article

< Previous Page | 5 6 7 8 9 10 11 12 13  | Next Page >