Search Results

Search found 27852 results on 1115 pages for 'oracle openworld blog team'.


  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse will get another blog post later due to its complexity. The server has 60GB of memory (of which 48GB is dedicated to the SQL Server service), so not all of the data fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to generate the statements that compress all tables with PAGE compression:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
            SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                    + '.' + QUOTENAME(OBJECT_NAME(object_id))
                    + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
            FROM    sys.tables

        OPEN    curTables
        FETCH   NEXT FROM curTables INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH NEXT FROM curTables INTO @SQL
            END

        CLOSE      curTables
        DEALLOCATE curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by this size, the operation fails. In my case, I first ended up with orphaned connections. Not good. And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
            SELECT      'ALTER INDEX ' + QUOTENAME(name)
                        + ' ON '
                        + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                        + '.'
                        + QUOTENAME(OBJECT_NAME(object_id))
                        + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
            FROM        sys.indexes
            WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                        AND OBJECTPROPERTY(object_id, 'IsTable') = 1
            ORDER BY    CASE type_desc
                            WHEN 'CLUSTERED' THEN 1
                            ELSE 2
                        END

        OPEN    curIndexes
        FETCH   NEXT FROM curIndexes INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH NEXT FROM curIndexes INTO @SQL
            END

        CLOSE      curIndexes
        DEALLOCATE curIndexes

    When this was done, I noticed that the 90GB database was now only 17GB. And most importantly, the complete database could now reside in memory! After this I took care of the administrative tasks, such as backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the COMPRESSION option):

        BACKUP DATABASE [Yoda]
        TO   DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH NOFORMAT,
             INIT,
             NAME = N'Yoda - Full Database Backup',
             SKIP,
             NOREWIND,
             NOUNLOAD,
             COMPRESSION,
             STATS = 10,
             CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT  @BackupSetID = Position
        FROM    msdb..backupset
        WHERE   database_name = N'Yoda'
                AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    FILE = @BackupSetID,
                NOUNLOAD,
                NOREWIND
        GO

    After running the backup, the file size was reduced even further thanks to the zip-like backup compression algorithm introduced in SQL Server 2008. The file size? Only 9GB. //Peso
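
    As a postscript: before rebuilding everything, it can be worth estimating the savings per table first. A minimal sketch using SQL Server 2008's built-in estimation procedure (the schema and table names below are placeholders):

        -- Estimate PAGE compression savings for one table before committing to the rebuild.
        EXEC sp_estimate_data_compression_savings
             @schema_name      = 'dbo',        -- placeholder schema
             @object_name      = 'FactSales',  -- placeholder table
             @index_id         = NULL,         -- NULL = all indexes
             @partition_number = NULL,         -- NULL = all partitions
             @data_compression = 'PAGE';

    The procedure returns current and estimated sizes, which helps decide whether PAGE compression pays off for a given table.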

    Read the article

  • What's the difference between View Criteria and Where clause?

    - by frank.nimphius
    A View Criteria is a filter that you apply programmatically or by definition to a View Object instance. It augments the WHERE clause in a View Object query. Named View Criteria are defined in the Query panel of the View Object and are used:

    · In combination with the af:query component to build search forms. To do this, you drag and drop the View Criteria from the Named View Criteria node of the View Object in the Data Controls Panel. In the context menu, you then select the Query component - optionally with a result table.

    · To restrict a View Object instance in the Application Module model. For this, select a View Object instance in the right-hand list of the ADF Business Component Data Model panel. Use the Edit button to add a View Criteria to the View Object instance. This ensures that the View Object instance also runs with a query filter applied.

    View Criteria use bind variables for query conditions that you want to pass in dynamically at runtime. Besides the ability to apply View Criteria declaratively, you can apply them programmatically in Java. A WHERE clause, if added to a View Object query at design time, restricts all instances of this View Object, which usually is not what developers want. Because of these benefits - and the configuration options not explained above but covered in the product documentation referenced below - the recommendation is to use View Criteria. The product documentation explains View Criteria in chapter 5 of the Developer Guide: http://download.oracle.com/docs/cd/E15523_01/web.1111/b31974/bcquerying.htm#BCGIFHHF

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, hence why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though – it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it. Let's take a look at an example. Here we have a data flow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component. Looking inside the editor of the Merge Join: we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon). We are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence it is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output. Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning: "Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed." The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary, expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet  P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
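
    For reference, a sketch of the kind of source queries that would feed the two Merge Join inputs pre-sorted on the join key (the column lists are illustrative; on each source component you would also mark the output as IsSorted=True with SortKeyPosition=1 on [SalesOrderID]):

        -- Input 1: order headers, sorted on the join key
        SELECT SalesOrderID, OrderDate, CustomerID
        FROM   Sales.SalesOrderHeader
        ORDER BY SalesOrderID;

        -- Input 2: order details, sorted on the same key
        SELECT SalesOrderID, ProductID, OrderQty
        FROM   Sales.SalesOrderDetail
        ORDER BY SalesOrderID;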

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on New Cardinality Estimation, I received quite a few questions. Once I received these questions, I felt I should have clarified a few things earlier, when I started to write about cardinality. Before continuing with this blog post, I suggest you read the following two posts if you have not already: SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 and SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072.

    Q: Will the new cardinality estimation improve the performance of all of my queries?
    A: Remember, there is no 0 or 1 logic when it comes to estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014.

    Q: Is it possible that after changing cardinality estimation to the new logic by setting the compatibility level to 120, I get degraded performance for a few queries?
    A: Yes, it is possible. However, the number of queries impacted this way should be very small.

    Q: Can I still run my database at the older compatibility level and force a few queries to the newer cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION (QUERYTRACEON 2312);
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        GO

    Q: Can I run my database at the newer compatibility level and force a few queries to the older cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- New Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION (QUERYTRACEON 9481);
        GO

    I guess I have covered most of the questions I have received so far. If I have missed any questions, please send them again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
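
    As a postscript: one way to double-check which estimator a given query actually used is to look at the execution plan. The root of the showplan XML carries a CardinalityEstimationModelVersion attribute. A quick sketch:

        SET STATISTICS XML ON;
        SELECT [AddressID],[AddressLine1],[City] FROM [Person].[Address];
        SET STATISTICS XML OFF;
        -- In the returned showplan XML, check the StmtSimple element's
        -- CardinalityEstimationModelVersion attribute: 120 = new CE, 70 = old CE.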

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates, where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java.

    Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is in the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms). It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait.

    A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility). Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction.
        // PTLQueue :

        Put(v) :
            // producer : partial method - waits as necessary
            assert v != null
            assert Mask >= 1 && (Mask & (Mask+1)) == 0    // Document invariants
            // doorway step
            // Obtain a sequence number -- ticket
            // As a practical concern the ticket value is temporally unique
            // The ticket also identifies and selects a slot
            auto tkt = AtomicFetchIncrement (&PutCursor, 1)
            slot * s = &Slots[tkt & Mask]
            // waiting phase :
            // wait for slot's generation to match the tkt value assigned to this put() invocation.
            // The "generation" is implicitly encoded as the upper bits in the cursor
            // above those used to specify the index : tkt div (Mask+1)
            // The generation serves as an epoch number to identify a cohort of threads
            // accessing disjoint slots
            while s->Turn != tkt : Pause
            assert s->MailBox == null
            s->MailBox = v                  // deposit and pass message

        Take() :
            // consumer : partial method - waits as necessary
            auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
            slot * s = &Slots[tkt & Mask]
            // 2-stage waiting :
            // First wait for turn for our generation
            // Acquire exclusive "take" access to slot's MailBox field
            // Then wait for the slot to become occupied
            while s->Turn != tkt : Pause
            // Concurrency in this section of code is now reduced to just 1 producer thread
            // vs 1 consumer thread.
            // For a given queue and slot, there will be at most one Take() operation running
            // in this section.
            // Consumer waits for producer to arrive and make slot non-empty
            // Extract message; clear mailbox; advance Turn indicator
            // We have an obvious happens-before relation :
            // Put(m) happens-before corresponding Take() that returns that same "m"
            for
                T v = s->MailBox
                if v != null :
                    s->MailBox = null
                    ST-ST barrier
                    s->Turn = tkt + Mask + 1   // unlock slot to admit next producer and consumer
                    return v
                Pause

        tryTake() :
            // total method - returns ASAP with failure indication
            for
                auto tkt = TakeCursor
                slot * s = &Slots[tkt & Mask]
                if s->Turn != tkt : return null
                T v = s->MailBox               // presumptive return value
                if v == null : return null
                // ratify tkt and v values and commit by advancing cursor
                if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
                s->MailBox = null
                ST-ST barrier
                s->Turn = tkt + Mask + 1
                return v

    The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields, while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
    PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor, obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry: put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and the operations to increment those cursors serve double-duty: slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor, and a take() operation modifies the TakeCursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause), but it's trivial to implement per-slot parking and unparking to deschedule waiting threads.
    It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper.

    There's quite a bit of related literature in this area. I'll call out a few relevant references:

    · Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point: Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue, or queues effectively equivalent to PTLQueue, have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
    · Gottlieb et al: Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
    · Orozco et al: CB-Queue in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012.
    · Meneghin et al: BNPBV family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
    · Dmitry Vyukov: bounded MPMC queue (highly recommended)
    · Alex Otenko: US8607249 (highly related).
    · John Mellor-Crummey: Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
    · Thomasson: FIFO Distributed Bakery Algorithm (very similar to PTLQueue).
    · Scott and Scherer: Dual Data Structures

    I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory, otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable).

    As an aside, say we need to busy-wait for some condition as follows: "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into: "if C == 0 : for { Pause; if C != 0 : break; }".
    Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use: "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Complete Series of Database Coding Standards and Guidelines: SQL SERVER – Database Coding Standards and Guidelines – Introduction | Part 1 | Part 2 | Complete List Download

    Explanation and Example – SELF JOIN
    When all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. Examples of this type of data relate to Employee information, where the table may have both an Employee's ID number for each record and also a field that displays the ID number of an Employee's supervisor or manager. To retrieve the data, the table is required to relate/join to itself (see the self-join sketch after the 2009 notes below).

    Insert Multiple Records Using One Insert Statement – Use of UNION ALL
    This is a very interesting question I received from a new developer. How can I insert multiple values into a table using only one INSERT? Now this is an interesting question. When multiple records are to be inserted in the table, the following is the common way using T-SQL.

    Function to Display Current Week Date and Day – Weekly Calendar
    A straight blog post with a script to find the current week's date and day based on the parameters passed to the function.

    2008

    In my early years, I had almost the same confusion as many developers have in their early years. Here are two of the interesting questions which I attempted to answer back then. Even if you are an experienced developer, you may still like to read the following two questions: Order Of Column In Index; Order of Conditions in WHERE Clauses.

    Example of DISTINCT in Aggregate Functions
    Have you ever used DISTINCT with an aggregate function? Here is a simple example of how users can do it.

    Create a Comma Delimited List Using SELECT Clause From Table Column
    A straight-to-script example where I explain how to do something easily and quickly.

    Compound Assignment Operators
    SQL SERVER 2008 introduced the new concept of compound assignment operators. Compound assignment operators have been available in many other programming languages for quite some time. A compound assignment operator is an operator where a variable is operated upon and assigned in the same expression.

    PIVOT and UNPIVOT Table Examples
    Here is a very interesting question – the answer to the question can be both YES and NO: "If we PIVOT any table and UNPIVOT that table, do we get our original table?" Read the blog post to get the explanation.

    2009

    What is Interim Table – Simple Definition of Interim Table
    The interim table is a table that is generated by joining two tables and is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be possible that more tables are about to be joined onto the interim table, and more operations are still to be applied to that table (e.g. ORDER BY, HAVING, etc.). Besides, it may be possible that there is no interim table; sometimes the final table is what is generated when the query is run.
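
    For the SELF JOIN item above, a minimal sketch (the Employee table and its columns are hypothetical):

        -- Each employee row carries the ID of its manager, who is also an employee.
        SELECT  e.Name AS Employee,
                m.Name AS Manager
        FROM    Employee AS e
        LEFT JOIN Employee AS m          -- same table joined to itself
               ON e.ManagerID = m.EmployeeID;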
    2010

    Stored Procedure and Transactions
    If a stored procedure were transactional, it should roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a stored procedure does not, by itself, provide transactional behavior to a batch of T-SQL.

    Generate Database Script for SQL Azure
    When talking about SQL Azure, the most common complaint I hear is that a script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, for sure, but not anymore. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure.

    Convert IN to EXISTS – Performance Talk
    It is NOT necessary that replacing IN with EXISTS gives better performance every time. However, in the case listed above it does, for sure, give better performance. You can read about this subject in the associated blog post.

    Subquery or Join – Various Options – SQL Server Engine Knows the Best
    Every time there is a performance tuning exercise, I hear developers debate whether a subquery or a join is preferable. In this two-part blog post, I explain the same in detail with examples. Part 1 | Part 2

    Merge Operations – Insert, Update, Delete in Single Execution
    MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, by using the MERGE statement, we can include the logic of such data changes in one statement that updates the data when it is matched and inserts it when it is unmatched (a MERGE sketch follows at the end of this entry).

    2011

    Puzzle – Statistics are not updated but are Created Once
    Here is the quick scenario of my setup: create a table; insert 1,000 records; check the statistics; now insert 10 times more records (10,000); check the statistics – they will NOT be updated – WHY?

    Question to You – When to use Function and When to use Stored Procedure
    Personally, I believe that they are both different things - they cannot be compared. I can say it would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times, and in real life (i.e., production environments) I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question.

    2012

    In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a quiz series around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz: Guess the Next Value – Puzzle 1 | Puzzle 2 | Puzzle 3 | Puzzle 4

    Simple Example to Configure Resource Governor – Introduction to Resource Governor
    Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling resources on the SQL Server. If different workloads are running on SQL Server and each workload needs different resources, or when workloads compete for resources with each other and affect the performance of the whole server, Resource Governor becomes very important.
    Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video
    Why avoid SELECT *?
    · It retrieves unnecessary columns and increases network traffic
    · When new columns are added, views need to be refreshed manually
    · It leads to usage of a sub-optimal execution plan
    · It uses the clustered index in most cases instead of the optimal index
    · It is difficult to debug

    SQL SERVER – Load Generator – Free Tool From CodePlex
    The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well.

    A Puzzle – Swap Value of Column Without Case Statement
    Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example, if the value in the Gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male' (one possible answer is sketched at the end of this entry).

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
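
    For the MERGE item above, a minimal sketch of a single statement covering all three DML actions (the table and column names are hypothetical):

        MERGE dbo.TargetCustomers AS t
        USING dbo.StagingCustomers AS s
            ON t.CustomerID = s.CustomerID
        WHEN MATCHED THEN                      -- existing row: update it
            UPDATE SET t.Name = s.Name
        WHEN NOT MATCHED BY TARGET THEN        -- new row in staging: insert it
            INSERT (CustomerID, Name) VALUES (s.CustomerID, s.Name)
        WHEN NOT MATCHED BY SOURCE THEN        -- row gone from staging: delete it
            DELETE;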
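
    And for the swap puzzle above, one possible CASE-free answer, as a sketch (the People table is hypothetical):

        -- Join each row to a two-row lookup of old/new value pairs; no CASE needed.
        UPDATE p
        SET    p.Gender = v.NewValue
        FROM   People AS p
        JOIN   (VALUES ('male',   'female'),
                       ('female', 'male')) AS v (OldValue, NewValue)
               ON p.Gender = v.OldValue;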

    Read the article

  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of JavaServer Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked to build a seemingly simple ADF page can get extremely frustrated by the - in their eyes - unexpected or illogical behavior of ADF. They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other, more severe bugs that might be introduced by the values they choose for these properties. So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF". The abstract was accepted, and I started putting together the presentation and demo application. I built up the demo application step by step, trying to cover the JSF-related top issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component". I did not realize the sometimes immense implications of the ADF optimized lifecycle beforehand. I never thought that "Hello World" demos could get so complex. But as I went on, I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF. When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but no other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common; novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title: "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards.... I am still struggling with the title; for this blog post I used yet another one. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here. The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide, print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!

    Read the article

  • SQLAuthority News – Updates on Contests, Books and SQL Server

    - by pinaldave
    There are lots of things happening on this blog and I feel it is sometimes difficult to keep up. One of the suggestions I keep receiving is to have a single page one can visit to see all the updates. I did consider it at some point, but in the era of RSS feeds it is difficult to draw a proper audience to such a page. Here are a few updates on the various contests and book giveaways from recent times.

    Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend – I have received so many entries for this contest. Many have sent me email asking if this contest can be extended by a couple of days. Accordingly, the deadline for this contest is now Nov 10th, 7 AM. You can send your entries by that time. The prize of 2 combo sets of Joes 2 Pros is worth USD 444. If you have not taken part in the contest, please take part now.

    Guess What is in the box? – There were many entries for this contest. We played this contest on the blog as well as on Facebook. The answer to this contest was announced within 2 days, in the blog post announcing my new book. The winner was Manas Dash from Bangalore. He answered "The box will contain SQL book authored by Vinod and Pinal". This was the closest answer we received.

    The Win 5 SQL Programming Books contest will have its winners announced by Nov 15th, and winners will be notified by email. The Win 5 SQL Wait Stats Books contest is closed and winners have been sent their awards.

    My third book, SQL Server Interview Questions and Answers, ran out of stock in India within 36 hours of its launch. We are working very hard to make it available again. Thank you again for the excellent support! Without your participation, none of the giveaways would have any significance. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Java Spotlight Episode 85: Migrating from Spring to JavaEE 6

    - by Roger Brinkley
    Interview with Bert Ertman and Paul Bakker on migrating from Spring to Java EE 6. Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: Transactional Interceptors in Java EE 7; Larry Ellison and Mark Hurd on Oracle Cloud; Duke's Choice Award submissions open until June 15; Registration for the 2012 JVM Language Summit now open.

    Events: June 11-14, Cloud Computing Expo, New York City; June 12, Boulder JUG; June 13, Denver JUG; June 13, Eclipse Juno DemoCamp, Redwood Shores; June 13, JUG Münster; June 14, Java Klassentreffen, Vienna, Austria; June 18-20, QCon, New York City; June 20, 1871, Chicago; June 26-28, Jazoon, Zurich, Switzerland; July 5, Java Forum, Stuttgart, Germany; July 30-August 1, JVM Language Summit, Santa Clara.

    Feature Interview
    Bert Ertman is a Fellow at Luminis in the Netherlands. Next to his customer assignments he is responsible for stimulating innovation, knowledge sharing, coaching, technology choices and presales activities. Besides his day job he is a Java User Group leader for NLJUG, the Dutch Java User Group. He is a frequent speaker on Enterprise Java and software architecture related topics at international conferences (e.g. Devoxx, JavaOne, etc.) as well as an author and member of the editorial advisory board for the Dutch software development magazine Java Magazine. In 2008, Bert was honored with the coveted title of Java Champion by an international panel of Java leaders and luminaries. Paul Bakker is a senior software engineer at Luminis Technologies, where he works on the Amdatu platform, an open source, service-oriented application platform for web applications. He has a background as a trainer, where he taught various Java-related subjects. Paul is also a regular speaker at conferences and an author for the Dutch Java Magazine.

    Tutorials
    Part 1: http://howtojboss.com/2012/04/17/article-series-migrating-spring-applications-to-java-ee-6-part-1/
    Part 2: http://howtojboss.com/2012/04/17/article-series-migrating-spring-applications-to-java-ee-6-part-2/
    Part 3: http://howtojboss.com/2012/05/10/article-series-migrating-from-spring-to-java-ee-6-part-3/

    Mail Bag

    What's Cool: Sang Shin in EE team; @larryellison; JavaOne content selection is almost complete - notifications coming soon.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Error Actions

    - by pinaldave
    This blog post is inspired by SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 4. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to SQL Error Actions – A Primer. In that article we discussed the basic terminology of error handling. The article further covers the following important concepts of error handling (an introduction to SQL error actions):

    · Statement Termination
    · Scope Abortion
    · Batch Termination

    The above three are the most important concepts related to error handling in SQL Server. There are many more things one has to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to ON?
        1. You will get Statement Termination.
        2. You will get Scope Abortion.
        3. You will get Batch Abortion.
        4. You will get Connection Termination.
        5. SQL Server will pick the error action.

    2.) Which SQL Server error action happens for errors with a severity of 11-16 when you set the XACT_ABORT setting to OFF?
        1. You will get Statement Termination.
        2. You will get Scope Abortion.
        3. You will get Batch Abortion.
        4. You will get Connection Termination.
        5. SQL Server will pick the error action.

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have the chance.

    Solution
    1) 3
    2) 5

    Now let us check the answers; compare your answers to the ones above. I am very confident you got them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
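
    As a hands-on companion to the quiz, here is a minimal sketch you can run to watch the two error actions differ. It uses a primary key violation as the severity 11-16 error; the temp table is illustrative:

        CREATE TABLE #t (id INT PRIMARY KEY);

        SET XACT_ABORT OFF;          -- SQL Server picks the action; for a PK
        INSERT INTO #t VALUES (1);   -- violation it terminates just the statement
        INSERT INTO #t VALUES (1);   -- error 2627: this statement fails...
        INSERT INTO #t VALUES (2);   -- ...but the batch continues, row 2 is inserted
        GO

        SET XACT_ABORT ON;           -- the same error now aborts the whole batch
        INSERT INTO #t VALUES (2);   -- error 2627: batch terminates here
        INSERT INTO #t VALUES (99);  -- never runs
        GO

        SELECT * FROM #t;            -- returns rows 1 and 2 only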

    Read the article

  • Copying & Pasting Rows Between Grids in SQL Developer

    - by thatjeffsmith
    Apologies for slacking on the blogging front here lately. I'm still mentally hung over from Open World, and lots of things are going on behind the scenes here in Oracle-land. Whilst (love that word) blogging is part of my job, it's not the ONLY part of my job. So, a super-quick and dirty 'trick' this morning.

    Copying a Query Result Record as a New Row in a Table

    Copy and paste is something everyone 'gets.' I don't know who we have to thank for that, whether it's Microsoft or Xerox, but it's been ingrained in our way of dealing with all things computers. Almost to the detriment of some of our users – they'll use Copy and Paste when perhaps our Export feature is superior, but I digress. Where it does work just fine is when you want to create a new row in your table that matches a row you have retrieved from an executed query. Just click in the gutter or on the row number to get the entire row selected. Once you have your data selected, do your thing, i.e. Ctrl+C or Command/Apple+C or whatever. Now open your view or table editor, go to the data page, and ask for a new row. New record, no data. Paste in the data from the clipboard. It's smart enough to paste the separate values out to the separate columns. The clipboard saves the day, again. If your column orders are different, just change the order in the grids. If you have extra information, don't copy the entire row. I know, I know – "Jeff, this is too simple, why are you wasting our time here?" It seems intuitive, but how many of you actually tried this before reading it just now? I seem to get more positive feedback from the very basic user interface 101 tips than the esoteric click-click-click-ctrl-shift-click tricks I prefer to post. Lots of interesting stuff on tap, so stay tuned!

    Read the article

  • Go Big or Go Special

    - by Ajarn Mark Caldwell
    Watching Shark Tank tonight, the first presentation was by Mango Mango Preserves, and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities. <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company. </Spoiler> In fact, one of the sharks, Kevin O'Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff because he felt their costs were way out of line and would be their downfall if they didn't take action to radically cut costs. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work. I don't doubt he knows exactly what he's talking about and is 100% accurate…for doing business his way, which I'll call "Go Big". But there's a whole other way to do business today that would be ideal for these ladies to pursue. As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin's approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin. But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special). This capitalizes on the power of social media, niche marketing, and The Long Tail. One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog. I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach. Both are completely valid business models, and it may certainly be that the scale at which Kevin O'Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn't mean that there is not still a lot of money to be made there. I wish them the best of luck with their endeavors!

    Read the article

  • First Ever MySQL on Windows Online Forum - March 16, 2011

    - by monica.kumar
    Now you might be thinking…what's an Online Forum? Well, think of it as a virtual conference, where you can attend a series of presentations about a given topic from the comfort of your own office/home. On Wednesday, March 16th, from 9.00 am to 12.00 pm PT, we will be running the first ever MySQL Online Forum, dedicated to MySQL on Windows. Register now to learn how you can reduce your database TCO on Windows by up to 90% while increasing manageability & flexibility!

    Oracle's MySQL Vice President of Engineering Tomas Ulin will kick off a comprehensive agenda of presentations enabling you to better understand:

    · How you can save up to 90% by using MySQL on Windows
    · Why the world's most popular open source database is extremely popular on Windows, both for enterprise users and for embedding by ISVs
    · How MySQL is a great fit for the Windows environment, and what the upcoming milestones are to make MySQL even better on the Microsoft platform
    · What visual tools are at your disposal to effectively develop, deploy and manage MySQL applications on Windows
    · How you can deliver highly available, business critical, Windows based MySQL applications
    · Why Security Solutions Provider SonicWall selected MySQL over Microsoft SQL Server, and how they successfully deliver MySQL based solutions

    Plus, as we'll have Live Chat on during the entire forum, you'll be able to ask questions at any time to MySQL experts online. Register Now! Whether you're an ISV or an enterprise user, either already running MySQL on Windows or simply considering it, join us and learn how you can get performance, lower TCO and increased manageability & flexibility with MySQL on Windows!

    Read the article

  • Twitter API for Java - Hello Twitter Servlet (TOTD #178)

    - by arungupta
    There are a few Twitter APIs for Java that allow you to integrate Twitter functionality into a Java application. This is yet another API, built using the JAX-RS and Jersey stack. I started this effort earlier this year and kept delaying sharing it because I wanted to provide a more comprehensive API. But I've delayed enough, and am releasing it as a work-in-progress. I'm happy to take contributions in order to evolve this API and make it complete, useful, and robust. Drop a comment on the blog if you are interested or ping me at @arungupta. How do you get started? Just add the following to your "pom.xml":

        <dependency>
            <groupId>org.glassfish.samples</groupId>
            <artifactId>twitter-api</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>

    The implementation of this API uses the Jersey OAuth Filters for authentication with Twitter, and so the following dependencies are required for any API call that requires authentication, which is pretty much all of them ;-)

        <dependency>
            <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
            <artifactId>oauth-client</artifactId>
            <version>${jersey.version}</version>
        </dependency>
        <dependency>
            <groupId>com.sun.jersey.contribs.jersey-oauth</groupId>
            <artifactId>oauth-signature</artifactId>
            <version>${jersey.version}</version>
        </dependency>

    Once the dependencies are added to your project, inject the Twitter API into your Servlet (or any other Java EE component) as:

        @Inject Twitter twitter;

    Here is a simple non-secure invocation of the API to get you started:

        SearchResults result = twitter.search("glassfish", SearchResults.class);
        for (SearchResultsTweet t : result.getResults()) {
            out.println(t.getText() + "<br/>");
        }

    This code returns the tweets that match the query "glassfish". The source code for the complete project can be downloaded here. Download it, unzip, and "mvn package" will build the .war file. Then deploy it on GlassFish or any other Java EE 6 compliant application server! The source code for the API also acts as the javadocs and can be checked out from here. A more detailed sample using security and several other APIs from this library is coming soon!

    Read the article

  • ObjectStorageHelper<T> now available for Windows 8 RTM

    - by jamiet
    In October 2011 I wrote a blog post entitled ObjectStorageHelper<T> – A WinRT utility for Windows 8 where I introduced a little utility class called ObjectStorageHelper<T> that I had been working on while noodling around on the Developer Preview of Windows 8. ObjectStorageHelper<T> makes it easy for anyone building apps for Windows 8 to save data to files. How easy? As easy as this: var myPoco = new Poco() { IntProp = 1, StringProp = "one" }; var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local); await objectStorageHelper.SaveAsync(myPoco); Compare that to the plumbing code that you would otherwise have to write: var Obj = new Poco() { IntProp = 1, StringProp = "one" }; StorageFile file = null; StorageFolder folder = GetFolder(storageType); file = await folder.CreateFileAsync(FileName(Obj), CreationCollisionOption.ReplaceExisting); IRandomAccessStream writeStream = await file.OpenAsync(FileAccessMode.ReadWrite); using (Stream outStream = Task.Run(() => writeStream.AsStreamForWrite()).Result) { serializer.Serialize(outStream, Obj); await outStream.FlushAsync(); } and you can see how ObjectStorageHelper<T> can help save a Windows 8 developer quite a few headaches. ObjectStorageHelper<T> simply requires you to pass it an object to be saved, tell it where to save it (Roaming, Local or Temporary), and you’re done. Retrieving an object from storage is equally simple: var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local); var myPoco = await objectStorageHelper.LoadAsync(); Please check the homepage for the project at http://winrtstoragehelper.codeplex.com/ for (much) more info. A number of people have used and tested ObjectStorageHelper<T> since those early days, and one of those folks in particular, David Burela, was good enough to report a couple of bugs: "Saving Asynchronously" and "Save fails when class is in another project". As a result of David’s bug reports and some more extensive testing on my side, I have overhauled the initial code that I wrote last October and am confident that it is now much more robust and ready for primetime (check the commit history if you’re interested). The source code (which, again, you can find on Codeplex at http://winrtstoragehelper.codeplex.com/) includes a suite of unit tests covering all of the basic use cases (if you can think of any more, please let me know). If you use this in any of your Windows 8 projects then please let me know. I love getting feedback, and I’d also love to know if this is actually being used anywhere. @Jamiet

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating that 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organization through analysis, visualization, and decision. The Guide details each of these stages and the technologies supporting them: Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight before data is loaded into Hadoop. In addition, sensitive data can be pre-processed; for example, healthcare or financial services records can be anonymized before transfer to Hadoop. Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS. Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform. Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools. So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers and enable the delivery of highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including: - Sentiment analysis; - Marketing campaign analysis; - Customer churn modeling; - Fraud detection; - Research and development; - Risk modeling; - And more. As the guide discusses, Big Data promises a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
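
    To make the Organize and Decide steps concrete, the Sqoop transfers described above typically look like the sketch below; the connection string, credentials, table names, and HDFS paths are all hypothetical:

        # Organize: import a MySQL table into HDFS (host, table, and paths are assumptions)
        sqoop import \
          --connect jdbc:mysql://db.example.com/sales \
          --username etl_user -P \
          --table orders \
          --target-dir /data/raw/orders

        # Decide: export analysis results from HDFS back into MySQL
        sqoop export \
          --connect jdbc:mysql://db.example.com/sales \
          --username etl_user -P \
          --table order_insights \
          --export-dir /data/results/order_insights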

    Read the article

  • Cool Enhancements Everyone Can Enjoy

    - by Ruth
    With Release 17, we have a few visual and functional enhancements that make using CRM On Demand that much better for us all. I'll mention a few here, but to get the full outline of these upgrades, I recommend taking 10 minutes to view the Release 17 Usability Transfer of Information course. First and foremost, I find the ability to customize your theme (or skin) pretty cool, but I've said that before. Take a look at the Selecting Your Theme and the Themes - Create Your CRM Style blog articles for more information. My next favorite is the resizable user interface (UI). CRM On Demand will dynamically fit the device and screen resolution you're using, which includes the resizing of fields, field editors and pop-ups. If you have a wide screen like me, you should appreciate that one very much. To make it easier to see that resized UI, the detail pages got a little face lift. New horizontal lines and other subtle changes make those pages easier to read. Also, those things you need to know, like error messages and inline help, are highlighted with a little icon to show the message type. You may not think every change to the detail pages is particularly exciting, but I'm sure you'll enjoy the new Head Up Display, which saves you scrolling time by adding links to related information sections. I like that the head up display travels with me as I move up and down the page... it's like a little friend that takes me where I want to go as fast as possible. You may also really like the fact that the copy record feature is now available for all record types from both detail pages and lists. Your company administrator can choose which fields get copied, so you can maximize your efficiency when creating new records. Lists also got a face lift. Alternating colors in rows make it easier to see your data. Also, the Favorite Lists icon is now on the list itself, so you can save your most useful lists with one click. If you've ever tried to create a new list with 10 columns or more, you'll be happy to hear that the maximum number of columns in a list has increased from 9 to 20. This is great news, but it doesn't mean you should include the kitchen sink in your list... excess columns can slow list performance. So choose your columns wisely. Again, these are just a few of my favorite things. Let us know what you think about the new usability features. What are your favorite things?

    Read the article

  • how do you manage application performance reviews

    - by CoolBeans
    I have been trying to figure out how to do performance reviews effectively, before an install happens, for all releases done by our team. Do you usually make this part of the code review process, or do you handle it as a separate review task? FYI - we do not have a dedicated performance testing team; it is up to the developers to make sure the app performs well. The apps I am referring to are web applications.

    Read the article

  • Keeping up with New Releases

    - by Jeremy Smyth
    You can keep up with the latest developments in MySQL software in a number of ways, including various blogs and other channels. However, for the most correct (if somewhat dry and factual) information, you can go directly to the source. Major Releases: For every major release, the MySQL docs team creates and maintains a "nutshell" page containing the significant changes in that release. For the current GA release (whatever that is) you'll find it at this location: https://dev.mysql.com/doc/mysql/en/mysql-nutshell.html At the moment, this redirects to the summary notes for MySQL 5.6. The notes for MySQL 5.7 are also available on that website, at the URL http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html, and when that version eventually goes GA, it will become the page linked from the URL shown above. Incremental Releases: For more detail on each incremental release, you can have a look at the release notes for each revision. For MySQL 5.6, the release notes are stored at the following location: http://dev.mysql.com/doc/relnotes/mysql/5.6/en/ At the time I write this, the topmost entry is a link for MySQL 5.6.15. Each linked page shows the changes in that particular version, so if you are currently running 5.6.11 and are interested in what bugs were fixed in versions since then, you can look at each subsequent release and see all changes in glorious detail. One really clever thing you can do with that site is run an advanced Google search to find exactly when a feature was released and read its release notes. By using the preceding link in a "site:" directive, you can ask Google to search only within those pages for an entry. For example, the following Google search shows pages within the release notes that reference the --slow-start-timeout option:     site:http://dev.mysql.com/doc/relnotes/mysql/ "--slow-start-timeout" By running that search, you can see that the option was added in MySQL 5.6.5 and also rolled into MySQL 5.5.20. White Papers: Also, with each major release you can usually find a white paper describing what's new in that release. For MySQL 5.6 there was a "What's New" white paper at this location: http://www.mysql.com/why-mysql/white-papers/whats-new-mysql-5-6/ You'll find other white papers at: http://www.mysql.com/why-mysql/white-papers/ Search the page for "5.6" to see any papers dealing specifically with that version.

    Read the article

  • Ed Burns' Servlet 4/HTTP 2 Session at JavaOne 2014

    - by reza_rahman
    For the Java EE track at JavaOne 2014 we are highlighting some key sessions and speakers to better inform you of what you can expect, right up until the start of the conference. To this end we recently interviewed Ed Burns. Ed is a veteran of Sun and now Oracle. He has been, and continues to be, instrumental in pushing the JSF ecosystem forward as specification lead. Besides his specification lead work, Ed is well regarded as an author and speaker in his own right. In addition to carrying the JSF torch, Ed will be co-leading the key Servlet 4 specification for Java EE 8, along with Servlet specification guru Shing Wai Chan. The primary goal of Servlet 4 is to enable the fundamentally important changes in HTTP 2 for the entire server-side Java ecosystem. We wanted to talk to Ed about his Servlet 4 session at JavaOne 2014 and HTTP 2 generally. The details for the Servlet 4 session can be found here. Ed has several other key sessions on the track that we hope to talk to him about separately in the near future: What’s Next for JSF?: In this key session, Ed will be sharing the next steps for the continued evolution of the JSF specification in Java EE 8. Where’s My UI? The 2014 JavaOne Web App UI Smackdown: The UI space for web applications, especially in the Java ecosystem, continues to be as hotly contested as ever. This is especially true with the (re)introduction of JavaScript-based rich client frameworks like AngularJS. This lively panel brings together experts representing the diverse schools of thought for web UIs. Ed will be representing JSF, of course. Neal Ford will moderate the panel as an independent and hopefully reasonably neutral party. Adopt-a-JSR for Java EE 7 and Java EE 8: Adopt-a-JSR has been a reasonable success for Java EE 7. With Java EE 8 we are planning to strengthen it far more as a way of getting grassroots-level participation in the specification efforts. This session will introduce Adopt-a-JSR, share how it worked for Java EE 7, and explain what we plan to do with it in Java EE 8. Ed will be sharing his perspectives on Adopt-a-JSR for both Java EE 7 and Java EE 8. Besides Ed's sessions, we have a very strong program for the Java EE track and JavaOne overall - just explore the content catalog. If you can't make it, you can be assured that we will make key content available after the conference, just as we have always done.

    Read the article

  • Advice on Project Management Software?

    - by Zenph
    I was wondering: does anybody here work as part of a team, or as a project manager, and highly recommend a certain project management solution (self-hosted or otherwise)? Ideally I want something where I can manage the entire project and also manage the financial side of things. I should also add a few other requirements: notifications for team members for individual projects; version control integration (like Codebase); real-time collaboration, such as chat.

    Read the article

  • SQL Server Contains Equivalent

    - by Derek D.
    Many Oracle developers trying to find the SQL Server function compatible with their CONTAINS clause will most likely accidentally end up on this page. Therefore, this page will be devoted to them rather than to SQL Server’s CONTAINS function, which is used for full-text searching. The most similar function to Oracle’s CONTAINS is CHARINDEX. The usage [...]
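
    The excerpt cuts off before the usage, but the basic CHARINDEX pattern is standard T-SQL; here is a minimal sketch with hypothetical table and column names:

        -- CHARINDEX returns the 1-based position of the first match, or 0 if no match
        SELECT  ProductName
        FROM    Products
        WHERE   CHARINDEX('widget', ProductName) > 0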

    Read the article

  • Facebook Comments and page SEO

    - by Gaurav Gupta
    Facebook's recently launched commenting system for blogs loads comments in an iframe, instead of loading them inline. Since blog comments can often contribute significantly to a page's SEO, is it a good idea to use Facebook's system on my blog? Or does Google recognize iframe content as part of the page and treat it as such? (It's noteworthy that Disqus.com does not use iframes and loads all comments inline.)

    Read the article

  • Trying to install WordPress inside a Rails app with nginx and FastCGI

    - by pinouchon
    I have a Rails app (let's call it myapp) running at www.myapp.com. I want to add a WordPress blog at www.myapp.com/blog. The web server for the Rails app is Thin (see the upstream block). WordPress runs with php-fastcgi. The Rails app works fine. My problem is the following: in /home/myapp/myapp/log/error.log I get:

    2013/06/24 10:19:40 [error] 26066#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.138.20, server: www.myapp.com, request: "GET /blog/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.myapp.com"

    Here is the nginx conf file:

    upstream myapp {
        server unix:/tmp/thin_myapp.0.sock;
        server unix:/tmp/thin_myapp.1.sock;
        server unix:/tmp/thin_myapp2.sock;
    }

    server {
        listen 80;
        server_name www.myapp.com;
        client_max_body_size 20M;
        access_log /home/myapp/myapp/log/access.log;
        error_log /home/myapp/myapp/log/error.log error;
        root /home/myapp/myapp/public;
        index index.html;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;

            # Index HTML Files
            if (-f $document_root/cache/$uri/index.html) {
                rewrite (.*) /cache/$1/index.html break;
            }
            if (!-f $request_filename) {
                proxy_pass http://myapp;
                break;
            }
            # try_files /system/maintenance.html $uri $uri/index.html $uri.html @ruby;
        }

        location /blog/ {
            root /var/www/wordpress;
            fastcgi_index index.php;
            if (!-e $request_filename) {
                rewrite ^(.*)$ /blog/index.php?q=$1 last;
            }
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/wordpress$fastcgi_script_name;
            fastcgi_pass localhost:9000; # port to FastCGI
        }
    }

    Any ideas why that doesn't work? How do I make sure that php-fastcgi is configured properly? Edit: I tested whether FastCGI is listening using telnet:

    $> telnet 127.0.0.1 9000
    Trying 127.0.0.1...
    telnet: Unable to connect to remote host: Connection refused

    And it's not.
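
    Since the telnet test shows nothing listening on 127.0.0.1:9000, one thing to verify is that a PHP FastCGI process is actually started on that port. A typical php-fpm setup would look like the sketch below; the file path, and the service commands, are assumptions that vary by distribution:

        ; /etc/php5/fpm/pool.d/www.conf  (path is an assumption)
        ; Make php-fpm listen on the TCP address that nginx's fastcgi_pass points at
        listen = 127.0.0.1:9000

        # Then restart the service and re-run the telnet check (commands vary by distro)
        sudo service php5-fpm restart
        telnet 127.0.0.1 9000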

    Read the article

< Previous Page | 565 566 567 568 569 570 571 572 573 574 575 576  | Next Page >