Search Results

Search found 14809 results on 593 pages for 'oracle etl sap'.

  • Two BULK INSERT issues I worked around recently

    - by AaronBertrand
    Since I am still afraid of SSIS, and because I am dealing mostly with CSV files and table structures that are relatively simple and require only one of the three letters in the acronym "ETL," I find myself using BULK INSERT a lot. I have been meaning to switch to using CLR, since I am doing a lot of file system querying using xp_cmdshell, but I haven't had the chance to really explore it yet. I know, a lot of you are probably thinking, wow, look at all those bad habits. But for every person thinking...(read more)
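    For readers who have not used it, here is a minimal sketch of the kind of BULK INSERT statement the post is describing, issued from Python via pyodbc. The server, database, table and file path are hypothetical, and the real post's CSV handling may differ (error files, FIRSTROW, code pages and so on).

        import pyodbc

        # Hypothetical connection string, table and file path; adjust for your environment.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
            "DATABASE=Staging;Trusted_Connection=yes;", autocommit=True)

        bulk_insert = r"""
        BULK INSERT dbo.SalesStaging
        FROM 'C:\loads\sales.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2, TABLOCK);
        """
        # Note: the file path is resolved on the SQL Server machine, not the client.
        conn.cursor().execute(bulk_insert)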

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and until recently was an interesting concept, but due to practical constraints wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace? A rendering of the actual Mechanical Turk (from Wikipedia) Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1800's, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation. The field of human computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense then that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers. Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds) with a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about both a self-selection bias (characteristics which cause Turks to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd. 
Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham Flikr While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses haphazardly answering? In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those that considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor. These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raises some interesting questions. How complex of a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human computer interaction and enterprise computing. References: Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: screening mechanical turk workers. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI '10. ACM, New York, NY, 2399-2402. 
Link: http://doi.acm.org/10.1145/1753326.1753688 Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: shifting demographics in mechanical turk. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. Link: http://doi.acm.org/10.1145/1753846.1753873
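    A quick back-of-the-envelope check of the screening numbers quoted above (a sketch only; the article gives the participant count and percentages approximately):

        # Figures quoted in the article: ~2000 participants, 88% passed the easy
        # screening question, 64% passed the harder one, ~61% passed both.
        participants = 2000
        easy, hard, both = 0.88, 0.64, 0.61

        print(round(participants * both))      # ~1220 answered both questions correctly
        print(round((1 - easy) * 100))         # ~12% missed even the easy check ("gaming")
        print(round((1 - both) * 100))         # ~39% (up to ~40%) did not pass both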

    Read the article

  • SSIS Training 15-19 Oct in Reston Virginia

    - by andyleonard
    Early bird registration is now open for Linchpin People ’s SSIS training course From Zero To SSIS scheduled for 15-19 Oct 2012 in Reston Virginia! Register today – the early bird discount ends 28 Sep 2012. Training Description From Zero to SSIS was developed by Andy Leonard to train technology professionals in the fine art of using SQL Server Integration Services (SSIS) to build data integration and Extract-Transform-Load (ETL) solutions. The training is focused around labs and emphasizes a hands-on...(read more)

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonable well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background an X4800 system consists of 8 Intel X7560 Xeons . Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's not central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant. 
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip-time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, as the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. 
There are myriad variations that have been implemented, and unfortunately vendor terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings). And the last-level on-chip cache (LLC) is partitioned in banks or slices, which with each slice being associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
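    The client/server mailbox experiment described above is easy to mock up. The sketch below is my own illustration, not Dave's benchmark harness: two processes share a request word and a response word, and os.sched_setaffinity() (Linux only) stands in for binding each side to cores on a chosen node. Real NUMA page homing (first-touch placement of each mailbox) is outside what plain Python controls, so treat this purely as a picture of the communication pattern.

        import multiprocessing as mp
        import os

        def server(req, resp, cpus):
            os.sched_setaffinity(0, cpus)      # pin to cores on the "server" node (Linux only)
            last = 0
            while last < 1000:
                r = req.value                  # poll the request mailbox
                if r != last:
                    last = r
                    resp.value = r + 1         # write the reply into the response mailbox

        def client(req, resp, cpus):
            os.sched_setaffinity(0, cpus)      # pin to cores on the "client" node
            for i in range(1, 1001):
                req.value = i                  # write the request
                while resp.value != i + 1:     # busy-wait on the response variable
                    pass

        if __name__ == "__main__":
            req = mp.Value("q", 0)             # request mailbox (shared memory word)
            resp = mp.Value("q", 0)            # response mailbox
            s = mp.Process(target=server, args=(req, resp, {0}))
            c = mp.Process(target=client, args=(req, resp, {1}))
            s.start(); c.start(); c.join(); s.join()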

    Read the article

  • Business Intelligence avec SQL Server 2008 R2 by Sébastien Fantini, reviewed by Celinio Fernandes

    Hello. The DVP editorial team has read the following book for you: Business Intelligence avec SQL Server 2008 R2 by Sébastien Fantini, published by Editions ENI [IMG]http://images-eu.amazon.com/images/P/274605566X.08.LZZZZZZZ.jpg[/IMG] Quote: This book on Business Intelligence (BI) with SQL Server 2008 R2 is aimed at every member of a BI team: project manager, architect, ETL developer,...

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver.

    However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. With multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission."

    Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold ssthresh value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc.

    When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single maximum segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.

    Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since it often comes from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out of order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd >= ssthresh):

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::connect-request
        / start[args[1]->cs_cid] == 0 /
        {
                start[args[1]->cs_cid] = 1;
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
        {
                @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd >= args[3]->tcps_cwnd_ssthresh /
        {
                @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

    As we can see, the script only works on connections initiated after it is started (using the start[] associative array, indexed by connection ID, to flag new connections with start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd >= ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

        # dtrace -s tcp_slow_start.d
        ^C
        ALGORITHM              RADDR            RPORT   #SEG
        Slow start             10.153.125.222      20      6
        Slow start             138.3.237.7         80     14
        Slow start             10.153.125.222      21     18
        Congestion avoidance   10.153.125.222      20   1164

    We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start.

    For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS.

    And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.

        # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
        dtrace: description 'tcp:::send ' matched 10 probes
        ^C

               80
                  value  ------------- Distribution ------------- count
                     -1 |                                         0
                      0 |@@@@@@                                   5
                      1 |                                         0
                      2 |                                         0
                      4 |                                         0
                      8 |                                         0
                     16 |                                         0
                     32 |                                         0
                     64 |                                         0
                    128 |                                         0
                    256 |                                         0
                    512 |                                         0
                   1024 |                                         0
                   2048 |@@@@@@@@@                                8
                   4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                   8192 |                                         0
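    To make the growth pattern concrete, here is a small Python sketch (not part of the original post) that models the exponential-then-linear behaviour described above. The MSS, ssthresh and per-RTT simplification are made-up illustration values, not measurements from the DTrace output.

        # Toy model of cwnd growth: roughly doubles per RTT under slow start,
        # grows by one MSS per RTT under congestion avoidance.
        MSS = 1460
        ssthresh = 8 * MSS
        cwnd = 1 * MSS

        for rtt in range(1, 9):
            if cwnd < ssthresh:
                cwnd *= 2                # slow start
                phase = "slow start"
            else:
                cwnd += MSS              # congestion avoidance
                phase = "congestion avoidance"
            print(f"RTT {rtt}: cwnd = {cwnd // MSS} * MSS ({phase})")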

    Read the article

  • Principles of an extensible data proxy

    - by Wesley
    There is a growing industry now with more than 30 companies playing in the Backend-As-A-Service (BaaS) market. The principle is simple: give companies a secure way of exposing data housed on premises and behind the firewall publicly. This can include database data, as well as Legacy PC data through established connectors; SAP for example provides a connector for transacting with their legacy systems. Early attempts were fixed providers for specific systems like SAP, IBM or Oracle, but the new breed is extensible, allowing Channel Partners and Consultants to build robust integration applications that can consume whatever data sources the client wants to expose. I just happen to be close to finishing a Cloud Based HTML5 application platform that provides robust integration services, and I would like to break ground on an extensible data proxy to complete the system. From what I can gather, I need to provide either an installable web service of some kind, or a Cloud service which the client can configure with VPN for interactions. Then I can build in connectors, which can be activated with a service account, and expose those transactions via web services of some kind (JSON, SOAP, etc). I can also provide a framework that allows people to build in their own connectors, and use some kind of schema to hook those connectors into the proxy. The end result is some kind of public facing web service that could securely be consumed by applications to show data through HTML5 on any device. My gut is, this isn't as hard as it sounds. Almost all of the 30+ companies (With more popping up almost weekly) have all come into existence in the last 18 months or so, which tells me either the root technology, or the skillset to create the technology is in abundance right now. Where should I start on this? Are there some open source projects I can leverage? A specific group of developers I can hire? I'm confident someone here can set me on the right path and save me some time. You don't see this many companies spring up this rapidly if they are all starting from scratch with proprietary technology. The Register: WTF is BaaS One Minute Video from Kony on their BaaS
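    As a starting point for the "extensible connector" idea described above, here is a rough Python sketch of the shape such a proxy core might take: a connector interface plus a registry, with the actual SAP/Oracle/IBM adapters and the HTTP, VPN and security layers left out. All names here are invented for illustration.

        from abc import ABC, abstractmethod

        class Connector(ABC):
            """One backend system (SAP, Oracle, a legacy DB...) behind a uniform API."""
            @abstractmethod
            def query(self, resource: str, params: dict) -> list[dict]: ...

        class ConnectorRegistry:
            def __init__(self):
                self._connectors = {}

            def register(self, name: str, connector: Connector):
                self._connectors[name] = connector      # partners plug their own connectors in here

            def fetch(self, name: str, resource: str, **params) -> list[dict]:
                # A real proxy would wrap this call with auth, auditing and schema validation,
                # then serialize the result as JSON for the public-facing web service.
                return self._connectors[name].query(resource, params)

        class DummyOrdersConnector(Connector):
            def query(self, resource, params):
                return [{"resource": resource, "filter": params}]   # stand-in for a backend call

        registry = ConnectorRegistry()
        registry.register("erp", DummyOrdersConnector())
        print(registry.fetch("erp", "orders", region="EMEA"))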

    Read the article

  • APress Deal of the Day 2/June/2014 - Pro SQL Server 2012 Integration Services

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/02/apress-deal-of-the-day-2june2014---pro-sql-server.aspx Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430236924 is Pro SQL Server 2012 Integration Services. “Pro SQL Server 2012 Integration Services is your key to building powerful extract, transform, load (ETL) solutions using SQL Server 2012 Integration Services (SSIS).”

    Read the article

  • Using The Data Mining Query Task in SSIS

    SQL Server Integration Services (SSIS) is a Business Intelligence tool which can be used by database developers or administrators to perform Extract, Transform & Load (ETL) operations. In my previous article Using Analysis Services Processing Task & Analysis Services ... [Read Full Article]

    Read the article

  • Presenting Loading Data Warehouse Partitions with SSIS 2012 at SQL Saturday DC!

    - by andyleonard
    Join Darryll Petrancuri and me as we present Loading Data Warehouse Partitions with SSIS 2012 Saturday 8 Dec 2012 at SQL Saturday 173 in DC ! SQL Server 2012 table partitions offer powerful Big Data solutions to the Data Warehouse ETL Developer. In this presentation, Darryll Petrancuri and Andy Leonard demonstrate one approach to loading partitioned tables and managing the partitions using SSIS 2012, and reporting partition metrics using SSRS 2012. Objectives A practical solution for loading Big...(read more)
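    For readers unfamiliar with the pattern the session describes, the usual trick is to bulk-load a staging table and then switch it into the partitioned fact table as a metadata-only operation. Below is a hypothetical sketch of that final step driven from Python via pyodbc; the DSN, table and partition number are invented, and the presenters' actual approach may differ.

        import pyodbc

        conn = pyodbc.connect("DSN=MyWarehouse", autocommit=True)   # hypothetical DSN
        partition_number = 42                                       # partition the staged data maps to

        # Once SSIS has loaded dbo.FactSales_Staging with matching schema, constraints and
        # filegroup, switching it in is a metadata-only operation, so it is effectively instant.
        conn.cursor().execute(
            f"ALTER TABLE dbo.FactSales_Staging "
            f"SWITCH TO dbo.FactSales PARTITION {partition_number};"
        )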

    Read the article

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c - SQL Text Expansion.  It will come in handy in two cases:You are asked to tune what looks like a simple query - maybe a two table join with simple predicates.  But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15 page query, not a two liner.You are asked to take a look at a query against tables with VPD (virtual private database) policies.  In order words, you have no idea what you are trying to 'tune'.A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example - take the common view ALL_USERS - we can now:ops$tkyte%ORA12CR1> variable x clobops$tkyte%ORA12CR1> begin  2          dbms_utility.expand_sql_text  3          ( input_sql_text => 'select * from all_users',  4            output_sql_text => :x );  5  end;  6  /PL/SQL procedure successfully completed.ops$tkyte%ORA12CR1> print xX--------------------------------------------------------------------------------SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED" "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME","A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND "A4"."TYPE#"=1) "A1"Now it is easy to see what query is really being executed at runtime - regardless of how many views of views you might have.  You can see the expanded text - and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best.  Christian Antognini wrote up a way to sort of see it - but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/.  But now with this function - it becomes rather trivial to see the expanded SQL - after the VPD has been applied.  
We can see this by setting up a small table with a VPD policy ops$tkyte%ORA12CR1> create table my_table  2  (  data        varchar2(30),  3     OWNER       varchar2(30) default USER  4  )  5  /Table created.ops$tkyte%ORA12CR1> create or replace  2  function my_security_function( p_schema in varchar2,  3                                 p_object in varchar2 )  4  return varchar2  5  as  6  begin  7     return 'owner = USER';  8  end;  9  /Function created.ops$tkyte%ORA12CR1> begin  2     dbms_rls.add_policy  3     ( object_schema   => user,  4       object_name     => 'MY_TABLE',  5       policy_name     => 'MY_POLICY',  6       function_schema => user,  7       policy_function => 'My_Security_Function',  8       statement_types => 'select, insert, update, delete' ,  9       update_check    => TRUE ); 10  end; 11  /PL/SQL procedure successfully completed.And then expanding a query against it:ops$tkyte%ORA12CR1> begin  2          dbms_utility.expand_sql_text  3          ( input_sql_text => 'select * from my_table',  4            output_sql_text => :x );  5  end;  6  /PL/SQL procedure successfully completed.ops$tkyte%ORA12CR1> print xX--------------------------------------------------------------------------------SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE "A2"."OWNER"=USER@!) "A1"Not an earth shattering new feature - but extremely useful in certain cases.  I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

    Read the article

  • Slowly Changing Dimensions handling in PowerPivot (and BISM?)

    - by Marco Russo (SQLBI)
    During the PowerPivot Workshop in London we received many interesting questions and Alberto had the inspiration to write this nice post about Slowly Changing Dimensions handling in PowerPivot. It is interesting the consideration about SCD Type I attributes in a SCD Type II dimension – you can probably generate them in a more dynamic way in PowerPivot (thanks to Vertipaq and DAX) instead of relying on a relational table containing all the data you need, which usually requires a more complex ETL process....(read more)
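    For context on what the "more complex ETL process" normally has to do, here is a tiny, generic sketch of SCD Type 2 handling (expire the current row, insert a new version). It is plain Python over an in-memory list, purely illustrative, and unrelated to how PowerPivot or DAX handle this internally.

        from datetime import date

        dim_customer = [
            {"key": 1, "customer_id": "C001", "city": "Lyon",
             "valid_from": date(2009, 1, 1), "valid_to": None, "is_current": True},
        ]

        def scd2_update(dim, customer_id, new_city, as_of):
            """Type 2: close the current row and append a new version."""
            for row in dim:
                if row["customer_id"] == customer_id and row["is_current"]:
                    if row["city"] == new_city:
                        return                      # nothing changed, nothing to do
                    row["valid_to"], row["is_current"] = as_of, False
            dim.append({"key": max(r["key"] for r in dim) + 1, "customer_id": customer_id,
                        "city": new_city, "valid_from": as_of, "valid_to": None,
                        "is_current": True})

        scd2_update(dim_customer, "C001", "Paris", date(2011, 2, 1))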

    Read the article

  • Free Webinar - Using Enterprise Data Integration Dashboards

    - by andyleonard
    Join Kent Bradshaw and me as we present Using Enterprise Data Integration Dashboards Tuesday 11 Dec 2012 at 10:00 AM ET! If data is the life of the modern organization, data integration is the heart of an enterprise. Data circulation is vital. Data integration dashboards provide enterprise ETL (Extract, Transform, and Load) teams near-real-time status supported with historical performance analysis. Join Linchpins Kent Bradshaw and Andy Leonard as they demonstrate and discuss the benefits of data...(read more)

    Read the article

  • OS8- AK8- The bad news...

    - by Steve Tunstall
    Ok I told you I would give you the bad news of AK8 to go along with all the cool new stuff, so here it is. It's not that bad, really, just things you need to be aware of. First, the 2013.1 code is being called OS8, AK8 and 2013.1 by different people. I mean different people INSIDE Oracle!! It was supposed to be easy, but it never is. So for the rest of this blog entry, I'm calling it AK8. AK8 is not compatible with the 7x10 series. Ever. The 7x10 series is not supported with AK8, and if you try to upgrade one, it will fail at the healthcheck. All 7x20 series, all of them regardless of age, are supported with AK8. Drive trays. Let's talk about drive trays and SAS cards. The older drive trays for the 7x20 series were called the "Riverwalk 2" or "DS2" trays. They were technically the "J4410" series JBODs that Sun used to sell a la carte before we stopped selling JBODs. Don't get me started on that, it still makes me mad. We used these for many years, and you can still buy them right now until December 15th, 2013, when they will no longer be sold. The DS2 tray only came as a 4u, 24 drive shelf. It held 3.5" drives, and you had a choice of 2TB, 3TB, 300GB or 600GB drives. The SAS HBA in the 7x20 series was called a "Thebe" card, with a part # of 7105394. The 7420, for example, came standard with two of these "Thebe" cards for connecting to the disk trays. Two Thebe cards could handle up to 12 trays, so one would add two more cards to go to 24 trays, or have up to six Thebe cards to handle 36 trays. This card was for external SAS only. It did not connect to the internal OS drives or the Readzillas, both of which used the internal SCSI controller of the server. These Riverwalk 2 trays ARE supported with AK8. You can upgrade your older 7420 or 7320, no problem, as-is. The much older Riverwalk 1 trays or J4400 trays are NOT supported by AK8. However, they were only used by the 7x10 series, and we already said that the 7x10 series was not supported. Here's where it gets tricky. Since last January, we have been selling the new style disk trays. We call them the "DE2-24P" and the "DE2-24C" trays. The "C" tray is for capacity drives, which are 3.5" 3TB or 4TB drives. The "P" trays are for performance drives, which are 2.5" 300GB and 900GB drives. These trays are NOT Riverwalk 2 trays, even though the "C" series may kind of look like it. Different manufacturer and different firmware. They are not new. Like I said, we've been selling them with the 7x20 series since last January. They are the only disk trays we will be selling going forward. Of course, AK8 supports them. So what's the problem? The problem is going to be for people who have to mix drive trays. Remember, your older 7x20 series has Thebe SAS2 HBAs. These have 2 SAS ports per card.  The new ZS3-2 and ZS3-4 systems, however, have the new "Thebe2" SAS2 HBAs. These Thebe2 cards have 4 ports per card. This is very cool, as we can now do more SAS channels with less cards. Instead of needing 4 SAS cards to grow to 24 trays like we did with the old Thebe cards, I can now do 24 trays with only 2 Thebe2 cards. This means more IO slots for fun things like Infiniband and 10G. So far, so good, right? These Thebe2 cards work with any disk tray. You can even mix older DS2 trays with the newer DE2 trays in the same system, as long as you have Thebe2 cards. Ah, there's your problem. You don't have Thebe2 cards in your old 7420, do you? Well, I told you the bad news wasn't that bad, right? We can take out your Thebe cards and replace them with Thebe2. 
You can then plug your older DS2 trays right back in, and also now get newer DE2 trays going forward. However, it's important that the trays are on different SAS channels. You can mix them in the same system, but not on the same channel. Ask your local SC if you need help with the new cable layout. By the way, the new ZS3-2 and ZS3-4 systems also include a new IO card called "Erie" cards. These are for INTERNAL SAS to the OS drives and the Readzillas. So those are now SAS2 instead of SATA like the older models. Yes, the Erie card uses an IO slot, but that's OK, because the Thebe2 cards allow us to use less SAS HBAs to grow the system, right? That's it. Not too much bad news and really not that bad. AK8 does not support the 7x10 series, and you may need new Thebe2 cards in your older systems if you want to add on newer DE2 trays. I think we can all agree that there are worse things out there. Like our Congress.   Next up.... More good news and cool AK8 tricks. Such as virtual NICS. 

    Read the article

  • Accenture is recruiting Java/J2EE experts in Paris, Nantes and Toulouse

    Accenture is recruiting Java/J2EE experts in Paris, Nantes and Toulouse. Whether you are an intern, a recent graduate or an experienced expert, Accenture is recruiting developers and Java/J2EE design engineers for its subsidiary Accenture Technology Solutions. The profiles sought, whether functional or technical, particularly concern SAP, Java/J2EE, testing, Oracle, BI (Business Intelligence) and business analysis (AMOA) expertise. Quote: Join an international group of more than 240,000...

    Read the article

  • SQLAuthority News - Interesting Whitepaper - We Loaded 1TB in 30 Minutes with SSIS, and So Can You

    In February 2008, Microsoft announced a record-breaking data load using Microsoft SQL Server Integration Services (SSIS): 1 TB of data in less than 30 minutes. That data load, using SQL Server Integration Services, was 30% faster than the previous best time using a commercial ETL tool. This paper outlines what it took: the software, hardware, [...]
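    To put the headline number in perspective, a quick arithmetic sketch of the sustained throughput that load implies:

        tb_loaded = 1.0                 # terabytes, as reported
        minutes = 30
        gb_per_second = tb_loaded * 1024 / (minutes * 60)
        print(f"~{gb_per_second:.2f} GB/s sustained")   # roughly 0.57 GB/s, or ~34 GB per minute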

    Read the article

  • Accenture is recruiting for its IT roles in Paris, Nantes and Toulouse

    Accenture is recruiting for its IT roles in Paris, Nantes and Toulouse. Whether you are an intern, a recent graduate or an experienced professional, Accenture is recruiting developers (Bac+2/3 and above) and design engineers (Bac+5) for its subsidiary Accenture Technology Solutions. The profiles sought, whether functional or technical, cover SAP, Java/J2EE, Oracle and BI expertise. Quote: Join an international group of more ...

    Read the article

  • Making Those PanelBoxes Behave

    - by Duncan Mills
    I have a little problem to solve earlier this week - misbehaving <af:panelBox> components... What do I mean by that? Well here's the scenario, I have a page fragment containing a set of panelBoxes arranged vertically. As it happens, they are stamped out in a loop but that does not really matter. What I want to be able to do is to provide the user with a simple UI to close and open all of the panelBoxes in concert. This could also apply to showDetailHeader and similar items with a disclosed attrubute, but in this case it's good old panelBoxes.  Ok, so the basic solution to this should be self evident. I can set up a suitable scoped managed bean that the panelBoxes all refer to for their disclosed attribute state. Then the open all / close commandButtons in the UI can simply set the state of that bean for all the panelBoxes to pick up via EL on their disclosed attribute. Sound OK? Well that works basically without a hitch, but turns out that there is a slight problem and this is where the framework is attempting to be a little too helpful. The issue is that is the user manually discloses or hides a panelBox then that will override the value that the EL is setting. So for example. I start the page with all panelBoxes collapsed, all set by the EL state I'm storing on the session I manually disclose panelBox no 1. I press the Expand All button - all works as you would hope and all the panelBoxes are now disclosed, including of course panelBox 1 which I just expanded manually. Finally I press the Collapse All button and everything collapses except that first panelBox that I manually disclosed.  The problem is that the component remembers this manual disclosure and that overrides the value provided by the expression. If I change the viewId (navigate away and back) then the panelBox will start to behave again, until of course I touch it again! Now, the more astute amoungst you would think (as I did) Ah, sound like the MDS personalizaton stuff is getting in the way and the solution should simply be to set the dontPersist attribute to disclosed | ALL. Alas this does not fix the issue.  After a little noodling on the best way to approach this I came up with a solution that works well, although if you think of an alternative way do let me know. The principle is simple. In the disclosureListener for the panelBox I take a note of the clientID of the panelBox component that has been touched by the user along with the state. This all gets stored in a Map of Booleans in ViewScope which is keyed by clientID and stores the current disclosed state in the Boolean value.  The listener looks like this (it's held in a request scope backing bean for the page): public void handlePBDisclosureEvent(DisclosureEvent disclosureEvent) { String clientId = disclosureEvent.getComponent().getClientId(FacesContext.getCurrentInstance()); boolean state = disclosureEvent.isExpanded(); pbState.addTouchedPanelBox(clientId, state); } The pbState variable referenced here is a reference to the bean which will hold the state of the panelBoxes that lives in viewScope (recall that everything is re-set when the viewid is changed so keeping this in viewScope is just fine and cleans things up automatically). 
The addTouchedPanelBox() method looks like this: public void addTouchedPanelBox(String clientId, boolean state) { //create the cache if needed this is just a Map<String,Boolean> if (_touchedPanelBoxState == null) { _touchedPanelBoxState = new HashMap<String, Boolean>(); } // Simply put / replace _touchedPanelBoxState.put(clientId, state); } So that's the first part, we now have a record of every panelBox that the user has touched. So what do we do when the Collapse All or Expand All buttons are pressed? Here we do some JavaScript magic. Basically for each clientID that we have stored away, we issue a client side disclosure event from JavaScript - just as if the user had gone back and changed it manually. So here's the Collapse All button action: public String CloseAllAction() { submitDiscloseOverride(pbState.getTouchedClientIds(true), false); _uiManager.closeAllBoxes(); return null; }  The _uiManager.closeAllBoxes() method is just manipulating the master-state that all of the panelBoxes are bound to using EL. The interesting bit though is the line:  submitDiscloseOverride(pbState.getTouchedClientIds(true), false); To break that down, the first part is a call to that viewScoped state holder to ask for a list of clientIDs that need to be "tweaked": public String getTouchedClientIds(boolean targetState) { StringBuilder sb = new StringBuilder(); if (_touchedPanelBoxState != null && _touchedPanelBoxState.size() > 0) { for (Map.Entry<String, Boolean> entry : _touchedPanelBoxState.entrySet()) { if (entry.getValue() == targetState) { if (sb.length() > 0) { sb.append(','); } sb.append(entry.getKey()); } } } return sb.toString(); } You'll notice that this method only processes those panelBoxes that will be in the wrong state and returns those as a comma separated list. This is then processed by the submitDiscloseOverride() method: private void submitDiscloseOverride(String clientIdList, boolean targetDisclosureState) { if (clientIdList != null && clientIdList.length() > 0) { FacesContext fctx = FacesContext.getCurrentInstance(); StringBuilder script = new StringBuilder(); script.append("overrideDiscloseHandler('"); script.append(clientIdList); script.append("',"); script.append(targetDisclosureState); script.append(");"); Service.getRenderKitService(fctx, ExtendedRenderKitService.class).addScript(fctx, script.toString()); } } This method constructs a JavaScript command to call a routine called overrideDiscloseHandler() in a script attached to the page (using the standard <af:resource> tag). That method parses out the list of clientIDs and sends the correct message to each one: function overrideDiscloseHandler(clientIdList, newState) { AdfLogger.LOGGER.logMessage(AdfLogger.INFO, "Disclosure Hander newState " + newState + " Called with: " + clientIdList); //Parse out the list of clientIds var clientIdArray = clientIdList.split(','); for (var i = 0; i < clientIdArray.length; i++){ var panelBox = flipPanel = AdfPage.PAGE.findComponentByAbsoluteId(clientIdArray[i]); if (panelBox.getComponentType() == "oracle.adf.RichPanelBox"){ panelBox.broadcast(new AdfDisclosureEvent(panelBox, newState)); } }  }  So there you go. You can see how, with a few tweaks the same code could be used for other components with disclosure that might suffer from the same problem, although I'd point out that the behavior I'm working around here us usually desirable. You can download the running example (11.1.2.2) from here. 

    Read the article

  • VNIC - New feature of AK8 - Working with VNICs

    - by Steve Tunstall
    One of the important new features of the AK8 code is the ability to use multiple IP addresses on the same physical network port. This feature is called VNICs, or Virtual NICs. This allows us to no longer "burn" a whole port in a cluster when one cluster peer owns a network port. Traditionally, we have had to leave Net0 empty on controller 2, because it was used for managing controller 1. Vise-versa for Net1 on Controller 1. Then, if you have data going over 10GigE ports, you probably only had half of your ports running at any given time, and the partner 10GigE port on the other controller just sat there, doing nothing, unless the first controller went down. What a waste. Those days are over.  I want to thank and give a big shout-out to our good partner, OnX Enterprise Solutions, for allowing me to come into their lab and play around with their 7320 to do this demo. They let me make a big mess of their lab for the day as I played around with VNICs. If you're looking for a partner who knows Oracle well and can also piece together a solution from multiple vendors to get you what you need, OnX is a good choice. If you would like to talk to your local OnX rep, you can contact Scott Gill at [email protected] and he can point you in the right direction for your area.  Here we go: Here is what your Datalinks window looks like BEFORE you upgrade to AK8. Here's what the same screen looks like after you upgrade. See the new box? So here is my current network setup. I have my 4 physical interfaces setup each with an IP address. If I ping them, no problems.  So I can ping 180, 181, 251, and 252. However, if I try to ping 240, it does not work, as the 240 address is not being used by any of these interfaces, right?Let's change that. Here, I'm going to make a new Datalink by clicking the Datalink "Plus sign" button. I will check the VNIC box and tell it to use igb2, even though another interface is already using it. Now, I will create a new Interface, and choose "v_dl2" for it's datalink. My new network screen looks like this. A few things to take note of here. First, when I click the "igb2" device, it only highlights dl2 and int2. It does not highlight v_dl2 or v_int2.I think it should, but OK, it looks like VNICs don't highlight when you click the device. Second, note how the underscore character in v_dl2 and v_int2 do not seem to show on this screen. You can see it plainly if you go in and edit them, but from here it looks like a space instead of an underscore. Just a cosmetic bug, but something to be aware of. Now, if I click the VNIC datalink "v_dl2", on the other hand, it DOES highlight the device it belongs to, as it should. Seen here: Note that it did not, however, highlight int2 with it, even though int2 is connected to igb2. That's because we clicked v_dl2, which int2 has nothing to do with. So I'm OK with that. So let's try pinging 240 now. Of course, it works great.  So I now make another VNIC, and call it v_dl3 using igb3, and v_int3 with an address of 241. I then setup three shares, using ports 251, 240, and 241.Remember that IP 251 and 240 both are using the same physical port of igb2, and IP 241 is using port igb3. Next, I copy a folder full of stuff over to all three shares at the same time. I have analytics going so I can see the traffic. My top chart is showing the logical interfaces, and the bottom chart is showing the physical ports.Sure enough, look at the igb2 and vnic1 interfaces. They equal the traffic going over the igb2 physical port on the second chart. 
VNIC2, on the other hand, gets igb3 all to itself. This would work the same way with 10Gig or Infiniband ports. You can now have multiple IP addresses and even completely different subnets sharing the same physical ports. You may need to make route table entries for that. This allows us to use all of the ports you paid for with no more waste.  Very, very cool.  One small "bug" I found when doing this. It's really not a bug, it was designed to do this when VNICs were not around. But now that we have NVIC capability, they should probably change this. I've alerted the engineering team about this and they're looking into it, so perhaps it will be fixed in a later code. Here it is. Remember when we made the new VNIC datalink, I specifically said to click on the "Plus Sign" button to create it? I don't always do that. I really like to use the drag-and-drop method to create my datalinks in the network screen.HOWEVER, if you were to do that for building a VNIC, it will mess you up a little. Watch this. Here, I'm dragging igb3 over to make a new datalink. igb3 is already being used by dl3, but I'm going to make this a VNIC, so who cares, right? Well, the ZFSSA does not KNOW you are going to make it a VNIC, now does it? So... it works as designed and REMOVES the igb3 device from the current dl3 datalink in the background. See how it's now missing? At the same time, the dl3 datalink choice is missing from my list of possible VNICs for me to choose from!!!! Hey!!! I wanted to pick dl3. Why isn't it on the list??? Well, it can't be on this list because dl3 no longer has a device associated with it. Bummer for you. When you click cancel, the device is still missing from dl3. The fix is easy. Just edit dl3 by clicking the pencil button, do absolutely nothing, and click "Apply". The device will magically come back. Now, make the VNIC datalink by clicking the "Plus Sign" button. Sure enough, once you check the VNIC box, dl3 is a valid choice. No problem.  That's it for now. Have fun with VNICs.

    Read the article

  • How to Plug a Small Hole in NetBeans JSF (Join Table) Code Generation

    - by MarkH
    I was asked recently to provide an assist with designing and building a small-but-vital application that had at its heart some basic CRUD (Create, Read, Update, & Delete) functionality, built upon an Oracle database, to be accessible from various locations. Working from the stated requirements, I fleshed out the basic application and database designs and, once validated, set out to complete the first iteration for review. Using SQL Developer, I created the requisite tables, indices, and sequences for our first run. One of the tables was a many-to-many join table with three fields: one a primary key for that table, the other two being primary keys for the other tables, represented as foreign keys in the join table. Here is a simplified example of the trio of tables: Once the database was in decent shape, I fired up NetBeans to let it have first shot at the code. NetBeans does a great job of generating a mountain of essential code, saving developers what must be millions of hours of effort each year by building a basic foundation with a few clicks and keystrokes. Lest you think it (or any tool) can do everything for you, however, occasionally something tosses a paper clip into the delicate machinery and makes you open things up to fix them. Join tables apparently qualify.  :-) In the case above, the entity class generated for the join table (New Entity Classes from Database) included an embedded object consisting solely of the two foreign key fields as attributes, in addition to an object referencing each one of the "component" tables. The Create page generated (New JSF Pages from Entity Classes) worked well to a point, but when trying to save, we were greeted with an error: Transaction aborted. Hmm. A quick debugger session later and I'd identified the issue: when trying to persist the new join-table object, the embedded "foreign-keys-only" object still had null values for its two (required value) attributes...even though the embedded table objects had populated key attributes. Here's the simple fix: In the join-table controller class, find the public String create() method. It will look something like this:     public String create() {        try {            getFacade().create(current);            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));            return prepareCreate();        } catch (Exception e) {            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));            return null;        }    } To restore balance to the force, modify the create() method as follows (changes in red):     public String create() {         try {            // Add the next two lines to resolve:            current.getJoinEntityPK().setTbl1id(current.getTbl1().getId().toBigInteger());            current.getJoinEntityPK().setTbl2id(current.getTbl2().getId().toBigInteger());            getFacade().create(current);            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));            return prepareCreate();        } catch (Exception e) {            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));            return null;        }    } I'll be refactoring this code shortly, but for now, it works. Iteration one is complete and being reviewed, and we've met the milestone. Here's to happy endings (and customers)! All the best,Mark

    Read the article

  • Learn SSIS in London 12-14 Sep 2012!

    - by andyleonard
    My friends at TechniTrain , the students, and I had a blast during the March 2012 London delivery of From Zero To SSIS ! We have decided to do it again in September 2012 with my new Learning SSIS 2012 3-day course ! Please find a course outline here . It is difficult to list everything I cover in the course, but the outline hits the high spots. This material grew out of my experiences serving as a consultant on short-term engagements and as a manager (and enterprise ETL architect) for a team of forty...(read more)

    Read the article

  • SQL Server 2008 and 2008 R2 Integration Services - Managing Local Processes Using Script Task

    SQL Server 2008 R2 Integration Services includes a number of predefined tasks that implement common administrative actions to help with data extraction, transformation and loading (ETL). While in a majority of cases they are sufficient to deliver required functionality, there might be situations where an extra level of flexibility is desired.
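    SSIS Script Tasks are written in .NET, but the underlying idea of launching and supervising a local process as part of an ETL step looks roughly like the sketch below. It is shown in Python rather than Script Task C#/VB, and the executable name, arguments and timeout are invented for illustration.

        import subprocess

        # Run a hypothetical external archiving tool before the load, and fail the step if it
        # does not finish cleanly within five minutes -- the kind of control over local
        # processes the article's Script Task approach provides inside a package.
        result = subprocess.run(
            ["archive_extracts", "--source", r"D:\etl\incoming"],
            capture_output=True, text=True, timeout=300)

        if result.returncode != 0:
            raise RuntimeError(f"Pre-load step failed: {result.stderr.strip()}")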

    Read the article
