Search Results

Search found 5685 results on 228 pages for 'encrypted partition'.


  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore?
    If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012, yet they're still largely unknown. Some adoption blockers existed, but columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and columnstore indexes are going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.

    The Win
    Compared to classic indexing—which produced the expected query plan, including partition elimination—the SQL Server 2012 nonclustered columnstore index improved query performance significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details
    The table provides the raw data & summarizes the performance deltas.

                                      Logical Reads (8K pages)   CPU (ms)   Durn (ms)
    Columnstore                                       160,323     20,360       9,786
    Conventional Table & Indexes                    9,053,423    549,608     193,903
    Delta                                                 x56        x27         x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The “Metrics (Delta)” chart expresses these values as a ratio.

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50 and both CPU & duration improved by factors of 20 or more. Subsequent posts in this series document performance enhancements that are even more significant.
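    To make the pattern concrete, here is a minimal, hypothetical sketch of the kind of index and report query involved. The table and column names are illustrative assumptions only, not the actual MSIT SONAR schema.

    -- Hypothetical fact table; names are illustrative, not the MSIT SONAR schema.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactPerfSample
        ON dbo.FactPerfSample (ServerId, CounterId, SampleDate, CounterValue);
    GO

    -- A SONAR-style report query: aggregation over large numbers of rows,
    -- which is exactly the workload that benefits from columnstore scans.
    SELECT ServerId, CounterId, AVG(CounterValue) AS AvgValue
    FROM   dbo.FactPerfSample
    WHERE  SampleDate >= DATEADD(DAY, -30, GETDATE())
    GROUP  BY ServerId, CounterId;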

    Read the article

  • Protecting Cookies: Once and For All

    - by Your DisplayName here!
    Every once in a while you run into a situation where you need to temporarily store data for a user in a web app. You typically have two options here – either store it server-side or put the data into a cookie (if size permits). When you also need web farm compatibility, things become a little more complicated because the data needs to be available on all nodes. In my case I went for a cookie – but I had some requirements:

    - The cookie must be protected from eavesdropping (sent only over SSL) and from client script
    - The cookie must be encrypted and signed to protect it from tampering
    - The cookie might become bigger than 4KB – some sort of overflow mechanism would be nice

    I really didn’t want to implement yet another cookie protection mechanism – this feels wrong and, by the way, can go wrong as well. WIF to the rescue. The session management feature already implements the above requirements, but it is built around de/serializing IClaimsPrincipals into cookies and back. If you go one level deeper, though, you will find the CookieHandler and CookieTransform classes, which contain all the needed functionality.

    public class ProtectedCookie
    {
        private List<CookieTransform> _transforms;
        private ChunkedCookieHandler _handler = new ChunkedCookieHandler();

        // DPAPI protection (single server)
        public ProtectedCookie()
        {
            _transforms = new List<CookieTransform>
            {
                new DeflateCookieTransform(),
                new ProtectedDataCookieTransform()
            };
        }

        // RSA protection (load balanced)
        public ProtectedCookie(X509Certificate2 protectionCertificate)
        {
            _transforms = new List<CookieTransform>
            {
                new DeflateCookieTransform(),
                new RsaSignatureCookieTransform(protectionCertificate),
                new RsaEncryptionCookieTransform(protectionCertificate)
            };
        }

        // custom transform pipeline
        public ProtectedCookie(List<CookieTransform> transforms)
        {
            _transforms = transforms;
        }

        public void Write(string name, string value, DateTime expirationTime)
        {
            byte[] encodedBytes = EncodeCookieValue(value);
            _handler.Write(encodedBytes, name, expirationTime);
        }

        public void Write(string name, string value, DateTime expirationTime, string domain, string path)
        {
            byte[] encodedBytes = EncodeCookieValue(value);
            _handler.Write(encodedBytes, name, path, domain, expirationTime, true, true, HttpContext.Current);
        }

        public string Read(string name)
        {
            var bytes = _handler.Read(name);

            if (bytes == null || bytes.Length == 0)
            {
                return null;
            }

            return DecodeCookieValue(bytes);
        }

        public void Delete(string name)
        {
            _handler.Delete(name);
        }

        protected virtual byte[] EncodeCookieValue(string value)
        {
            var bytes = Encoding.UTF8.GetBytes(value);
            byte[] buffer = bytes;

            foreach (var transform in _transforms)
            {
                buffer = transform.Encode(buffer);
            }

            return buffer;
        }

        protected virtual string DecodeCookieValue(byte[] bytes)
        {
            var buffer = bytes;

            for (int i = _transforms.Count; i > 0; i--)
            {
                buffer = _transforms[i - 1].Decode(buffer);
            }

            return Encoding.UTF8.GetString(buffer);
        }
    }

    HTH
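    As a quick illustration, here is a hedged sketch of how the class above might be used from page or handler code. The cookie name, value, lifetime, and the certificate variable are illustrative assumptions, not part of the original post.

    // Minimal usage sketch (assumes an ASP.NET request context; names are arbitrary examples)
    var cookie = new ProtectedCookie();                    // DPAPI, single server
    // var cookie = new ProtectedCookie(protectionCert);   // RSA, for a web farm (cert loading not shown)

    cookie.Write("TempData", "some state worth protecting", DateTime.UtcNow.AddHours(1));

    string value = cookie.Read("TempData");                // null if the cookie is missing
    cookie.Delete("TempData");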

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 7-13, 2012

    - by Bob Rhubart
    The Top 10 items shared via the OTN ArchBeat Facebook page for the week of October 7-13, 2012.

    OOW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis
    The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting.

    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman
    Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.

    Adaptive ADF/WebCenter template for the iPad | Maiko Rocha
    Oracle Fusion Middleware A-Team member Maiko Rocha responds to a customer request for information about how to create an adaptive iPad template for their WebCenter Portal application, "a specific template to streamline their workflow on the iPad."

    Following the Thread in OSB | Antony Reynolds
    Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post.

    WebCenter Sites Gadget Development Concepts Quickstart | John Brunswick
    What are Gadgets? "At their most basic level they can be thought of as lightweight portlets that run largely on the client side of an architecture," says John Brunswick. "Gadgets provide a cross-platform container to run reusable UI modules that generally expose dynamic information to an end user, allowing for some level of end user customization."

    Oracle Fusion Middleware Security: OAM and OIM 11g Academies
    Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call Academies. "These indexes contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."

    Fusion Applications Technical Tips | Naveen Nahata
    "Setting memory parameters for Admin and Managed servers of various domains in Fusion Applications can be, let us say, a little daunting," says Oracle Fusion Middleware A-Team member Naveen Nahata. "While all this may look complicated and intimidating, it is actually relatively simple once you understand how it all works."

    Updated Agenda for OTN Architect Day Los Angeles (Oct 25)
    In less than two weeks Oracle Architect Day rolls into Los Angeles, with a full slate of sessions devoted to cloud computing, engineered systems, and SOA. Follow the link for the updated event agenda.

    ORCLville: OOW 2012 - A Not So Brief Recap
    Oracle ACE Director Floyd Teter, an Applications & Apps Technology specialist, shares his personal, frank, and extensive recap of Oracle OpenWorld 2012.

    SOA Suite create partition in Enterprise Manager | Peter Paul van de Beek
    "In Oracle SOA Suite 10g, or more specific BPEL 10g, one could group functionality in domains," says Peter Paul van de Beek. "This feature has been away in the early versions of SOA Suite 11g. They have returned in more recent version and can be used for all SCA composites (instead of BPEL only). Nowadays these 10g domains are called partitions."
    Thought for the Day
    "I strive for an architecture from which nothing can be taken away." — Helmut Jahn
    Source: BrainyQuote.com

    Read the article

  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to set up L2TP access and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to the internet or browse web pages.

    I would like to be able to access the VPN and also browse the internet at the same time - I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect). If so, how do I do this? Alternatively, if split tunneling is a pain to set up, then making the connected VPN client have internet access from the ASA WAN IP would be OK.

    Thanks, Chris

    names
    !
    interface Ethernet0/0
     switchport access vlan 2
    !
    interface Ethernet0/1
    !
    interface Vlan1
     nameif inside
     security-level 100
     ip address 192.168.30.1 255.255.255.0
    !
    interface Vlan2
     nameif outside
     security-level 0
     ip address 208.74.158.58 255.255.255.252
    !
    ftp mode passive
    access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
    access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
    access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
    access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
    pager lines 24
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
    icmp unreachable rate-limit 1 burst-size 1
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 192.168.30.0 255.255.255.0
    route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    http server enable
    http 192.168.30.0 255.255.255.0 inside
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
    crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
    crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
    crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
    crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
    crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
    crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
    crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
    crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
    crypto ipsec security-association lifetime seconds 28800
    crypto ipsec security-association lifetime kilobytes 4608000
    crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
    crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
    crypto map outside_map interface outside
    crypto isakmp enable outside
    crypto isakmp policy 10
     authentication pre-share
     encryption 3des
     hash sha
     group 2
     lifetime 86400
    telnet timeout 5
    ssh timeout 5
    console timeout 0
    dhcpd auto_config outside
    !
    threat-detection basic-threat
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    webvpn
    group-policy DefaultRAGroup internal
    group-policy DefaultRAGroup attributes
     dns-server value 192.168.30.3
     vpn-tunnel-protocol l2tp-ipsec
     split-tunnel-policy tunnelspecified
     split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
    username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
    tunnel-group DefaultRAGroup general-attributes
     address-pool LANVPNPOOL
     default-group-policy DefaultRAGroup
    tunnel-group DefaultRAGroup ipsec-attributes
     pre-shared-key *****
    tunnel-group DefaultRAGroup ppp-attributes
     no authentication chap
     authentication ms-chap-v2
    !
    class-map inspection_default
     match default-inspection-traffic
    !
    !
    policy-map type inspect dns preset_dns_map
     parameters
      message-length maximum client auto
      message-length maximum 512
    policy-map global_policy
     class inspection_default
      inspect dns preset_dns_map
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect rsh
      inspect rtsp
      inspect esmtp
      inspect sqlnet
      inspect skinny
      inspect sunrpc
      inspect xdmcp
      inspect sip
      inspect netbios
      inspect tftp
      inspect ip-options
    !
    service-policy global_policy global
    prompt hostname context
    : end

    Read the article

  • obtaining nimbuzz server certificate for nmdecrypt expert in NetMon

    - by lurscher
    I'm using Network Monitor 3.4 with the NMDecrypt expert. I'm opening a Nimbuzz conversation node in the conversation window and I click Expert - NMDecrypt - Run Expert, which brings up a window where I have to add the server certificate. I am not sure how to retrieve the server certificate for the Nimbuzz XMPP chat service. Any idea how to do this? This question is a follow-up to this one.

    Edit, for some background: it might be that this is encrypted with the server pubkey and I cannot retrieve the message unless I debug the native binary and try to intercept the encryption code. I have a test client (using agsXMPP) that is able to connect with Nimbuzz with no problems. The only thing that is not working is adding invisible mode. It seems this is some packet sent from the official client during login which I want to obtain. Any suggestions to try to grab this info would be greatly appreciated. Maybe I should get myself (and learn) IDA Pro?

    This is what I get inspecting the TLS frames in Network Monitor:

    Frame: Number = 81, Captured Frame Length = 769, MediaType = ETHERNET
    + Ethernet: Etype = Internet IP (IPv4), DestinationAddress:[...], SourceAddress:[....]
    + Ipv4: Src = ..., Dest = 192.168.2.101, Next Protocol = TCP, Packet ID = 9939, Total IP Length = 755
    - Tcp: Flags=...AP..., SrcPort=5222, DstPort=3578, PayloadLen=715, Seq=4101074854 - 4101075569, Ack=1127356300, Win=4050 (scale factor 0x0) = 4050
        SrcPort: 5222
        DstPort: 3578
        SequenceNumber: 4101074854 (0xF4716FA6)
        AcknowledgementNumber: 1127356300 (0x4332178C)
      + DataOffset: 80 (0x50)
      + Flags: ...AP...
        Window: 4050 (scale factor 0x0) = 4050
        Checksum: 0x8841, Good
        UrgentPointer: 0 (0x0)
        TCPPayload: SourcePort = 5222, DestinationPort = 3578
    TLSSSLData: Transport Layer Security (TLS) Payload Data
    - TLS: TLS Rec Layer-1 HandShake: Server Hello.; TLS Rec Layer-2 HandShake: Certificate.; TLS Rec Layer-3 HandShake: Server Hello Done.
      - TlsRecordLayer: TLS Rec Layer-1 HandShake:
          ContentType: HandShake:
        - Version: TLS 1.0
            Major: 3 (0x3)
            Minor: 1 (0x1)
          Length: 42 (0x2A)
        - SSLHandshake: SSL HandShake ServerHello(0x02)
            HandShakeType: ServerHello(0x02)
            Length: 38 (0x26)
          - ServerHello: 0x1
            + Version: TLS 1.0
            + RandomBytes:
              SessionIDLength: 0 (0x0)
              TLSCipherSuite: TLS_RSA_WITH_AES_256_CBC_SHA { 0x00, 0x35 }
              CompressionMethod: 0 (0x0)
      - TlsRecordLayer: TLS Rec Layer-2 HandShake:
          ContentType: HandShake:
        - Version: TLS 1.0
            Major: 3 (0x3)
            Minor: 1 (0x1)
          Length: 654 (0x28E)
        - SSLHandshake: SSL HandShake Certificate(0x0B)
            HandShakeType: Certificate(0x0B)
            Length: 650 (0x28A)
          - Cert: 0x1
              CertLength: 647 (0x287)
            - Certificates:
                CertificateLength: 644 (0x284)
              - X509Cert: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL
                + SequenceHeader:
                - TbsCertificate: Issuer: nimbuzz.com,Nimbuzz,NL, Subject: nimbuzz.com,Nimbuzz,NL
                  + SequenceHeader:
                  + Tag0:
                  + Version: (2)
                  + SerialNumber: -1018418383
                  + Signature: Sha1WithRSAEncryption (1.2.840.113549.1.1.5)
                  - Issuer: nimbuzz.com,Nimbuzz,NL
                    - RdnSequence: nimbuzz.com,Nimbuzz,NL
                      + SequenceOfHeader: 0x1
                      + Name: NL
                      + Name: Nimbuzz
                      + Name: nimbuzz.com
                  + Validity: From: 02/22/10 20:22:32 UTC To: 02/20/20 20:22:32 UTC
                  + Subject: nimbuzz.com,Nimbuzz,NL
                  - SubjectPublicKeyInfo: RsaEncryption (1.2.840.113549.1.1.1)
                    + SequenceHeader:
                    + Algorithm: RsaEncryption (1.2.840.113549.1.1.1)
                    - SubjectPublicKey:
                      - AsnBitStringHeader:
                        - AsnId: BitString type (Universal 3)
                          - LowTag:
                              Class: (00......) Universal (0)
                              Type: (..0.....) Primitive
                              TagValue: (...00011) 3
                      - AsnLen: Length = 141, LengthOfLength = 1
                          LengthType: LengthOfLength = 1
                          Length: 141 bytes
                      BitString:
                  + Tag3:
                  + Extensions:
                - SignatureAlgorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5)
                  - SequenceHeader:
                    - AsnId: Sequence and SequenceOf types (Universal 16)
                      + LowTag:
                    - AsnLen: Length = 13, LengthOfLength = 0
                        Length: 13 bytes, LengthOfLength = 0
                  + Algorithm: Sha1WithRSAEncryption (1.2.840.113549.1.1.5)
                  - Parameters: Null Value
                    - Sha1WithRSAEncryption: Null Value
                      + AsnNullHeader:
                - Signature:
                  - AsnBitStringHeader:
                    - AsnId: BitString type (Universal 3)
                      - LowTag:
                          Class: (00......) Universal (0)
                          Type: (..0.....) Primitive
                          TagValue: (...00011) 3
                  - AsnLen: Length = 129, LengthOfLength = 1
                      LengthType: LengthOfLength = 1
                      Length: 129 bytes
                  BitString:
      + TlsRecordLayer: TLS Rec Layer-3 HandShake:

    Read the article

  • Same SELECT used in an INSERT has different execution plan

    - by amacias
    A customer complained that a query and its INSERT counterpart had different execution plans, and of course, the INSERT was slower. First let's look at the SELECT:

    SELECT ua_tr_rundatetime,
           ua_ch_treatmentcode,
           ua_tr_treatmentcode,
           ua_ch_cellid,
           ua_tr_cellid
    FROM   (SELECT DISTINCT CH.treatmentcode AS UA_CH_TREATMENTCODE,
                            CH.cellid        AS UA_CH_CELLID
            FROM    CH,
                    DL
            WHERE  CH.contactdatetime > SYSDATE - 5
                   AND CH.treatmentcode = DL.treatmentcode) CH_CELLS,
           (SELECT DISTINCT T.treatmentcode AS UA_TR_TREATMENTCODE,
                            T.cellid        AS UA_TR_CELLID,
                            T.rundatetime   AS UA_TR_RUNDATETIME
            FROM    T,
                    DL
            WHERE  T.treatmentcode = DL.treatmentcode) TRT_CELLS
    WHERE  CH_CELLS.ua_ch_treatmentcode(+) = TRT_CELLS.ua_tr_treatmentcode;

    The query has 2 DISTINCT subqueries. The execution plan shows one with the DISTINCT Placement transformation applied and not the other. The view in Step 5 has the prefix VW_DTP, which means DISTINCT Placement.

    --------------------------------------------------------------------
    | Id  | Operation                    | Name            | Cost (%CPU)
    --------------------------------------------------------------------
    |   0 | SELECT STATEMENT             |                 |   272K(100)
    |*  1 |  HASH JOIN OUTER             |                 |   272K  (1)
    |   2 |   VIEW                       |                 |  4408   (1)
    |   3 |    HASH UNIQUE               |                 |  4408   (1)
    |*  4 |     HASH JOIN                |                 |  4407   (1)
    |   5 |      VIEW                    | VW_DTP_48BAF62C |  1660   (2)
    |   6 |       HASH UNIQUE            |                 |  1660   (2)
    |   7 |        TABLE ACCESS FULL     | DL              |  1644   (1)
    |   8 |      TABLE ACCESS FULL       | T               |  2744   (1)
    |   9 |   VIEW                       |                 |   267K  (1)
    |  10 |    HASH UNIQUE               |                 |   267K  (1)
    |* 11 |     HASH JOIN                |                 |   267K  (1)
    |  12 |      PARTITION RANGE ITERATOR|                 |   266K  (1)
    |* 13 |       TABLE ACCESS FULL      | CH              |   266K  (1)
    |  14 |      TABLE ACCESS FULL       | DL              |  1644   (1)
    --------------------------------------------------------------------

    Query Block Name / Object Alias (identified by operation id):
    -------------------------------------------------------------
       1 - SEL$1
       2 - SEL$AF418D5F / TRT_CELLS@SEL$1
       3 - SEL$AF418D5F
       5 - SEL$F6AECEDE / VW_DTP_48BAF62C@SEL$48BAF62C
       6 - SEL$F6AECEDE
       7 - SEL$F6AECEDE / DL@SEL$3
       8 - SEL$AF418D5F / T@SEL$3
       9 - SEL$2        / CH_CELLS@SEL$1
      10 - SEL$2
      13 - SEL$2        / CH@SEL$2
      14 - SEL$2        / DL@SEL$2

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - access("CH_CELLS"."UA_CH_TREATMENTCODE"="TRT_CELLS"."UA_TR_TREATMENTCODE")
       4 - access("T"."TREATMENTCODE"="ITEM_1")
      11 - access("CH"."TREATMENTCODE"="DL"."TREATMENTCODE")
      13 - filter("CH"."CONTACTDATETIME">SYSDATE@!-5)

    The outline shows PLACE_DISTINCT(@"SEL$3" "DL"@"SEL$3"), indicating that QB3 is the one that got the transformation.
    Outline Data
    -------------
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
          DB_VERSION('11.2.0.3')
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$2")
          OUTLINE_LEAF(@"SEL$F6AECEDE")
          OUTLINE_LEAF(@"SEL$AF418D5F")
          PLACE_DISTINCT(@"SEL$3" "DL"@"SEL$3")
          OUTLINE_LEAF(@"SEL$1")
          OUTLINE(@"SEL$48BAF62C")
          OUTLINE(@"SEL$3")
          NO_ACCESS(@"SEL$1" "TRT_CELLS"@"SEL$1")
          NO_ACCESS(@"SEL$1" "CH_CELLS"@"SEL$1")
          LEADING(@"SEL$1" "TRT_CELLS"@"SEL$1" "CH_CELLS"@"SEL$1")
          USE_HASH(@"SEL$1" "CH_CELLS"@"SEL$1")
          FULL(@"SEL$2" "CH"@"SEL$2")
          FULL(@"SEL$2" "DL"@"SEL$2")
          LEADING(@"SEL$2" "CH"@"SEL$2" "DL"@"SEL$2")
          USE_HASH(@"SEL$2" "DL"@"SEL$2")
          USE_HASH_AGGREGATION(@"SEL$2")
          NO_ACCESS(@"SEL$AF418D5F" "VW_DTP_48BAF62C"@"SEL$48BAF62C")
          FULL(@"SEL$AF418D5F" "T"@"SEL$3")
          LEADING(@"SEL$AF418D5F" "VW_DTP_48BAF62C"@"SEL$48BAF62C" "T"@"SEL$3")
          USE_HASH(@"SEL$AF418D5F" "T"@"SEL$3")
          USE_HASH_AGGREGATION(@"SEL$AF418D5F")
          FULL(@"SEL$F6AECEDE" "DL"@"SEL$3")
          USE_HASH_AGGREGATION(@"SEL$F6AECEDE")
          END_OUTLINE_DATA
      */

    The 10053 trace shows a cost comparison with and without the transformation. This means the transformation belongs to Cost-Based Query Transformations (CBQT). In SEL$3 the cost of the query block without the transformation is 6659.73 and with the transformation it is 4408.41, so the transformation is kept.

    GBP/DP: Checking validity of GBP/DP for query block SEL$3 (#3)
    DP: Checking validity of distinct placement for query block SEL$3 (#3)
    DP: Using search type: linear
    DP: Considering distinct placement on query block SEL$3 (#3)
    DP: Starting iteration 1, state space = (5) : (0)
    DP: Original query
    DP: Costing query block.
    DP: Updated best state, Cost = 6659.73
    DP: Starting iteration 2, state space = (5) : (1)
    DP: Using DP transformation in this iteration.
    DP: Transformed query
    DP: Costing query block.
    DP: Updated best state, Cost = 4408.41
    DP: Doing DP on the original QB.
    DP: Doing DP on the preserved QB.

    In SEL$2 the cost without the transformation is less than with it, so the transformation is not kept.

    GBP/DP: Checking validity of GBP/DP for query block SEL$2 (#2)
    DP: Checking validity of distinct placement for query block SEL$2 (#2)
    DP: Using search type: linear
    DP: Considering distinct placement on query block SEL$2 (#2)
    DP: Starting iteration 1, state space = (3) : (0)
    DP: Original query
    DP: Costing query block.
    DP: Updated best state, Cost = 267936.93
    DP: Starting iteration 2, state space = (3) : (1)
    DP: Using DP transformation in this iteration.
    DP: Transformed query
    DP: Costing query block.
    DP: Not update best state, Cost = 267951.66

    When an INSERT INTO is added to the same query, the result is a very different execution plan.
    INSERT INTO cc
                (ua_tr_rundatetime,
                 ua_ch_treatmentcode,
                 ua_tr_treatmentcode,
                 ua_ch_cellid,
                 ua_tr_cellid)
    SELECT ua_tr_rundatetime,
           ua_ch_treatmentcode,
           ua_tr_treatmentcode,
           ua_ch_cellid,
           ua_tr_cellid
    FROM   (SELECT DISTINCT CH.treatmentcode AS UA_CH_TREATMENTCODE,
                            CH.cellid        AS UA_CH_CELLID
            FROM    CH,
                    DL
            WHERE  CH.contactdatetime > SYSDATE - 5
                   AND CH.treatmentcode = DL.treatmentcode) CH_CELLS,
           (SELECT DISTINCT T.treatmentcode AS UA_TR_TREATMENTCODE,
                            T.cellid        AS UA_TR_CELLID,
                            T.rundatetime   AS UA_TR_RUNDATETIME
            FROM    T,
                    DL
            WHERE  T.treatmentcode = DL.treatmentcode) TRT_CELLS
    WHERE  CH_CELLS.ua_ch_treatmentcode(+) = TRT_CELLS.ua_tr_treatmentcode;

    ----------------------------------------------------------
    | Id  | Operation                     | Name | Cost (%CPU)
    ----------------------------------------------------------
    |   0 | INSERT STATEMENT              |      |   274K(100)
    |   1 |  LOAD TABLE CONVENTIONAL      |      |
    |*  2 |   HASH JOIN OUTER             |      |   274K  (1)
    |   3 |    VIEW                       |      |  6660   (1)
    |   4 |     SORT UNIQUE               |      |  6660   (1)
    |*  5 |      HASH JOIN                |      |  6659   (1)
    |   6 |       TABLE ACCESS FULL       | DL   |  1644   (1)
    |   7 |       TABLE ACCESS FULL       | T    |  2744   (1)
    |   8 |    VIEW                       |      |   267K  (1)
    |   9 |     SORT UNIQUE               |      |   267K  (1)
    |* 10 |      HASH JOIN                |      |   267K  (1)
    |  11 |       PARTITION RANGE ITERATOR|      |   266K  (1)
    |* 12 |        TABLE ACCESS FULL      | CH   |   266K  (1)
    |  13 |       TABLE ACCESS FULL       | DL   |  1644   (1)
    ----------------------------------------------------------

    Query Block Name / Object Alias (identified by operation id):
    -------------------------------------------------------------
       1 - SEL$1
       3 - SEL$3 / TRT_CELLS@SEL$1
       4 - SEL$3
       6 - SEL$3 / DL@SEL$3
       7 - SEL$3 / T@SEL$3
       8 - SEL$2 / CH_CELLS@SEL$1
       9 - SEL$2
      12 - SEL$2 / CH@SEL$2
      13 - SEL$2 / DL@SEL$2

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2 - access("CH_CELLS"."UA_CH_TREATMENTCODE"="TRT_CELLS"."UA_TR_TREATMENTCODE")
       5 - access("T"."TREATMENTCODE"="DL"."TREATMENTCODE")
      10 - access("CH"."TREATMENTCODE"="DL"."TREATMENTCODE")
      12 - filter("CH"."CONTACTDATETIME">SYSDATE@!-5)

    Outline Data
    -------------
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
          DB_VERSION('11.2.0.3')
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$2")
          OUTLINE_LEAF(@"SEL$3")
          OUTLINE_LEAF(@"SEL$1")
          OUTLINE_LEAF(@"INS$1")
          FULL(@"INS$1" "CC"@"INS$1")
          NO_ACCESS(@"SEL$1" "TRT_CELLS"@"SEL$1")
          NO_ACCESS(@"SEL$1" "CH_CELLS"@"SEL$1")
          LEADING(@"SEL$1" "TRT_CELLS"@"SEL$1" "CH_CELLS"@"SEL$1")
          USE_HASH(@"SEL$1" "CH_CELLS"@"SEL$1")
          FULL(@"SEL$2" "CH"@"SEL$2")
          FULL(@"SEL$2" "DL"@"SEL$2")
          LEADING(@"SEL$2" "CH"@"SEL$2" "DL"@"SEL$2")
          USE_HASH(@"SEL$2" "DL"@"SEL$2")
          USE_HASH_AGGREGATION(@"SEL$2")
          FULL(@"SEL$3" "DL"@"SEL$3")
          FULL(@"SEL$3" "T"@"SEL$3")
          LEADING(@"SEL$3" "DL"@"SEL$3" "T"@"SEL$3")
          USE_HASH(@"SEL$3" "T"@"SEL$3")
          USE_HASH_AGGREGATION(@"SEL$3")
          END_OUTLINE_DATA
      */

    There is no DISTINCT Placement view and no hint. The 10053 trace shows a new legend, "DP: Bypassed: Not SELECT", implying that this transformation is possible only for SELECTs.

    GBP/DP: Checking validity of GBP/DP for query block SEL$3 (#4)
    DP: Checking validity of distinct placement for query block SEL$3 (#4)
    DP: Bypassed: Not SELECT.
    GBP/DP: Checking validity of GBP/DP for query block SEL$2 (#3)
    DP: Checking validity of distinct placement for query block SEL$2 (#3)
    DP: Bypassed: Not SELECT.

    In 12.1 (and hopefully in 11.2.0.4 when released) the restriction on applying CBQT to some DMLs and DDLs (like CTAS) is lifted. This is documented in bug tag note 10013899.8, "Allow CBQT for some DML / DDL". And interestingly enough, it is possible to have a one-off patch in 11.2.0.3.

    SQL> select DESCRIPTION, OPTIMIZER_FEATURE_ENABLE, IS_DEFAULT
      2  from v$system_fix_control where BUGNO='10013899';

    DESCRIPTION
    ----------------------------------------------------------------
    OPTIMIZER_FEATURE_ENABLE  IS_DEFAULT
    ------------------------- ----------
    enable some transformations for DDL and DML statements
    11.2.0.4                           1

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

    Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.

    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information and offered tips, intended for both seasoned and new users, that are meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish. Attendees were encouraged to pursue tips that contained new information for them. All 50 tips can be accessed here.

    Below are several examples of more elaborate tips and a final practical tip on how to get in touch with these folks.

    Tip #1: Using the login Command
    * To execute a remote command with asadmin you must provide the admin's user name and password.
    * The login command allows you to store the login credentials to be reused in subsequent commands.
    * You can be logged into multiple servers (distinguished by host and port).
    Example:
        % asadmin --host ouch login
        Enter admin user name [default: admin]>
        Enter admin password>
        Login information relevant to admin user name [admin]
        for host [ouch] and admin port [4848] stored at
        [/Users/ckasso/.asadminpass] successfully.
        Make sure that this file remains protected.
        Information stored in this file will be used by
        asadmin commands to manage the associated domain.
        Command login executed successfully.

        % asadmin --host ouch list-clusters
        c1 not running
        Command list-clusters executed successfully.

    Tip #4: Using the AS_DEBUG Env Variable
    * Environment variable to control client side debug output.
    * Exposes:
      - command processing info
      - URL used to access the command: http://localhost:4848/__asadmin/uptime
      - Raw response from the server
    Example:
        % export AS_DEBUG=true
        % asadmin uptime
        CLASSPATH= ./../glassfish/modules/admin-cli.jar
        Commands: [uptime]
        asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
        ------- RAW RESPONSE  ---------
         Signature-Version: 1.0
         message: Up 7 mins 10 secs
         milliseconds_value: 430194
         keys: milliseconds
         milliseconds_name: milliseconds
         use-main-children-attribute: false
         exit-code: SUCCESS
        ------- RAW RESPONSE  ---------

    Tip #11: Using Password Aliases
    * Some resources require a password to access (e.g. DB, JMS, etc.).
    * The resource connector is defined in the domain.xml.
    Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:
        <property name="password" value="secretp@ssword"/>
    But company policies do not allow you to store the password in the clear.
    * Use password aliases to avoid storing the password in the domain.xml.
    * Create a password alias:
        % asadmin create-password-alias DB_pw_alias
        Enter the alias password>
        Enter the alias password again>
        Command create-password-alias executed successfully.
    * The password is stored in the domain's encrypted keystore.
    * Now update the password value in the domain.xml:
        <property name="password" value="${ALIAS=DB_pw_alias}"/>

    Tip #21: How to Start GlassFish as a Service
    * Configuring a server to automatically start at boot can be tedious, and each platform does it differently.
    * The create-service command makes this easy:
      Windows: creates a Windows service
      Linux: /etc/init.d script
      Solaris: Service Management Facility (SMF) service
    * Must execute create-service with admin privileges.
    * Can be used for the DAS or instances.
    * Try it first with the --dry-run option.
    * There is an (unsupported) _delete-server.
    Example:
        # asadmin create-service domain1
        The Service was created successfully. Here are the details:
        Name of the service:application/GlassFish/domain1
        Type of the service:Domain
        Configuration location of the service:/work/gf-3.1.2.2/glassfish3/glassfish/domains
        Manifest file location on the system:/var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.
        You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
        * /usr/bin/svcs -a | grep domain1    // status
        * /usr/sbin/svcadm enable domain1    // start
        * /usr/sbin/svcadm disable domain1   // stop
        * /usr/sbin/svccfg delete domain1    // uninstall

    Tip #34: Posting a Command via REST
    * Use wget/curl to execute commands on the DAS.
    Example: Deploying an application
        % curl -s -S \
            -H 'Accept: application/json' -X POST \
            -H 'X-Requested-By: anyvalue' \
            -F id=@/path/to/application.war \
            -F force=true http://localhost:4848/management/domain/applications/application
    * Use @ before a file name to tell curl to send the file's contents.
    * The force option tells GlassFish to force the deployment in case the application is already deployed.

    Tip #46: Upgrading to a Newer Version
    * Upgrade applications and configuration from an earlier version.
    * Upgrade Tool: side-by-side upgrade
      – GUI: asupgrade
      – CLI: asupgrade --c
      – What happens?
        * Copies the older source domain -> target domain directory
        * asadmin start-domain --upgrade
    * Update Tool and pkg: in-place upgrade
      – GUI: updatetool, install all Available Updates
      – CLI: pkg image-update
      – Upgrade the domain:
        * asadmin start-domain --upgrade

    Tip #50: How to reach us?
    * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
    * [email protected]
    * @glassfish
    * facebook.com/glassfish
    * youtube.com/GlassFishVideos
    * blogs.oracle.com/theaquarium

    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta's new book Java EE 6 Pocket Guide.

    Read the article

  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
    We are a datacenter that hosts a SQL Server 2000 environment which provides database services for a product we sell; the product is loaded as a rich-client application at each of our many clients and their workstations. Currently the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials -- since everything is clear-text today and the authentication is weakly encrypted -- and I'm trying to determine the best way to implement SSL on the server while minimizing the impact on the client. A few things, however:

    1) We have our own Windows domain and all our servers are joined to our private domain. Our clients know nothing of our domain.

    2) Typically, our clients connect to our datacenter servers either by: a) using a TCP/IP address, or b) using a DNS name that we publish via internet, via zone transfers from our DNS servers to our customers, or the client can add static HOSTS entries.

    3) From what I understand about enabling encryption, I can go to the Network Utility and select the "encryption" option for the protocol that I wish to encrypt, such as TCP/IP.

    4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed one. I have tested the self-signed, but do have potential issues; I'll explain in a bit. If I go with a third-party cert, such as Verisign or Network Solutions... what kind of certificate do I request? These aren't IIS certificates. When I create a self-signed certificate via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world?

    5) If I create a self-signed certificate, I understand that the "issued to" name has to match the FQDN of the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does this do for my clients when trying to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network... I've also verified that when the self-signed certificate is installed, it has to be in the local personal store for the user account that is running SQL Server. SQL Server will only start if the FQDN matches the "issued to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean I have to have every one of my clients install it to verify?

    6) If I use a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers over their private WAN connection in order to verify the certificate? What do I do about the FQDN? It sounds like they have to use my private domain name -- which is not published -- and can no longer use the one that I set up for them to use?

    7) I plan on upgrading to SQL 2005 soon. Is setup of SSL any easier/better with SQL 2005 than SQL 2000?

    Any help or guidance would be appreciated.

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE: The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here.

    Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below.

    The first thing to do is to open the properties file above and enter the access key and secret:

    access_key=SOMETHING
    secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

    package amazonbuckets;

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import org.netbeans.saas.amazon.AmazonS3Service;
    import org.netbeans.saas.RestResponse;
    import org.w3c.dom.DOMException;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;

    public class AmazonBuckets {

        public static void main(String[] args) {
            try {
                RestResponse result = AmazonS3Service.getBuckets();
                String dataAsString = result.getDataAsString();
                DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                Document doc = dBuilder.parse(
                        new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                NodeList bucketList = doc.getElementsByTagName("Bucket");
                for (int i = 0; i < bucketList.getLength(); i++) {
                    Node node = bucketList.item(i);
                    System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                }
            } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
            }
        }
    }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code after drag/drop into a Java source file. I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets.

    The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.
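    For comparison, here is a small hedged sketch of doing the same thing with the plain AWS SDK for Java rather than the NetBeans-generated service wrapper. This is not the code NetBeans generates; the bucket name and credentials below are placeholders.

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.Bucket;

    public class CreateBucketSketch {

        public static void main(String[] args) {
            // Placeholder credentials; in practice load them from a properties file
            // or the environment, much like the generated code does.
            AmazonS3Client s3 = new AmazonS3Client(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // Create a bucket and then list everything the account owns.
            Bucket created = s3.createBucket("my-example-bucket-name");
            System.out.println("Created: " + created.getName());

            for (Bucket b : s3.listBuckets()) {
                System.out.println("Bucket: " + b.getName());
            }
        }
    }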

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

    A. Functionality Added or Changed:

    - MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low-space condition. mysqlbackup now monitors disk space when running backup commands, and users can now specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.

    - MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.

    B. Bugs Fixed:

    - When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)

    - MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)

    - apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)

    - apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with their KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.

    - If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)

    - After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)

    - The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)

    - When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)

    - A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)

    Please post your questions / comments about Backup in forums.
    Thanks,
    MEB Team
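    As a hedged illustration of the new --on-disk-full option mentioned above, an invocation might look like the sketch below. The connection options, paths, and the chosen action value are assumptions for illustration only, not taken from the release notes.

    # Hypothetical invocation; user, paths and the --on-disk-full action are examples.
    mysqlbackup --user=backup_op --password \
                --backup-dir=/backups/nightly --with-timestamp \
                --on-disk-full=warn \
                backup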

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the consumption of CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical.

    For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
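    For readers who do need to run without multicast, a minimal WKA configuration sketch is shown below, assuming a Coherence 3.7-style tangosol-coherence-override.xml. The host addresses and port are placeholders, not a recommendation, and real deployments typically list a small, stable subset of machines here.

    <!-- tangosol-coherence-override.xml: hypothetical two-node WKA list -->
    <coherence>
      <cluster-config>
        <unicast-listener>
          <well-known-addresses>
            <socket-address id="1">
              <address>10.0.0.11</address>
              <port>8088</port>
            </socket-address>
            <socket-address id="2">
              <address>10.0.0.12</address>
              <port>8088</port>
            </socket-address>
          </well-known-addresses>
        </unicast-listener>
      </cluster-config>
    </coherence>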

    Read the article

  • Windows 7, file properties - Is "date accessed" ALWAYS 100% accurate?

    - by Robert
    Hello, here's the situation: I went on vacation for a couple of weeks, but before I left, I took the harddrive out of my computer and hid it in a different location. Upon coming back on Monday and putting the harddrive back in my computer, I right-clicked on different files to see their properties. Interestingly enough, several files had been accessed during the time I was gone! I right-clicked different files in various locations on the harddrive, and all of these suspect files had been accessed within a certain time range (Sunday, January 09, 2011, approximately between 6:52:16 PM - 7:16:25 PM). Some of them had been accessed at the exact same time--down to the very second. This makes me think that someone must have done a search on my harddrive for certain types of files and then copied all those files to some other medium. The Windows 7 installation on this harddrive is password protected, but NOT encrypted, so they could have easily put the harddrive into an enclosure/toaster to access it from a different computer.

    Of course I did not right-click every single file on my computer, but did so in different folders. For instance, one of the folders I went through has different types of files: .mp3, .prproj, .3gp, .mpg, .wmv, .xmp, .txt with file sizes ranging from 2 KB to 29.7 MB (there is also a sub-folder in this folder which contains only .jpg files); however, of all these different types of files in this folder and its subfolder, all of them had been accessed (including the .jpg files from the sub-folder) EXCEPT the .mp3 files (if it makes any difference, the .mp3 files in this folder range in size from 187 KB to 4881 KB). Additionally, this sub-folder which contained only .jpg files (48 .jpg files to be exact) was not accessed during this time--only the .jpg files within it were accessed (between 6:57:03 PM - 6:57:08 PM).

    I thought that perhaps this was some kind of Windows glitch that was displaying the wrong access date, but then I looked at the "date created" and "date modified" for all of these files in question, and their created/modified dates and times were spot-on correct. My first thought was that someone put the harddrive into an enclosure/toaster and viewed the files; but then I realized that this was impossible because several of the files had been accessed at the same exact time down to the second. So this made me think that the only other way the "date accessed" could have changed would have been if someone copied the files.

    Is there any chance at all whatsoever that this is some kind of Windows glitch or something, or is it a fact that someone was indeed accessing my files (and if someone was accessing my files, am I right about the files in question having been copied)? Is there any other possibility for what could have happened? Do I need to use any kind of forensics tools to further investigate this matter (and if so, which tools), or is there any other way in which I can be certain of what took place in that timeframe the day before I got back? Or is what I see with Windows 7 good enough (i.e. accurate and truthful)?

    Thanks in advance, and please let me know if any other details are required on my part.

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include: domain object caching, service response caching, session state caching, JSR-107, HotCache, and more! Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy!

    What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed.

    How is Coherence different from a NoSQL Database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001, whereas the term "NoSQL" was coined in 2009. Coherence has a key-value data model primarily but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services.

    Can I use Coherence for local caching? Yes, and you get additional features beyond just a java.util.Map: expiration capabilities, size-limitation capabilities, eventing capabilities, etc.

    Are there APIs available for GoldenGate HotCache? It's mostly a black box. You configure it, and it just puts objects into your caches. However, you can treat it as a glass box and use Coherence event interceptors to enhance its behavior - and there are use cases for that.

    Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources. But nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
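
    As a concrete illustration of the "EntryProcessor" feature mentioned in the last answer, here is a minimal sketch; the cache name and counter semantics are invented for the example, not taken from the webcast. Because Coherence serializes all operations against a single key, the read-modify-write below is atomic without any explicit locking:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        // Increments a per-key counter in place, on the cluster member that owns the key.
        public class IncrementProcessor extends AbstractProcessor {
            @Override
            public Object process(InvocableMap.Entry entry) {
                int current = entry.isPresent() ? (Integer) entry.getValue() : 0;
                entry.setValue(current + 1);
                return current + 1;
            }
        }

        // Usage (hypothetical cache name "item-counters"):
        // NamedCache cache = CacheFactory.getCache("item-counters");
        // Object newCount = cache.invoke("some-key", new IncrementProcessor());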

    Read the article

  • Hive Based Registry in Flash

    - by Psychic
    To start with, I'll say I've read the post here and I'm still having trouble. I'm trying to create a CE6 image with a hive-based registry that actually stores results through a reboot. I've ticked the hive settings in the catalog items. In common.reg, I've set the location of the hive ([HKEY_LOCAL_MACHINE\init\BootVars] "SystemHive") to "Hard Drive\Registry" (note: the flash shows up as a device called "Hard Drive"). In common.reg, I've also set "Flags"=dword:3 in the same place to get the device manager loaded along with the storage manager. I've verified that these settings are wrapped in "; HIVE BOOT SECTION".

    This is where it starts to fall over. It all compiles fine, but when the target system boots, I get a directory called "Hard Disk" where a registry is put, a device named "Hard Disk2" where the permanent flash is, and any changes made to the registry are lost on a reboot.

    What am I still missing? Why is the registry not being stored on the flash? Strangely, if I create a random file/directory in the registry directory, it is still there after a reboot, so even though this directory isn't on the other partition (where I tried to put it), it does appear to be permanent. If it is permanent, why don't registry settings save (i.e. Ethernet adapter IP addresses)? I'm not using any specific profiles, so I'm at a loss as to what the last step is to make this hive registry a permanent store.

    Read the article

  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to avoid using one exception type to cover the same problem in different contexts (e.g. a parameter that is null in system code, which may be affected by an external factor, versus an empty shopping basket). However, the partition between system and business-domain code and using different exception types seems very obvious and doesn't seem to make the most of custom exceptions. Related to this, if custom exceptions cover the business exceptions, I could also find all the places that are sources of exceptions at the business-domain level using "Find all references". Is it worth adding exceptions if you check the arguments in a method for being null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could make the argument null after being checked anyway?

    2) What does it mean when people say exceptions should not be used to control the flow of programs, and why not? I assume this means something like: if (exceptionVariable != null) { }. Is it generally good practice to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled by another coder?
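
    For what it's worth, here is a small sketch of the system-versus-business split being asked about (written in Java, though the same shape applies in C#; the class and method names are invented for illustration):

        // Hypothetical business-domain exception: the type itself names the business rule.
        class EmptyBasketException extends RuntimeException {
            EmptyBasketException(String message) {
                super(message);
            }
        }

        interface Basket {
            boolean isEmpty();
        }

        class CheckoutService {
            void checkout(Basket basket) {
                // System-level precondition: a built-in exception type is usually enough here.
                if (basket == null) {
                    throw new IllegalArgumentException("basket must not be null");
                }
                // Business rule: a custom type makes the intent explicit and lets
                // "Find all references" locate every business-level throw site.
                if (basket.isEmpty()) {
                    throw new EmptyBasketException("cannot check out an empty basket");
                }
                // ... proceed with checkout ...
            }
        }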

    Read the article

  • People not respecting good practices at the workplace

    - by VexXtreme
    Hi. There are some major issues in my company regarding practices, procedures and methodologies. First of all, we're a small firm and there are only 3-4 developers, one of whom is our boss; he isn't really a programmer, he just chimes in now and then and tries to code some simple things. The biggest problems are:

    Major cowboy coding and lack of methodologies. I've tried explaining to everyone the benefits of TDD and unit testing, but I only got weird looks as if I'm talking nonsense. Even the boss gave me a reaction along the lines of "why do we need that? it's just unnecessary overhead and a waste of time".

    Nobody uses design patterns. I have to tell people not to write business logic in code-behind, I have to remind them not to hardcode concrete implementations and dependencies into classes, et cetera. I often feel like a nazi because of this, and people think I'm enforcing unnecessary policies and use of design patterns.

    The biggest problem of all is that people don't even respect common-sense security policies. I've noticed that college students who work on tech support use our continuous integration and source control server as a dump to store music, videos and TV series they download from torrents. You can imagine the horror when I realized that most of the partition reserved for source control backups was used by entire seasons of TV series and movies. Our development server isn't even connected to a UPS or surge protection; it's just plugged straight into the wall outlet. I asked the boss to buy surge protection, but he said it's unnecessary.

    All in all, I like working here because the atmosphere is very relaxed, the money is good and we're all like a family (so don't advise me to quit), but I simply don't know how to explain to people that they need to stick to some standards and good practices in the IT industry and that they can't behave so irresponsibly. Thanks for the advice.

    Read the article

  • What do these error messages mean?

    - by flybirdtt
    03-20 11:41:28.103: ERROR/vold(550): Error opening switch name path '/sys/class/switch/test2' (No such file or directory)
    03-20 11:41:28.103: ERROR/vold(550): Error bootstrapping switch '/sys/class/switch/test2' (m)
    03-20 11:41:28.103: ERROR/vold(550): Error opening switch name path '/sys/class/switch/test' (No such file or directory)
    03-20 11:41:28.103: ERROR/vold(550): Error bootstrapping switch '/sys/class/switch/test' (m)
    03-20 11:41:28.153: ERROR/flash_image(557): can't find recovery partition
    03-20 11:41:51.493: ERROR/MemoryHeapBase(585): error opening /dev/pmem: No such file or directory
    03-20 11:41:51.503: ERROR/SurfaceFlinger(585): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake
    03-20 11:41:51.572: ERROR/GLLogger(585): couldn't load library (Cannot find library)
    03-20 11:41:52.144: ERROR/GLLogger(585): couldn't load library (Cannot find library)
    03-20 11:41:59.913: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/usb/online'
    03-20 11:41:59.923: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/battery/batt_vol'
    03-20 11:41:59.943: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/battery/batt_temp'
    03-20 11:42:00.532: ERROR/EventHub(585): could not get driver version for /dev/input/mouse0, Not a typewriter
    03-20 11:42:00.563: ERROR/EventHub(585): could not get driver version for /dev/input/mice, Not a typewriter
    03-20 11:42:00.793: ERROR/System(585): Failure starting core service
    03-20 11:42:00.793: ERROR/System(585): java.lang.SecurityException
    03-20 11:42:00.793: ERROR/System(585): at android.os.BinderProxy.transact(Native Method)
    03-20 11:42:00.793: ERROR/System(585): at android.os.ServiceManagerProxy.addService(ServiceManagerNative.java:146)
    03-20 11:42:00.793: ERROR/System(585): at android.os.ServiceManager.addService(ServiceManager.java:72)
    03-20 11:42:00.793: ERROR/System(585): at com.android.server.ServerThread.run(SystemServer.java:163)
    03-20 11:42:00.823: ERROR/AndroidRuntime(585): Crash logging skipped, no checkin service
    03-20 11:42:02.993: ERROR/LockPatternKeyguardView(585): Failed to bind to GLS while checking for account
    03-20 11:42:10.204: ERROR/ApplicationContext(585): Couldn't create directory for SharedPreferences file shared_prefs/wallpaper-hints.xml
    03-20 11:42:11.539: ERROR/ActivityThread(624): Failed to find provider info for android.server.checkin
    03-20 11:42:12.889: ERROR/ActivityThread(624): Failed to find provider info for android.server.checkin
    03-20 11:42:13.048: ERROR/ActivityThread(624): Failed to find provider info for android.server.checkin

    Read the article

  • How can parallelism affect the number of results?

    - by spender
    I have a fairly complex query that looks something like this:

        create table Items(SomeOtherTableID int, SomeField int)
        create table SomeOtherTable(Id int, GroupID int)

        with cte1 as (
            select SomeOtherTableID, COUNT(*) SubItemCount
            from Items t
            where t.SomeField is not null
            group by SomeOtherTableID
        ), cte2 as (
            select tc.SomeOtherTableID,
                   ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
            from Items t
            inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
            inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
            where t.SomeField is not null
        ), cte3 as (
            select SomeOtherTableID from cte2 where SubItemRank = 1
        )
        select *
        from cte3 t1
        inner join cte3 t2 on t1.SomeOtherTableID < t2.SomeOtherTableID
        option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line: option (maxdop 1). Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531, results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?

    Read the article

  • Merge replication server side foreign key violation from unpublished table

    - by Reiste
    We are using SQL Server 2005 Merge Replication with SQL CE 3.5 clients. We are using partitions with filtering for the separate subscriptions, and nHibernate for the ORM mapping. There is automatic ID range management from SQL Server for the subscriptions. We have a table, Item, and a table with a foreign key to Item - ItemHistory. Both of these are replicated down, filtered according to the subscription. Item has a column called UserId, and is filtered per subscription with this filter: WHERE UserId IN (SELECT... [complicated subselect]...). ItemHistory hangs off Item in the publication filter articles. On the server, we have a table ItemHistoryExport, which has a foreign key to ItemHistory. ItemHistoryExport is not published. Entries in the Item and ItemHistory tables are never deleted, on the server or the client. However, the "ownership" of items (and hence their ItemHistories) MAY change, which causes them to be moved from one client subscription/partition to another from time to time. When we sync, we occasionally get the following error:

        A row delete at '48269404 - 4108383dbb11' could not be propagated to 'MyServer\MyInstance.MyDatabase'.
        This failure can be caused by a constraint violation. The DELETE statement conflicted with the
        REFERENCE constraint "FK_ItemHistoryExport_ItemHistory". The conflict occurred in database
        "MyDatabase", table "dbo.ItemHistoryExport", column 'ItemHistoryId'.

    Can anyone help us understand why this happens? There shouldn't ever be a delete happening on the server side.

    Read the article

  • Problem with stack-based implementation of function 0x42 of int 0x13

    - by IceCoder
    I'm trying a new approach to int 0x13 (just to learn more about the way the system works): using the stack to create a DAP. Assuming that DL contains the disk number, AX contains the address of the bootable entry in the partition table, DS is updated to the right segment and the stack is correctly set, this is the code:

        push DWORD 0x00000000
        add ax, 0x0008
        mov si, ax
        push DWORD [ds:(si)]
        push DWORD 0x00007c00
        push WORD 0x0001
        push WORD 0x0010
        push ss
        pop ds
        mov si, sp
        mov sp, bp
        mov ah, 0x42
        int 0x13

    As you can see, I push the DAP structure onto the stack, update DS:SI to point to it (DL is already set), then set AH to 0x42 and call int 0x13. The result is error 0x01 in AH and, obviously, CF set. No sectors are transferred. I checked the stack trace endlessly and it is ok, and the partition table is ok too. I cannot figure out what I'm missing. This is the stack trace portion of the disk address packet:

        0x000079ea: 10 00          adc %al,(%bx,%si)
        0x000079ec: 01 00          add %ax,(%bx,%si)
        0x000079ee: 00 7c 00       add %bh,0x0(%si)
        0x000079f1: 00 00          add %al,(%bx,%si)
        0x000079f3: 08 00          or %al,(%bx,%si)
        0x000079f5: 00 00          add %al,(%bx,%si)
        0x000079f7: 00 00          add %al,(%bx,%si)
        0x000079f9: 00 a0 07 be    add %ah,-0x41f9(%bx,%si)

    I'm using the latest version of qemu and trying to read from the hard drive (0x80). I have also tried a 4-byte alignment for the structure with the same result (CF 1, AH 0x01), and the extensions are present.
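
    For reference, here is a small sketch (written in Java purely to make the byte layout visible) of how the 16-byte disk address packet is expected to be laid out in memory; the sector count, buffer address and LBA values below are illustrative, not taken from the question:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class DapLayout {
            public static void main(String[] args) {
                // 16-byte INT 13h extensions Disk Address Packet, little-endian.
                ByteBuffer dap = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
                dap.put((byte) 0x10);              // offset 0: size of packet (16 bytes)
                dap.put((byte) 0x00);              // offset 1: reserved, must be zero
                dap.putShort((short) 0x0001);      // offset 2: number of sectors to transfer
                dap.putShort((short) 0x7C00);      // offset 4: buffer offset
                dap.putShort((short) 0x0000);      // offset 6: buffer segment
                dap.putLong(0x0000000000000800L);  // offset 8: starting 64-bit LBA (example value)
                // Dump the packet bytes in the order they should appear in memory.
                for (byte b : dap.array()) {
                    System.out.printf("%02x ", b & 0xff);
                }
                System.out.println();
            }
        }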

    Read the article

  • Why are my attempts to open a file using open for writing failing? Ada 95

    - by mat_geek
    When I attempt to open a file to write to, I get an Ada.IO_Exceptions.Name_Error. The procedure call is Ada.Text_IO.Open. The file name is "C:\CC_TEST_LOG.TXT". This file does not exist. This is on Windows XP on an NTFS partition. The user has permissions to create and write to the directory. The filename is well under the WIN32 max path length.

        name_2 : String := "C:\CC_TEST_LOG.TXT";

        if name_2'last > name_2'first then
           begin
              Ada.Text_IO.Open(file, Ada.Text_IO.Out_File, name_2);
              Ada.Text_IO.Put_Line(
                 "CC_Test_Utils: LogFile: ERROR: Open, File " & name_2);
              return;
           exception
              when The_Error : others =>
                 Ada.Text_IO.Put_Line(
                    "CC_Test_Utils: LogFile: ERROR: Open Failed; " &
                    Ada.Exceptions.Exception_Name(The_Error) &
                    ", File " & name_2);
           end;
        end if;

    Read the article

  • C compiler cannot create executables when trying to build Binutils

    - by Koning Baard XIV
    I am trying to build Linux From Scratch, and now I am at chapter 5.4, which tells me how to build Binutils. I have binutils 2.20's source code, but when I try to build it:

        time { ./binutils-2.20/configure --target=$LFS_TGT --prefix=/tools --disable-nls --disable-werror ; }

    it gives me an error:

        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-lfs-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether ln works... yes
        checking whether ln -s works... yes
        checking for a sed that does not truncate output... /bin/sed
        checking for gawk... gawk
        checking for gcc... GCC
        checking for C compiler default output file name...
        configure: error: in `/media/LFS':
        configure: error: C compiler cannot create executables
        See `config.log' for more details.

    You can see my config.log at pastebin.com: http://pastebin.com/hX7v5KLn

    I have just installed Ubuntu 10.04, and reinstalled GCC and installed G++. Also, the build is done by a non-root, non-admin user called 'lfs' (which is also described in Linux From Scratch), and on a different partition than where the system is installed. Can anyone help me? Thanks

    Read the article

  • On Windows 7: Same path but Explorer & Java see different files than PowerShell

    - by Ukko
    Submitted for your approval, a story about a poor little Java process trapped in the twilight zone... Before I throw up my hands and just say that the NTFS partition is borked, is there any rational explanation for what I am seeing? I have a file with a path like this: C:\Program Files\Company\product\config\file.xml. I am reading this file after an upgrade and seeing something wonky. Eclipse and my Java app are still seeing the old version of this file while some other programs see the new version.

    The test that convinced me it was not my fat finger that caused the problem was this: in Explorer I entered the above path and Explorer displayed the old version of the file. Forcing Explorer to reload via Ctrl-F5 still yields the old version. This is the behavior I get in Java. Now in PowerShell I enter: more "C:\Program Files\Company\product\config\file.xml" (I cut and paste the path from Explorer to make sure I am not screwing anything up), and it shows me the new version of the file.

    So, for the programming aspect of this, is there a cache or some system component that would be storing this stale reference? Am I responsible for checking or resetting that for some class of files? I can imagine somebody being "creative" in how xml files are processed to provide some bell or whistle. But it could be a case of just being borked. Any insights appreciated... Thanks!

    Read the article

  • Design Decision - Scaling out a web-based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort.

    To explain, I'll use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the Page Controllers. Once the traffic increases, a service such as authenticating users (or other services related to user entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls. For example, I am referring to System.IO.Pipes.NamedPipeStreamServer to start with, moving on to a TcpListener later on. If we have a proper design that encapsulates the approach described above, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great.

    Note: Database scaling is definitely a second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
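
    As a rough illustration of the encapsulation described above (sketched in Java here, though the same shape applies with .NET named pipes versus a TcpListener; all class and host names are invented), callers depend only on a service interface, and the transport behind it can be swapped as the system grows:

        // Callers (e.g. page controllers) depend only on this contract, never on the transport.
        interface UserService {
            boolean authenticateUser(String userName, String password);
        }

        // Phase 1: in-process implementation; no network hop while the user count is small.
        class LocalUserService implements UserService {
            public boolean authenticateUser(String userName, String password) {
                // ... validate against the local data store ...
                return true; // placeholder result
            }
        }

        // Phase 2: remote implementation with the same contract; TCP/RPC underneath.
        class RemoteUserService implements UserService {
            private final String host;
            private final int port;

            RemoteUserService(String host, int port) {
                this.host = host;
                this.port = port;
            }

            public boolean authenticateUser(String userName, String password) {
                // ... serialize the request, send it to host:port, read the response ...
                return false; // placeholder result
            }
        }

        // Switching from phase 1 to phase 2 is then a wiring/configuration change,
        // not a change to any caller:
        // UserService users = new LocalUserService();                 // today
        // UserService users = new RemoteUserService("auth1", 9000);   // after scaling out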

    Read the article

< Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >