Search Results

Search found 5520 results on 221 pages for 'compound primary'.

Page 52 of 221

  • Ubuntu 12.04 still slow at mounting internal filesystem

    - by Matthew Goson
    I'm using a Toshiba laptop with this configuration: CPU: Core i5, 2.4GHz; RAM: 4GB; Graphics card: Intel; Hard disk: 500GB SATA. I installed Ubuntu 12.04 64-bit and hit the same issue as in Very slow boot due to mounting filesystem. I tried all the suggestions there, but the slow boot issue is still here. Here's a part of my dmesg: [ 2.041015] usbhid: USB HID core driver [ 2.101378] usb 1-1.6: new full-speed USB device number 5 using ehci_hcd [ 2.137980] atl1c 0000:04:00.0: version 1.0.1.0-NAPI [ 2.779080] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 22.822597] udevd[381]: starting version 175 [ 22.837954] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 22.850837] lp: driver loaded but no devices found [ 23.003822] Adding 7079096k swap on /dev/sda7. Priority:-1 extents:1 across:7079096k [ 23.407915] mei: module is from the staging directory, the quality is unknown, you have been warned. [ 23.408153] mei 0000:00:16.0: PCI->APIC IRQ transform: INT A -> IRQ 16 [ 23.408160] mei 0000:00:16.0: setting latency timer to 64 [ 23.408211] mei 0000:00:16.0: irq 44 for MSI/MSI-X [ 23.433196] [drm] Initialized drm 1.1.0 20060810 Additional information: my sda1 is a primary NTFS partition, sda2 is a primary ext4 partition which I installed Ubuntu onto. Other partitions are inside an extended partition.

    Read the article

  • Deal Registration Moves to Oracle Partner Store (OPS)- The Four Action Items for Partners

    - by Richard Lefebvre
    In November 2012, Oracle’s partner deal registration process will move to the Oracle Partner Store (OPS). During this time, OPS will become the single source for partners to register deals, obtain deal status, and place orders. What will partners need to do? 1. Request an OPS Account – If your company is new to OPS the first thing you need to do is request an account (if your company already has an OPS account, go to step 2). It’s important to have the person who will be managing your OPS account make this request as soon as possible. They will be set up as your company’s primary administrator. 2. Set-Up Users in OPS – Setup of users can start immediately, and will be handled by the primary OPS administrator at your company. The process is simple, but all existing users of Global PRM (Partner Relationship Management) deal registration will need to be set up in OPS before November 14, 2012.  3. Review/Action Any Registrations Pending Submission in PRM – Prior to November 14, 2012, all pending registrations should be submitted in the existing PRM system. It is important that this step is complete so registrations will not need to be re-entered when the system is moved to OPS on November 17, 2012. Registrations pending submission are easily identified on the registration listing screen with either “Incomplete” or “Returned to Partner” in the status column.  4. Attend Training – Oracle will offer multiple VAD and VAR training sessions beginning October 29, 2012. It is recommended that all users attend one of these important sessions.  Detailed instructions on each of these tasks can be found on the OPS Information Page. OPS will offer several enhancements to the deal registration process, including: Simplified Registration Form Easier Product Selection Expanded Browser Support Shared Registration Visibility Between VAD and VAR Pre-set Customer Selection From Partner Ordering Base Best Regards, Titina Ott Vice President, Worldwide A&C Systems And Business Processes 

    Read the article

  • Impact of Server Failure on Coherence Request Processing

    - by jpurdy
    Requests against a given cache server may be temporarily blocked for several seconds following the failure of other cluster members. This may cause issues for applications that can not tolerate multi-second response times even during failover processing (ignoring for the moment that in practice there are a variety of issues that make such absolute guarantees challenging even when there are no server failures). In general, Coherence is designed around the principle that failures in one member should not affect the rest of the cluster if at all possible. However, it's obvious that if that failed member was managing a piece of state that another member depends on, the second member will need to wait until a new member assumes responsibility for managing that state. This transfer of responsibility is (as of Coherence 3.7) performed by the primary service thread for each cache service. The finest possible granularity for transferring responsibility is a single partition. So the question becomes how to minimize the time spent processing each partition. Here are some optimizations that may reduce this period: Reduce the size of each partition (by increasing the partition count) Increase the number of JVMs across the cluster (increasing the total number of primary service threads) Increase the number of CPUs across the cluster (making sure that each JVM has a CPU core when needed) Re-evaluate the set of configured indexes (as these will need to be rebuilt when a partition moves) Make sure that the backing map is as fast as possible (in most cases this means running on-heap) Make sure that the cluster is running on hardware with fast CPU cores (since the partition processing is single-threaded) As always, proper testing is required to make sure that configuration changes have the desired effect (and also to quantify that effect).

    Read the article

  • Installation on SSD with Windows preinstalled

    - by ebbot
    I bought a laptop with this fancy SSD drive, fancy new UEFI and so on. I figured at first: Windows out, Ubuntu in, but after going through 3 DOA laptops in one day I realized that keeping Windows could come in handy. So dual boot it is. And this is what I've got: Disk 1 - 500 GB HD: 300 MB that Windoze only says is "Healthy", don't know what it's for; 600 MB "Healthy (EFI partition)"; 186.30 GB NTFS "OS (C:)" "Healthy (Boot, Page File, Crash Dump, Primary Partition)"; 258.45 GB NTFS "Data (D:)" "Healthy"; 20.00 GB "Healthy (Recovery Partition)". Disk 2 - 24 GB SSD: 4.00 GB "Healthy (OEM Partition)"; 18.36 GB "Healthy (Primary Partition)". So I'm not sure what the first partition on each drive does (the 300 MB on the HD and the OEM Partition on the SSD), nor do I know what Data (D:) is for. I think the 2nd partition on the SSD is for some speedup of Windoze. I'm debating whether I should shrink the OS (C:) drive to around 120 GB or so, clear the Data (D:) partition, and use the whole SSD for Ubuntu. That would leave me 24 GB for e.g. / on the SSD and some 320 GB on the HD for /home and swap. Is this a reasonable setup? Do I need to configure fstab for the SSD differently to a HD?

    Read the article

  • Step by Step: Remove/Sideline Preinstalled Windows 8, Install Ubuntu

    - by user207562
    I want Ubuntu as my primary/only operating system. The computer came with presumptively pre-installed Windows 8 (touch screen). Help. New Dell Inspiron 15R. I cannot install Ubuntu 12.04-03 or LTS. I need/request step-by-step instructions to remove Windows 8 or sideline Windows 8, i.e. the BIOS settings I need to know: what to have; what settings; when to apply; boot manager settings; UEFI; etc. This should be easy, but I am mired in the Herpes that is Microsoft. I end up with a presumed dual boot that will not access Ubuntu. For example: Step 1: Turn on computer. Step 2: F# to change ... to ... at the Dell prompt... I want to use Ubuntu as my primary operating system or my only operating system.

    Read the article

  • Quit job for another but current employer doesn't want to lose me. Would it be a bad idea to stay?

    - by Confused
    So I've handed in my notice at my current job as I've been offered a job at another company. However, my current employer doesn't want to lose me and they want to know what it would take for me to stay. I mostly enjoy working there, so I'd be open to negotiation. The new job was an unexpected opportunity that presented itself. The things I'd be looking for are: Better computers for developers Opportunity to work from home occasionally Improved internet access (e.g. able to download software, no keyword blocking) Chance to work on technologies other than my primary one (we do have projects on other technologies) Pay increase (though this isn't my primary motivation) I found out that some of these were already in progress when I handed in my notice :( Is it ever a good idea to remain at a company after you've resigned? What if they meet all my conditions and alter my contract accordingly? Will I burn my bridges at the new company (I've already told them I'd accept their offer)? Update: Thanks for the answers. Quite a mixed bag, which was interesting. Anyway, just so you know, I've chosen to stay at my current company. So far, it definitely feels like the right decision. Guess I won't know for a few months whether it was, though.

    Read the article

  • Poor performance after reinstalling to a USB drive

    - by anonymous
    I am currently running Ubuntu 11.10 off of a SanDisk 16GB USB. I installed it using a Live USB with the following partition configuration: 6GB Primary /dos FAT32 5GB Logical / ext4 5GB Logical /home ext4 I don't have a hard disk, and don't see myself getting one anytime soon. I rely solely on this 16GB, and two other 4GB USBs, one of which I used as the LiveUSB. I bring the USBs around, and even use the install at work. I previously used an install that used a swap file. It functioned fine for the most part, save for a few slow moments, but I came across this post, and it got me thinking about my USB's life, so I reinstalled with the current config. My problem now is that it is slower. Applications like Firefox would hang more often. In my previous setup (the automatically partitioned setup), Firefox would start hanging if I was running an unzip or install task on the same partition as /. Now however, it would hang if I had another window open i.e. the system settings window. My guess is that it may have something to do with the swap file or the install being on a Logical partition rather than a Primary partition, but I don't know. Any insight as to why it has slowed down?

    Read the article

  • Video quality too bad while playing (any) videos in Intel GM965/GL960 Integrated Graphics Controller Ubuntu 12.04

    - by Sukhdev
    I have searched blogs and forums and installed several drivers, but can't find a solution that provides video quality equivalent to that of Windows 7. Kindly help. Video quality, especially color, is very poor when playing videos with any media player. Configuration details: Ubuntu 12.04, Intel Corporation Mobile GM965/GL960 Integrated. The results of the following commands are a) sudo lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c) b) find /dev -group video /dev/fb0 /dev/dri/card0 /dev/dri/controlD64 /dev/agpgart c) glxinfo | grep -i vendor server glx vendor string: SGI client glx vendor string: ATI OpenGL vendor string: Tungsten Graphics, Inc d) sudo lshw -C video *-display:0 description: VGA compatible controller product: Mobile GM965/GL960 Integrated Graphics Controller (primary) vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 0c width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:44 memory:fea00000-feafffff memory:e0000000-efffffff ioport:efe8(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile GM965/GL960 Integrated Graphics Controller (secondary) vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 0c width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:feb00000-febfffff I have spent days installing various drivers and then uninstalling them, but can't come up with a solution. Please help.

    Read the article

  • GoldenGate Replicat and the HANDLECOLLISIONS Parameter

    - by Liu Maclean
    HANDLECOLLISIONS is a GoldenGate Replicat parameter that tells the Replicat how to resolve the collisions it meets while applying trail data on the target: duplicate records (the row to be inserted already exists) and missing records (the row to be updated or deleted does not exist). Without it, such errors are handled by the REPERROR rules, which typically abend the Replicat or send the record to the discard file. With HANDLECOLLISIONS the behavior is: a delete for a row that is missing on the target (missing delete) is written to the discard file; an update for a row that is missing on the target (missing update) is converted into an INSERT where possible (columns not captured in the trail come through as NULL), and a missing update that cannot be converted is written to the discard file; an insert for a row that already exists on the target is converted by the Replicat into an UPDATE of all columns. Case 1: a delete that is missing on the target (missing delete): C:\Users\ML>sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 18 13:38:03 2012 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> conn sender/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> insert into handlec values(3,2); 1 row created. SQL> insert into handlec values(4,2); 1 row created. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 3 2 4 2 target : SQL> conn receiver/oracle Connected. SQL> create table handlec(t1 int primary key,t2 int); Table created. SQL> insert into handlec values(1,2); 1 row created. SQL> commit; SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 SQL> GGSCI (XIANGBLI-CN) 1> alter extract load2 , begin now EXTRACT altered. GGSCI (XIANGBLI-CN) 4> alter replicat rep2, begin now REPLICAT altered. GGSCI (XIANGBLI-CN) 13> add trandata sender.* Logging of supplemental redo data enabled for table SENDER.HANDLEC. Logging of supplemental redo log data is already enabled for table SENDER.TV. GGSCI (XIANGBLI-CN) 14> start mgr MGR is already running. GGSCI (XIANGBLI-CN) 15> start er * Sending START request to MANAGER ... EXTRACT LOAD2 starting Sending START request to MANAGER ... REPLICAT REP2 starting GGSCI (XIANGBLI-CN) 16> info all Program Status Group Lag at Chkpt Time Since Chkpt MANAGER RUNNING EXTRACT RUNNING LOAD2 00:00:00 00:00:01 REPLICAT RUNNING REP2 00:00:00 00:00:08 *** Now delete a row on the SOURCE that does not exist on the TARGET: SQL> delete handlec where t1=3; 1 row deleted. SQL> commit; Commit complete. Without HANDLECOLLISIONS the Replicat hits SQL error 1403 and abends: 2012-09-18 13:45:48 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL ). 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3. 2012-09-18 13:45:48 WARNING OGG-01154 SQL error 1403 mapping SENDER.HANDLEC to RECEIVER.HANDLEC OCI Error ORA-01403: no data found, SQL . 2012-09-18 13:45:48 WARNING OGG-01003 Repositioning to rba 1091 in seqno 3.
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:45:48 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. *********************************************************************** * ** Run Time Statistics ** * *********************************************************************** Last record for the last committed transaction is the following: ___________________________________________________________________ Trail name : D:\ogg\V34342-01\ex\ze000003 Hdr-Ind : E (x45) Partition : . (x04) UndoFlag : . (x00) BeforeAfter: B (x42) RecLength : 9 (x0009) IO Time : 2012-09-18 13:45:38.000000 IOType : 3 (x03) OrigNode : 255 (xff) TransInd : . (x03) FormatType : R (x52) SyskeyLen : 0 (x00) Incomplete : . (x00) AuditRBA : 44 AuditPos : 3337232 Continued : N (x00) RecCount : 1 (x01) 2012-09-18 13:45:38.000000 Delete Len 9 RBA 1091 Name: SENDER.HANDLEC ___________________________________________________________________ Reading D:\ogg\V34342-01\ex\ze000003, current RBA 1091, 0 records Report at 2012-09-18 13:45:48 (activity since 2012-09-18 13:45:48) From Table SENDER.HANDLEC to RECEIVER.HANDLEC: # inserts: 0 # updates: 0 # deletes: 0 # discards: 1 Last log location read: FILE: D:\ogg\V34342-01\ex\ze000003 SEQNO: 3 RBA: 1091 TIMESTAMP: 2012-09-18 13:45:38.000000 EOF: NO READERR: 0 2012-09-18 13:45:48 ERROR OGG-01668 PROCESS ABENDING. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE1.TRC closed. 2012-09-18 13:45:48 INFO OGG-01237 Trace file D:\ogg\V34342-01\REP_TRACE2.TRC closed. CACHE OBJECT MANAGER statistics CACHE MANAGER VM USAGE vm current = 0 vm anon queues = 0 vm anon in use = 0 vm file = 0 vm used max = 0 ==> CACHE BALANCED CACHE CONFIGURATION cache size = 2G cache force paging = 3.41G buffer min = 64K buffer highwater = 8M pageout eligible size = 8M ================================================================================ Restart the Replicat with skiptransaction to skip past the failed transaction: GGSCI (XIANGBLI-CN) 18> start rep2 skiptransaction Sending START request to MANAGER ... REPLICAT REP2 starting Case 2: an update for a row that is missing on the target (missing update): update a row on the source that does not exist on the target. SQL> update handlec set t1=5 where t1=4; 1 row updated. SQL> commit; Commit complete. Without HANDLECOLLISIONS the missing update again raises database error 1403 and OGG-01296: 2012-09-18 13:49:30 WARNING OGG-01004 Aborted grouped transaction on 'RECEIVER.HANDLEC', Database error 1403 (OCI Error ORA-01403: no data found, SQL <UPDATE "RECEIVER"."HANDLEC" SET "T1" = :a1 WHERE "T1" = :b0>). 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3. 2012-09-18 13:49:30 WARNING OGG-01003 Repositioning to rba 1218 in seqno 3.
Source Context : SourceModule : [er.errors] SourceID : [er/errors.cpp] SourceFunction : [take_rep_err_action] SourceLine : [623] ThreadBacktrace : [8] elements : [D:\ogg\V34342-01\gglog.dll(??1CContextItem@@UEAA@XZ+0x3272) [0x000000018010BDD2]] : [D:\ogg\V34342-01\gglog.dll(?_MSG_ERR_MAP_TO_TANDEM_FAILED@@YAPEAVCMessage@@PEAVCSourceContext@@AEBV?$CQualDBObjName@$00@ggapp@gglib@ggs@@1W4MessageDisposition@CMessageFactory@@@Z+0x138) [0x00000001800AD508]] : [D:\ogg\V34342-01\replicat.exe(ERCALLBACK+0x6e1e) [0x0000000140099D5E]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x4411) [0x00000001400C9BE1]] : [D:\ogg\V34342-01\replicat.exe(shutdownMonitoring+0x289cd) [0x00000001400EE19D]] : [D:\ogg\V34342-01\replicat.exe(CommonLexerNewSSD+0x9440) [0x00000001402AE980]] : [C:\windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007733652D]] : [C:\windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007746C521]] 2012-09-18 13:49:30 ERROR OGG-01296 Error mapping from SENDER.HANDLEC to RECEIVER.HANDLEC. Now add HANDLECOLLISIONS to the Replicat parameter file; with it in place the Replicat neither abends nor discards the record for this missing update: GGSCI (XIANGBLI-CN) 23> view params rep2 replicat rep2 userid receiver , password oracle trace ./rep_trace1.trc trace2 ./rep_trace2.trc ASSUMETARGETDEFS HANDLECOLLISIONS map sender.*, target receiver.*; GGSCI (XIANGBLI-CN) 18> start rep2 SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 5 Note that the missing update arrived as T1=5 with T2 NULL: because HANDLECOLLISIONS converted the update into an INSERT, only the columns present in the trail record were populated, so T2 came through as NULL. To capture the full row image for primary key updates, add FETCHOPTIONS FETCHPKUPDATECOLS to the EXTRACT parameter file and restart the Extract. Test again after making that change: SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 20 200 SQL> delete handlec where t1=5; 1 row deleted. SQL> commit; Commit complete. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 SQL> conn sender/oracle Connected. SQL> update handlec set t1=t1+1000 where t1=5; 1 row updated. SQL> commit; Commit complete. SQL> conn receiver/oracle Connected. SQL> SQL> SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 20 200 1005 2 With FETCHOPTIONS FETCHPKUPDATECOLS the full row image is fetched from the source and written to the trail, so a primary key update resolved by HANDLECOLLISIONS can rebuild the complete row on the target (here 1005, 2). Case 3: an insert for a row that already exists on the target; the Replicat converts it into an UPDATE of all columns: *** TARGET SQL> conn receiver/oracle Connected. SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 9 5 The target already holds the row t1=10, t2=9; now insert (10,100) on the source. >>SOURCE SQL> insert into handlec values(10,100); 1 row created. SQL> commit; >>TARGET SQL> select * from handlec; T1 T2 ---------- ---------- 1 2 10 100 5 The insert from the source collided with the existing target row, and under HANDLECOLLISIONS the Replicat applied it as an UPDATE of all columns. To summarize: HANDLECOLLISIONS is a GoldenGate Replicat parameter that resolves collisions instead of letting them abend the Replicat or fall through to REPERROR and the discard file. A missing delete is written to the discard file; a missing update is converted into an INSERT where possible (columns not present in the trail come through as NULL, and an update that cannot be converted goes to the discard file); an insert that collides with an existing target row is applied as an UPDATE of all columns. Adding FETCHOPTIONS FETCHPKUPDATECOLS to the Extract writes the full row image to the trail, so HANDLECOLLISIONS can reconstruct the whole row on the target after a primary key update.
You can also toggle the setting on a running Replicat with the GGSCI send command: GGSCI (XIANGBLI-CN) 29> send rep2, NOHANDLECOLLISIONS Sending NOHANDLECOLLISIONS request to REPLICAT REP2 ... REP2 NOHANDLECOLLISIONS set for 1 tables and 0 wildcard entries

    Read the article

  • Linq-to-XML query to select specific sub-element based on additional criteria

    - by BrianLy
    My current LINQ query and example XML are below. What I'd like to do is select the primary email address from the email-addresses element into the User.Email property. The type element under the email-address element is set to primary when this is true. There may be more than one element under the email-addresses but only one will be marked primary. What is the simplest approach to take here? Current Linq Query (User.Email is currently empty): var users = from response in xdoc.Descendants("response") where response.Element("id") != null select new User { Id = (string)response.Element("id"), Name = (string)response.Element("full-name"), Email = (string)response.Element("email-addresses"), JobTitle = (string)response.Element("job-title"), NetworkId = (string)response.Element("network-id"), Type = (string)response.Element("type") }; Example XML: <?xml version="1.0" encoding="UTF-8"?> <response> <response> <contact> <phone-numbers/> <im> <provider></provider> <username></username> </im> <email-addresses> <email-address> <type>primary</type> <address>[email protected]</address> </email-address> </email-addresses> </contact> <job-title>Account Manager</job-title> <type>user</type> <expertise nil="true"></expertise> <summary nil="true"></summary> <kids-names nil="true"></kids-names> <location nil="true"></location> <guid nil="true"></guid> <timezone>Eastern Time (US &amp; Canada)</timezone> <network-name>Domain</network-name> <full-name>Alice</full-name> <network-id>79629</network-id> <stats> <followers>2</followers> <updates>4</updates> <following>3</following> </stats> <mugshot-url> https://assets3.yammer.com/images/no_photo_small.gif</mugshot-url> <previous-companies/> <birth-date></birth-date> <name>alice</name> <web-url>https://www.yammer.com/domain.com/users/alice</web-url> <interests nil="true"></interests> <state>active</state> <external-urls/> <url>https://www.yammer.com/api/v1/users/1089943</url> <network-domains> <network-domain>domain.com</network-domain> </network-domains> <id>1089943</id> <schools/> <hire-date nil="true"></hire-date> <significant-other nil="true"></significant-other> </response> <response> <contact> <phone-numbers/> <im> <provider></provider> <username></username> </im> <email-addresses> <email-address> <type>primary</type> <address>[email protected]</address> </email-address> </email-addresses> </contact> <job-title>Office Manager</job-title> <type>user</type> <expertise nil="true"></expertise> <summary nil="true"></summary> <kids-names nil="true"></kids-names> <location nil="true"></location> <guid nil="true"></guid> <timezone>Eastern Time (US &amp; Canada)</timezone> <network-name>Domain</network-name> <full-name>Bill</full-name> <network-id>79629</network-id> <stats> <followers>3</followers> <updates>1</updates> <following>1</following> </stats> <mugshot-url> https://assets3.yammer.com/images/no_photo_small.gif</mugshot-url> <previous-companies/> <birth-date></birth-date> <name>bill</name> <web-url>https://www.yammer.com/domain.com/users/bill</web-url> <interests nil="true"></interests> <state>active</state> <external-urls/> <url>https://www.yammer.com/api/v1/users/1089920</url> <network-domains> <network-domain>domain.com</network-domain> </network-domains> <id>1089920</id> <schools/> <hire-date nil="true"></hire-date> <significant-other nil="true"></significant-other> </response> </response>

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor and when I often look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connection so I could do database operations. This worked fine, but there could be a potentially big issue if an important transaction was killed by this process. There is another way to achieve the same thing, where we can use the ALTER syntax to take the database into single user mode. Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. Some write code after the outermost BEGIN…END and some write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pan and Query Pan – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and the Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and the tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query optimization is a complex game and it has its own rules. From the example in the article we discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server views are created – I am very confident that at one or two places you will notice the scenario where, in a view, the ORDER BY clause is used with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge the presence of it as well. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how to create comma separated values from a table in the database? The answer is also very common if we use XML; see the sketch at the end of this post. Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in this article are not old. They are still valid and many of the functions still work as mentioned in the article. I believe this one article will put you on the track to use Azure!
Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that returns the size of each index on any particular table. We need a query that returns an additional column in the above listed query, and it should contain the size of the index. This article presents two of the best solutions to the puzzle. 2010 Well, this week in 2010 was the week of puzzles as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky but for sure bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will for sure help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it – you enjoy fixing it and start to appreciate the breaking changes. Well, this is exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is the question to you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases where we have to do such a thing. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve a directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
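As a quick illustration of the FOR XML PATH approach behind the CSV item above, here is a minimal sketch (the table and column names are hypothetical):
-- Build a comma separated list of values from a column.
-- FOR XML PATH('') concatenates the rows; STUFF() strips the leading comma.
SELECT STUFF(
           (SELECT ',' + c.City
            FROM dbo.Customers AS c   -- hypothetical table
            ORDER BY c.City
            FOR XML PATH('')),
           1, 1, '') AS CityCsv;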

    Read the article

  • SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

    - by pinaldave
    Let us jump right away into question and answer mode. What is Resource Governor? Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server. Why is Resource Governor required? If there are different workloads running on SQL Server and each of the workloads needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, Resource Governor becomes very important. What would be a real world example of the need for Resource Governor? Here are two simple scenarios where Resource Governor can be very useful. Scenario 1: A server which is running an OLTP workload and various resource-intensive reports on the same server. The ideal situation is to have two servers which are data-synced with each other, where one server runs OLTP transactions and the second server runs all the resource-intensive reports. However, not everybody has the luxury to set up this kind of environment. In the situation where reports and OLTP transactions are running on the same server, by limiting the resources given to the reporting workload it can be ensured that OLTP's critical transactions are not throttled. Scenario 2: There are two DBAs in one organization. DBA A runs critical queries for the business and DBA B is doing maintenance of the database. At any point in time DBA A's work should not be affected, but at the same time DBA B should be allowed to work as well. The ideal situation is that when DBA B starts working he gets some resources, but he can't get more than the defined resources. Does SQL Server have any default Resource Governor components? Yes, SQL Server has two resource governor components created by default. 1) Internal – This is used exclusively by the database engine and users have no control over it. 2) Default – This is used by all the workloads which are not assigned to any other group. What are the major components of the Resource Governor? Resource Pools, Workload Groups, and Classification. In simple words, here is the process of setting up Resource Governor: create a resource pool, create a workload group, create a classification function based on the criteria specified, and enable Resource Governor with the classification function. Is it possible to configure Resource Governor with T-SQL? Yes, here is the code for it with explanation in between. Step 0: Here we are assuming that there are separate login accounts for the reporting workload and the OLTP workload. /*----------------------------------------------- Step 0: (Optional and for Demo Purpose) Create Two User Logins 1) ReportUser, 2) PrimaryUser Use ReportUser login for Reports workload Use PrimaryUser login for OLTP workload -----------------------------------------------*/ Step 1: Creating Resource Pools. We are creating two resource pools: 1) Report Server and 2) Primary OLTP Server. We give only a few resources to the Report Server pool because, as described in scenario 1, the OLTP server is mission critical and the report server is not.
----------------------------------------------- -- Step 1: Create Resource Pool ----------------------------------------------- -- Creating Resource Pool for Report Server CREATE RESOURCE POOL ReportServerPool WITH ( MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30, MIN_MEMORY_PERCENT=0, MAX_MEMORY_PERCENT=30) GO -- Creating Resource Pool for OLTP Primary Server CREATE RESOURCE POOL PrimaryServerPool WITH ( MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100, MIN_MEMORY_PERCENT=50, MAX_MEMORY_PERCENT=100) GO Step 2: Creating Workload Groups. We are creating two workload groups, each mapping to one of the resource pools which we have just created. ----------------------------------------------- -- Step 2: Create Workload Group ----------------------------------------------- -- Creating Workload Group for Report Server CREATE WORKLOAD GROUP ReportServerGroup USING ReportServerPool ; GO -- Creating Workload Group for OLTP Primary Server CREATE WORKLOAD GROUP PrimaryServerGroup USING PrimaryServerPool ; GO Step 3: Creating a user defined function which routes the workload to the appropriate workload group. In this example we are checking SUSER_NAME() and making the decision of workload group selection. We can use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER() etc. ----------------------------------------------- -- Step 3: Create UDF to Route Workload Group ----------------------------------------------- CREATE FUNCTION dbo.UDFClassifier() RETURNS SYSNAME WITH SCHEMABINDING AS BEGIN DECLARE @WorkloadGroup AS SYSNAME IF(SUSER_NAME() = 'ReportUser') SET @WorkloadGroup = 'ReportServerGroup' ELSE IF (SUSER_NAME() = 'PrimaryUser') SET @WorkloadGroup = 'PrimaryServerGroup' ELSE SET @WorkloadGroup = 'default' RETURN @WorkloadGroup END GO Step 4: In this final step we enable the Resource Governor with the classifier function created in step 3. ----------------------------------------------- -- Step 4: Enable Resource Governor -- with UDFClassifier ----------------------------------------------- ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.UDFClassifier); GO ALTER RESOURCE GOVERNOR RECONFIGURE GO Step 5: If you are following this demo and want to clean up your example, you should run the following script. Running it will disable your Resource Governor as well as delete all the objects created so far. ----------------------------------------------- -- Step 5: Clean Up -- Run only if you want to clean up everything ----------------------------------------------- ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL) GO ALTER RESOURCE GOVERNOR DISABLE GO DROP FUNCTION dbo.UDFClassifier GO DROP WORKLOAD GROUP ReportServerGroup GO DROP WORKLOAD GROUP PrimaryServerGroup GO DROP RESOURCE POOL ReportServerPool GO DROP RESOURCE POOL PrimaryServerPool GO ALTER RESOURCE GOVERNOR RECONFIGURE GO I hope this introductory example sheds enough light on the subject of Resource Governor. In future posts we will take this same example and learn a few more details. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Resource Governor
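To verify that sessions are being routed as intended, a quick check along these lines may help (a sketch; it assumes the demo logins above have open sessions):
-- Show which workload group and resource pool each session was classified into.
SELECT s.session_id,
       s.login_name,
       g.name AS workload_group,
       p.name AS resource_pool
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g ON s.group_id = g.group_id
JOIN sys.dm_resource_governor_resource_pools AS p ON g.pool_id = p.pool_id
WHERE s.is_user_process = 1;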

    Read the article

  • How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet

    - by mv
    How to export ECC key and Cert from NSS DB and import into JKS keystore and Oracle Wallet In this blog I will write about how to extract a cert and key from NSS Db and import it to a JKS Keystore and then import that JKS Keystore into Oracle Wallet. 1. Set Java Home I pointed it to JRE 1.6.0_22 $ export JAVA_HOME=/usr/java/jre1.6.0_22/ 2. Create a self signed ECC cert in NSS DB I created NSS DB with self signed ECC certificate. If you already have NSS Db with ECC cert (and key) skip this step. $export NSS_DIR=/export/home/nss/ $$NSS_DIR/certutil -N -d . $$NSS_DIR/certutil -S -x -s "CN=test,C=US" -t "C,C,C" -n ecc-cert -k ec -q nistp192 -d . 3. Export ECC cert and key using pk12util Use NSS tool pk12util to export this cert and key into a p12 file      $$NSS_DIR/pk12util -o ecc-cert.p12 -n ecc-cert -d . -W password 4. Use keytool to create JKS keystore and import this p12 file 4.1 Import p12 file created above into a JKS keystore $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v But if an error as shown is encountered, keytool error: java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available java.security.UnrecoverableKeyException: Get Key failed: EC KeyFactory not available        at com.sun.net.ssl.internal.pkcs12.PKCS12KeyStore.engineGetKey(Unknown Source)         at java.security.KeyStoreSpi.engineGetEntry(Unknown Source)         at java.security.KeyStore.getEntry(Unknown Source)         at sun.security.tools.KeyTool.recoverEntry(Unknown Source)         at sun.security.tools.KeyTool.doImportKeyStoreSingle(Unknown Source)         at sun.security.tools.KeyTool.doImportKeyStore(Unknown Source)         at sun.security.tools.KeyTool.doCommands(Unknown Source)         at sun.security.tools.KeyTool.run(Unknown Source)         at sun.security.tools.KeyTool.main(Unknown Source) Caused by: java.security.NoSuchAlgorithmException: EC KeyFactory not available         at java.security.KeyFactory.<init>(Unknown Source)         at java.security.KeyFactory.getInstance(Unknown Source)         ... 9 more 4.2 Create a new PKCS11 provider If you didn't get an error as shown above skip this step. Since we already have NSS libraries built with ECC, we can create a new PKCS11 provider Create ${java.home}/jre/lib/security/nss.cfg as follows: name = NSS     nssLibraryDirectory = ${nsslibdir}    nssDbMode = noDb    attributes = compatibility where nsslibdir should contain NSS libs with ECC support. Add the following line to ${java.home}/jre/lib/security/java.security :      security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg Note that those who are using Oracle iPlanet Web Server or Oracle Traffic Director, NSS libs built with ECC are in <ws_install_dir>/lib or <otd_install_dir>/lib. 4.3. Now keytool should work Now you can try the same keytool command and see that it succeeds : $JAVA_HOME/bin/keytool -importkeystore -srckeystore ecc-cert.p12 -srcstoretype PKCS12 -deststoretype JKS -destkeystore ecc.jks -srcstorepass password -deststorepass password -srcalias ecc-cert -destalias ecc-cert -srckeypass password -destkeypass password -v [Storing ecc.jks] 5. Convert JKS keystore into an Oracle Wallet You can export this cert and key from JKS keystore and import it into an Oracle Wallet if you need using orapki tool as shown below. 
Make sure that the orapki you use supports ECC. Also, for ECC you MUST use the "-jsafe" option. $ orapki wallet create -pwd password -wallet . -jsafe $ orapki wallet jks_to_pkcs12 -wallet . -pwd password -keystore ecc.jks -jkspwd password -jsafe Then display the wallet: $ orapki wallet display -wallet . -pwd welcome1 -jsafe Oracle PKI Tool : Version 11.1.2.0.0 Copyright (c) 2004, 2012, Oracle and/or its affiliates. All rights reserved. Requested Certificates: User Certificates: Subject: CN=test,C=US Trusted Certificates: Subject: OU=Class 3 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US Subject: CN=GTE CyberTrust Global Root,OU=GTE CyberTrust Solutions\, Inc.,O=GTE Corporation,C=US Subject: OU=Class 2 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US Subject: OU=Class 1 Public Primary Certification Authority,O=VeriSign\, Inc.,C=US Subject: CN=test,C=US As you can see, our ECC cert is in the wallet. You can follow the same steps for RSA certs as well. 6. References http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=356 http://old.nabble.com/-PATCH-FOR-REVIEW-%3A-Support-PKCS11-cryptography-via-NSS-p25282932.html http://www.mozilla.org/projects/security/pki/nss/tools/pk12util.html

    Read the article

  • Event Logging in LINQ C# .NET

    The first thing you'll want to do before using this code is to create a table in your database called TableHistory: CREATE TABLE [dbo].[TableHistory] (     [TableHistoryID] [int] IDENTITY NOT NULL ,     [TableName] [varchar] (50) NOT NULL ,     [Key1] [varchar] (50) NOT NULL ,     [Key2] [varchar] (50) NULL ,     [Key3] [varchar] (50) NULL ,     [Key4] [varchar] (50) NULL ,     [Key5] [varchar] (50) NULL ,     [Key6] [varchar] (50) NULL ,     [ActionType] [varchar] (50) NULL ,     [Property] [varchar] (50) NULL ,     [OldValue] [varchar] (8000) NULL ,     [NewValue] [varchar] (8000) NULL ,     [ActionUserName] [varchar] (50) NOT NULL ,     [ActionDateTime] [datetime] NOT NULL ) Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date: public partial class DboDataContext{ [Function(Name = "GetDate", IsComposable = true)] public DateTime GetSystemDate() { MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo; return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue; }}private static Dictionary<Type, Delegate> _cachedIL = new Dictionary<Type, Delegate>();public static T CloneObjectWithIL<T>(T myObject){ Delegate myExec = null; if (!_cachedIL.TryGetValue(typeof(T), out myExec)) { // Create ILGenerator DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true); ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { }); ILGenerator generator = dymMethod.GetILGenerator(); LocalBuilder lbf = generator.DeclareLocal(typeof(T)); //lbf.SetLocalSymInfo("_temp"); generator.Emit(OpCodes.Newobj, cInfo); generator.Emit(OpCodes.Stloc_0); foreach (FieldInfo field in myObject.GetType().GetFields( System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic)) { // Load the new object on the eval stack... (currently 1 item on eval stack) generator.Emit(OpCodes.Ldloc_0); // Load initial object (parameter) (currently 2 items on eval stack) generator.Emit(OpCodes.Ldarg_0); // Replace value by field value (still currently 2 items on eval stack) generator.Emit(OpCodes.Ldfld, field); // Store the value of the top on the eval stack into // the object underneath that value on the value stack. // (0 items on eval stack) generator.Emit(OpCodes.Stfld, field); } // Load new constructed obj on eval stack -> 1 item on stack generator.Emit(OpCodes.Ldloc_0); // Return constructed object. 
--> 0 items on stack generator.Emit(OpCodes.Ret); myExec = dymMethod.CreateDelegate(typeof(Func<T, T>)); _cachedIL.Add(typeof(T), myExec); } return ((Func<T, T>)myExec)(myObject);} I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them. Explanation of the History Class: The History class records changes by creating a TableHistory record, inserting the values of the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), the old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change. Let's examine what happens when a call is made to the RecordLinqInsert method: public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj){ TableHistory hist = NewHistoryRecord(obj); hist.ActionType = "INSERT"; hist.ActionUserName = user.Name; hist.ActionDateTime = dbo.GetSystemDate(); dbo.TableHistories.InsertOnSubmit(hist);}private static TableHistory NewHistoryRecord(object obj){ TableHistory hist = new TableHistory(); Type type = obj.GetType(); PropertyInfo[] keys; if (historyRecordExceptions.ContainsKey(type)) { keys = historyRecordExceptions[type].ToArray(); } else { keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray(); } if (keys.Length > KeyMax) throw new HistoryException("object has more than " + KeyMax.ToString() + " keys."); for (int i = 1; i <= keys.Length; i++) { typeof(TableHistory) .GetProperty("Key" + i.ToString()) .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null); } hist.TableName = type.Name; return hist;}protected static bool AttrIsPrimaryKey(PropertyInfo pi){ var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true) where ((ColumnAttribute)attr).IsPrimaryKey select attr; if (attrs != null && attrs.Count() > 0) return true; else return false;}RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object. The code would then be called from your application whenever you insert, update, or delete entities.
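Once history rows are being recorded, a query along these lines can be used to review the audit trail for a table (a minimal sketch; the filter value is hypothetical):
-- Most recent changes recorded for a given table, newest first.
SELECT TOP (50) TableName, Key1, ActionType, Property,
       OldValue, NewValue, ActionUserName, ActionDateTime
FROM dbo.TableHistory
WHERE TableName = 'Customer'   -- hypothetical table name
ORDER BY ActionDateTime DESC;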

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers. ClusterJ and Cluster JPA ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including - Persistent classes - Relationships - Joins in queries - Lazy loading - Table and index creation from object model By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java: - Memcached for a persistent, high performance, write-scalable Key/Value store, - HTTP/REST via an Apache module - C++ via the NDB API for the lowest absolute latency. Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A. Q & A Q. Why would I use Connector/J vs. ClusterJ? A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability Q. Can I mix different APIs (ie ClusterJ, Connector/J) in our application for different query types? A. Yes. You can mix and match all of the API types, SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others. Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes? A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput. Q. 
Can you give details of how Cluster J optimizes sharding to enhance performance of distributed query processing? A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC. Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction? A. ClusterJ will send identical PK lookups to the same data node. Q. How is distributed query processing handled by MySQL Cluster ? A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query. Q. Can I use Foreign Keys with MySQL Cluster A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release Summary The NoSQL Java APIs are packaged with MySQL Cluster, available for download here so feel free to take them for a spin today! Key Resources MySQL Cluster on-line demo  MySQL ClusterJ and JPA On-demand webinar  MySQL ClusterJ and JPA documentation MySQL ClusterJ and JPA whitepaper and tutorial

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table. Use Warehouse CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5)) CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date) CREATE PROC Integration.MergePolicy as begin begin tran Merge fact.Policy as tgt Using Integration.MergePolicy as Src On (tgt.PolicyId = Src.PolicyId) When not matched by Target then Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate) values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate) When matched and src.Operation = 'U' then Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate When matched and src.Operation = 'D' then Delete ; delete from Integration.MergePolicy commit end Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. 
(I was now beginning to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I've reproduced the above from memory; the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
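A minimal sketch of the fix described above, using the table names from this post (the index name and the choice of PolicyId as the clustering key are assumptions for illustration, not part of the original article):

-- Empty the work table with TRUNCATE rather than DELETE, which releases the heap's pages immediately.
TRUNCATE TABLE Integration.MergePolicy;

-- Add a clustered index so the work table is no longer stored as a heap.
-- PolicyId is assumed here to be an acceptable clustering key for a staging table.
CREATE CLUSTERED INDEX CIX_MergePolicy ON Integration.MergePolicy (PolicyId);

-- Confirm the I/O behaviour: counting the now-empty table should touch only a page or so,
-- rather than the hundreds of thousands of logical reads seen against the heap.
SET STATISTICS IO ON;
SELECT COUNT(*) FROM Integration.MergePolicy;
SET STATISTICS IO OFF;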

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data – HDFS.

What is HDFS?

HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common. For that reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to reduced risk of failure. HDFS creates smaller pieces of the big data and distributes them on different nodes. It also copies each smaller piece multiple times to different nodes. Hence, when any node holding the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system.

Architecture of HDFS

HDFS has a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server and it manages the file system as well as regulates access to various files. In addition to the NameNode there are multiple DataNodes. There is always one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and to regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file system. The DataNode is also responsible for the creation, deletion or replication of data, based on instructions from the NameNode. In reality, NameNode and DataNode are software designed to run on commodity machines, built in the Java language.

Visual Representation of HDFS Architecture

Let us understand how HDFS works with the help of the diagram. The Client App (HDFS client) connects to the NameNode as well as to the DataNodes. The Client App's access to a DataNode is regulated by the NameNode: the NameNode decides where the data should go and then allows the Client App to connect to that DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data chunks are A, B, C and D). The Client App will later on write the data blocks directly to the DataNodes. The Client App does not have to write directly to all the nodes; it just has to write to any one of them, and the NameNode will decide on which other DataNodes it will have to replicate the data. In our example the Client App writes directly to DataNode 1 and DataNode 3. However, the data chunks are automatically replicated to other nodes. All the information about which data block is placed on which DataNode is written back to the NameNode.

High Availability During Disaster

Now, as multiple DataNodes have the same data blocks, in the case of any DataNode facing a disaster the entire process will continue, as another DataNode will assume the role of serving the specific data blocks that were on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack.
Though that replicated node is not operational in the architecture, it has all the necessary data to perform the task of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple premise that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of running commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to work around the limitation of non-functioning hardware.

Tomorrow

In tomorrow's blog post we will discuss the importance of the relational database in Big Data.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • In 10.10, USB 3.0 PCI Express card recognized by lspci but not lsusb or dmesg. How to fix?

    - by Paul
    Asus N PC, runs 10.10 x86_64 The Asus N comes with 4 usb 2.0 ports, each labelled 2.0 on the case. Attempting to add two usb 3.0 ports to be provided by a generic usb 3.0 pci express card installed in the pci expres slot. The new card says usb 3.0 and has the blue ports. The card is installed into the laptop unpowered, then the laptop is powered on and boots normally. Nothing happens when a USB 3.0 flash drive is inserted into the usb 3.0 port. uname -a Linux drpaulbrewer-N90SV 2.6.35.8 #1 SMP Fri Jan 14 15:54:11 EST 2011 x86_64 GNU/Linux lspci -v 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64 Kernel modules: sis-agp 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fa000000-fdefffff Prefetchable memory behind bridge: 00000000d0000000-00000000dfffffff Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [f4] Power Management version 2 Capabilities: [70] Subsystem: Silicon Integrated Systems [SiS] PCI-to-PCI bridge Kernel driver in use: pcieport 00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01) Flags: bus master, medium devsel, latency 0 00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01) (prog-if 80 [Master]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 128 I/O ports at 01f0 [size=8] I/O ports at 03f4 [size=1] I/O ports at 0170 [size=8] I/O ports at 0374 [size=1] I/O ports at ffe0 [size=16] Capabilities: [58] Power Management version 2 Kernel driver in use: pata_sis 00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 20 Memory at f9fff000 (32-bit, non-prefetchable) [size=4K] Kernel driver in use: ohci_hcd 00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 21 Memory at f9ffe000 (32-bit, non-prefetchable) [size=4K] Kernel driver in use: ohci_hcd 00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller (prog-if 20 [EHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 22 Memory at f9ffd000 (32-bit, non-prefetchable) [size=4K] Capabilities: [50] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 Ethernet controller: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter (rev 02) Subsystem: ASUSTeK Computer Inc. Device 11f5 Flags: bus master, medium devsel, latency 0, IRQ 19 Memory at f9ffcc00 (32-bit, non-prefetchable) [size=128] I/O ports at cc00 [size=128] Capabilities: [40] Power Management version 2 Kernel driver in use: sis190 Kernel modules: sis190 00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: ASUSTeK Computer Inc. 
Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 17 I/O ports at c800 [size=8] I/O ports at c400 [size=4] I/O ports at c000 [size=8] I/O ports at bc00 [size=4] I/O ports at b800 [size=16] I/O ports at b400 [size=128] Capabilities: [58] Power Management version 2 Kernel driver in use: sata_sis Kernel modules: sata_sis 00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 Memory behind bridge: fdf00000-fdffffff Capabilities: [b0] Subsystem: Silicon Integrated Systems [SiS] Device 0004 Capabilities: [c0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [f4] Power Management version 2 Kernel driver in use: pcieport 00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=06, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: fe000000-febfffff Prefetchable memory behind bridge: 00000000f6000000-00000000f8ffffff Capabilities: [b0] Subsystem: Silicon Integrated Systems [SiS] Device 0004 Capabilities: [c0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [f4] Power Management version 2 Kernel driver in use: pcieport 00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller Subsystem: ASUSTeK Computer Inc. Device 17b3 Flags: bus master, medium devsel, latency 0, IRQ 18 Memory at f9ff4000 (32-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce GT 130M] (rev a1) (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 2021 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fc000000 (32-bit, non-prefetchable) [size=16M] Memory at d0000000 (64-bit, prefetchable) [size=256M] Memory at fa000000 (64-bit, non-prefetchable) [size=32M] I/O ports at dc00 [size=128] [virtual] Expansion ROM at fde80000 [disabled] [size=512K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Capabilities: [b4] Vendor Specific Information: Len=14 <?> Kernel driver in use: nvidia Kernel modules: nvidia-current, nouveau, nvidiafb 02:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: Device 1a3b:1067 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fdff0000 (64-bit, non-prefetchable) [size=64K] Capabilities: [40] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [60] Express Legacy Endpoint, MSI 00 Capabilities: [90] MSI-X: Enable- Count=1 Masked- Kernel driver in use: ath9k Kernel modules: ath9k 03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at febfe000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] MSI-X: Enable- Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 lsusb Bus 003 Device 002: ID 0b05:1751 ASUSTek Computer, Inc. 
BT-253 Bluetooth Adapter
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 004: ID 0bda:0158 Realtek Semiconductor Corp. USB 2.0 multicard reader
Bus 001 Device 002: ID 04f2:b071 Chicony Electronics Co., Ltd 2.0M UVC Webcam / CNF7129
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

dmesg: posting the full dmesg output would exceed the Stack Exchange 30K posting limit... but nothing in it mentions USB 3.0

    Read the article

  • ADF Reusable Artefacts

    - by Arda Eralp
    Primary reusable ADF Business Components: Entity Objects (EOs), View Objects (VOs), Application Modules (AMs), Framework Extension Classes
Primary reusable ADF Controller: Bounded Task Flows (BTFs), Task Flow Templates
Primary reusable ADF Faces: Page Templates, Skins, Declarative Components, Utility Classes

Certain components will often be used more than once. Whether the reuse happens within the same application, or across different applications, it is often advantageous to package these reusable components into a library that can be shared between different developers, across different teams, and even across departments within an organization. In the world of Java object-oriented programming, reusing classes and objects is just standard procedure. With the introduction of the model-view-controller (MVC) architecture, applications can be further modularized into separate model, view, and controller layers. By separating the data (model and business services layers) from the presentation (view and controller layers), you ensure that changes to any one layer do not affect the integrity of the other layers. You can change business logic without having to change the UI, or redesign the web pages or front end without having to recode domain logic.

Oracle ADF and JDeveloper support the MVC design pattern. When you create an application in JDeveloper, you can choose from many application templates that automatically set up data model and user interface projects. Because the different MVC layers are decoupled from each other, development can proceed on different projects in parallel and with a certain amount of independence. ADF Library further extends this modularity of design by providing a convenient and practical way to create, deploy, and reuse high-level components. When you first design your application, you design it with component reusability in mind. If you created components that can be reused, you can package them into JAR files and add them to a reusable component repository. If you need a component, you may look in the repository for those components and then add them into your project or application. For example, you can create an application module for a domain and package it to be used as the data model project in several different applications. Or, if your application will be consuming components, you may be able to load a page template component from a repository of ADF Library JARs to create common look and feel pages. Then you can put your page flow together by stringing together several task flow components pulled from the library. An ADF Library JAR contains ADF components and does not, and cannot, contain other JARs. It should not be confused with the JDeveloper library, Java EE library, or Oracle WebLogic shared library.

Reusable components and what each one packages:

Data Control: Any data control can be packaged into an ADF Library JAR. Some of the data controls supported by Oracle ADF include application modules, Enterprise JavaBeans, web services, URL services, JavaBeans, and placeholder data controls.

Application Module: When you are using ADF Business Components and you generate an application module, an associated application module data control is also generated. When you package an application module data control, you also package up the ADF Business Components associated with that application module. The relevant entity objects, view objects, and associations will be a part of the ADF Library JAR and available for reuse.

Business Components: Business components are the entity objects, view objects, and associations used in the ADF Business Components data model project. You can package business components by themselves or together with an application module.

Task Flows & Task Flow Templates: Task flows can be packaged into an ADF Library JAR for reuse. If you drop a bounded task flow that uses page fragments, JDeveloper adds a region to the page and binds it to the dropped task flow. ADF bounded task flows built using pages can be dropped onto pages. The drop will create a link to call the bounded task flow. A task flow call activity and control flow will automatically be added to the task flow, with the view activity referencing the page. If there is more than one existing task flow with a view activity referencing the page, it will prompt you to select the one to automatically add a task flow call activity and control flow. If an ADF task flow template was created in the same project as the task flow, the ADF task flow template will be included in the ADF Library JAR and will be reusable.

Page Templates: You can package a page template and its artifacts into an ADF Library JAR. If the template uses image files and they are included in a directory within your project, these files will also be available for the template during reuse.

Declarative Components: You can create declarative components and package them for reuse. The tag libraries associated with the component will be included and loaded into the consuming project.

You can also package up projects that have several different reusable components if you expect that more than one component will be consumed. For example, you can create a project that has both an application module and a bounded task flow. When this ADF Library JAR file is consumed, the application will have both the application module and the task flow available for use. You can package multiple components into one JAR file, or you can package a single component into a JAR file. Oracle ADF and JDeveloper give you the option and flexibility to create reusable components that best suit you and your organization. You create a reusable component by using JDeveloper to package and deploy the project that contains the components into an ADF Library JAR file. You use the components by adding that JAR to the consuming project. At design time, the JAR is added to the consuming project's class path and so is available for reuse. At runtime, the reused component runs from the JAR file by reference.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here.

There were a few challenges that arose during design and implementation:
- Naive algorithms had an unreasonable upper bound of computational cost.
- There was significant complexity associated with configurations where the member count varied significantly between physical machines.
- Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).

A custom PAS may need to solve several problems simultaneously, such as:
- Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
- Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
- Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
- Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
- Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
- Achieving the above goals while ensuring that partition movement is minimized.

These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members each (respectively). This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process:
- Putting your database into a source control system, and,
- Running a continuous integration process, each time changes are checked in.

However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:
- Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?),
- Deploying the database changes with a managed, automated approach,
- Monitoring what you've just put live, to make sure you haven't broken anything.

This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

Background on Continuous Delivery

For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:
- Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
- Versioning Databases – Branching and Merging, by Scott Allen

In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data into my solitary table – this can be done a number of ways, but I'll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row. Then click on "Set Primary Key": NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option: In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close": We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."): the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

Creating and Running the Test

As I mention, the test I'm going to use here is a very simple one - are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS select "SQL Test" from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

-- Basic email check test
ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @Output VarChar(max)
    SET @Output = ''

    SELECT @Output = @Output + Email + Char(13) + Char(10)
    FROM dbo.Email
    WHERE Email NOT LIKE '%_@__%.__%'

    IF @Output > ''
    BEGIN
        SET @Output = Char(13) + Char(10) + @Output
        EXEC tSQLt.Fail @Output
    END
END;

Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

Checking that the Tests are Running as Integration Tests

After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

<!-- tSQLt tests -->
<!-- Optional -->
<!-- To run tSQLt tests in source control for the database, enter true. -->
<enableTsqlt>true</enableTsqlt>

Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:
21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19
21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
21-Jun-2013 11:35:19 (sqlCI target) ->
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: It worked!

Summary

This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes, how can we now deploy these to target sites?
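As a side note, the same tSQLt tests can also be run straight from a query window rather than through the SQL Test UI, which can be handy when scripting or debugging the CI step by hand. A minimal sketch, assuming the MyChecks test class created above (tSQLt.Run and tSQLt.RunAll are standard tSQLt procedures):

-- Run every test in the MyChecks test class (the schema created earlier).
EXEC tSQLt.Run 'MyChecks';

-- Or run just the single email-format test by its full name.
EXEC tSQLt.Run 'MyChecks.[test Check Email Addresses]';

-- Or run all tSQLt tests registered in the database.
EXEC tSQLt.RunAll;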

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when:
- The target table uses a unique filtered index; and
- No key column of the filtered index is updated; and
- A column from the filtering condition is updated; and
- Transient key violations are possible

Example Tables

Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target. The target table contains three columns, an integer primary key, a single character alternate key, and a status code column. A filtered unique index exists on the alternate key, but is only enforced where the status code is 'a':

CREATE TABLE #Target
(
    pk integer NOT NULL,
    ak character(1) NOT NULL,
    status_code character(1) NOT NULL,
    PRIMARY KEY (pk)
);

CREATE UNIQUE INDEX uq1
ON #Target (ak)
INCLUDE (status_code)
WHERE status_code = 'a';

The changes table contains just an integer primary key (to identify the target row to change) and the new status code:

CREATE TABLE #Changes
(
    pk integer NOT NULL,
    status_code character(1) NOT NULL,
    PRIMARY KEY (pk)
);

Sample Data

The sample data for the example is:

INSERT #Target (pk, ak, status_code)
VALUES
    (1, 'A', 'a'),
    (2, 'B', 'a'),
    (3, 'C', 'a'),
    (4, 'A', 'd');

INSERT #Changes (pk, status_code)
VALUES
    (1, 'd'),
    (4, 'a');

         Target                     Changes
+-----------------------+    +------------------+
¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
¦----+----+-------------¦    ¦----+-------------¦
¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦
¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦
¦  3 ¦ C  ¦ a           ¦    +------------------+
¦  4 ¦ A  ¦ d           ¦
+-----------------------+

The target table's alternate key (ak) column is unique, for rows where status_code = 'a'. Applying the changes to the target will change row 1 from status 'a' to status 'd', and row 4 from status 'd' to status 'a'. The result of applying all the changes will still satisfy the filtered unique index, because the 'A' in row 1 will be deleted from the index and the 'A' in row 4 will be added.

Merge Test One

Let's now execute a MERGE statement to apply the changes:

MERGE #Target AS t
USING #Changes AS c
    ON c.pk = t.pk
WHEN MATCHED
    AND c.status_code <> t.status_code
    THEN UPDATE SET status_code = c.status_code;

The MERGE changes the two target rows as expected. The updated target table now contains:

+-----------------------+
¦ pk ¦ ak ¦ status_code ¦
¦----+----+-------------¦
¦  1 ¦ A  ¦ d           ¦ <—changed from 'a'
¦  2 ¦ B  ¦ a           ¦
¦  3 ¦ C  ¦ a           ¦
¦  4 ¦ A  ¦ a           ¦ <—changed from 'd'
+-----------------------+

Merge Test Two

Now let's repopulate the changes table to reverse the updates we just performed:

TRUNCATE TABLE #Changes;

INSERT #Changes (pk, status_code)
VALUES
    (1, 'a'),
    (4, 'd');

This will change row 1 back to status 'a' and row 4 back to status 'd'.
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database. Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

Connect bug filed here

Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition

© 2012 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi
Email: [email protected]
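To make the workarounds listed above concrete, here is a sketch of the two options applied to the example tables in this post (the statements below are illustrative; the index and table names come from the earlier scripts, and trace flag 8790 is the undocumented flag already described above):

-- Option 1: add the filtering column to the index key (INCLUDE is not sufficient),
-- so the optimizer can protect the index with a split/sort/collapse sequence.
DROP INDEX uq1 ON #Target;
CREATE UNIQUE INDEX uq1 ON #Target (ak, status_code) WHERE status_code = 'a';

-- Option 2: keep the original index, but force a wide (per-index) update plan.
MERGE #Target AS t
USING #Changes AS c
    ON c.pk = t.pk
WHEN MATCHED
    AND c.status_code <> t.status_code
    THEN UPDATE SET status_code = c.status_code
OPTION (QUERYTRACEON 8790);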

    Read the article

  • Html.DropDownListFor() doesn't always render selected value

    - by Andrey
    I have a view model: public class LanguagesViewModel { public IEnumerable<LanguageItem> Languages { get; set; } public IEnumerable<SelectListItem> LanguageItems { get; set; } public IEnumerable<SelectListItem> LanguageLevelItems { get; set; } } public class LanguageItem { public int LanguageId { get; set; } public int SpeakingSkillId { get; set; } public int WritingSkillId { get; set; } public int UnderstandingSkillId { get; set; } public LanguagesViewModel Lvm { get; internal set; } } It's rendered with the following code: <tbody> <% foreach( var language in Model.Languages ) { Html.RenderPartial("LanguageItem", language); } %> </tbody> LanguageItem.ascx: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<HrmCV.Web.ViewModels.LanguageItem>" %> <tr id="lngRow"> <td class="a_left"> <%: Html.DropDownListFor(m => m.LanguageId, Model.Lvm.LanguageItems, null, new { @class = "span-3" }) %> </td> <td> <%: Html.DropDownListFor(m => m.SpeakingSkillId, Model.Lvm.LanguageLevelItems, null, new { @class = "span-3" }) %> </td> <td> <%: Html.DropDownListFor(m => m.WritingSkillId, Model.Lvm.LanguageLevelItems, null, new { @class = "span-3" })%> </td> <td> <%: Html.DropDownListFor(m => m.UnderstandingSkillId, Model.Lvm.LanguageLevelItems, null, new { @class = "span-3" })%> </td> <td> <div class="btn-primary"> <a class="btn-primary-l" onclick="DeleteLanguage(this.id)" id="btnDel" href="javascript:void(0)"><%:ViewResources.SharedStrings.BtnDelete%></a> <span class="btn-primary-r"></span> </div> </td> </tr> The problem is that upon POST, LanguageId dropdown doesn't render its previously selected value. While all other dropdowns - do render. I cannot see any differences in the implementation between them. What is the reason for such behavior?

    Read the article

  • jqGrid jQuery UI button wrapping in toolbar

    - by gurun8
    I have a jQuery UI Button that I'm placing in a jqGrid toolbar, but the contents of the button are wrapping. I've tried to prevent the wrapping by using the CSS white-space property, to no avail. Here's a snapshot of what's happening: Here are two code snippets of my attempt to fix the problem:

$("#t_imageList").css("white-space", "nowrap").html('<button>Add</button>');
$("#t_imageList button").button({ icons: {primary: 'ui-icon-plus'}, text: true });

and/or

$("#t_imageList button").css("white-space", "nowrap").button({ icons: {primary: 'ui-icon-plus'}, text: true });

Has someone experienced the same issue? If so, what was your solution?

    Read the article

  • WCF Data Service Exception

    - by Ravi
    Hi, I am working on C#.NET with ADO.NET / WCF Data Services. I am trying to update one record in a relational table, but when I reach context.SetLink() I get an exception ("The context is not currently tracking the entity"). I don't know how to solve this problem. My code is specified below:

LogNote dbLogNote = logNote;
LogSubSession dbLogSubSession = (from p in context.LogSubSession
                                 where p.UID == logNote.SubSessionId
                                 select p).First<LogSubSession>() as LogSubSession;
context.AddToLogNote(dbLogNote);
dbLogNote.LogSubSession = dbLogSubSession;
context.SetLink(dbLogNote, "LogSubSession", dbLogSubSession);
context.SaveChanges();

Here LogSubSession is the primary (parent) table and LogNote is the foreign (child) table. I am updating data in the foreign table based on the primary key table. Thanks

    Read the article
