Search Results

Search found 4849 results on 194 pages for 'schema migration'.

Page 60 of 194

  • High Linux loads on low CPU/memory usage

    - by user13323
    Hi. I have quite strange situation, where my CentOS 5.5 box loads are high, but the CPU and memory used are pretty low: top - 20:41:38 up 42 days, 6:14, 2 users, load average: 19.79, 21.25, 18.87 Tasks: 254 total, 1 running, 253 sleeping, 0 stopped, 0 zombie Cpu(s): 3.8%us, 0.3%sy, 0.1%ni, 95.0%id, 0.6%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 4035284k total, 4008084k used, 27200k free, 38748k buffers Swap: 4208928k total, 242576k used, 3966352k free, 1465008k cached free -mt total used free shared buffers cached Mem: 3940 3910 29 0 37 1427 -/+ buffers/cache: 2445 1495 Swap: 4110 236 3873 Total: 8050 4147 3903 Iostat also shows good results: avg-cpu: %user %nice %system %iowait %steal %idle 3.83 0.13 0.41 0.58 0.00 95.05 Here is the ps aux output: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 10348 80 ? Ss 2010 2:11 init [3] root 2 0.0 0.0 0 0 ? S< 2010 0:00 [migration/0] root 3 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/0] root 4 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/0] root 5 0.0 0.0 0 0 ? S< 2010 0:02 [migration/1] root 6 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/1] root 7 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/1] root 8 0.0 0.0 0 0 ? S< 2010 0:02 [migration/2] root 9 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/2] root 10 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/2] root 11 0.0 0.0 0 0 ? S< 2010 0:02 [migration/3] root 12 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/3] root 13 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/3] root 14 0.0 0.0 0 0 ? S< 2010 0:03 [migration/4] root 15 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/4] root 16 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/4] root 17 0.0 0.0 0 0 ? S< 2010 0:01 [migration/5] root 18 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/5] root 19 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/5] root 20 0.0 0.0 0 0 ? S< 2010 0:11 [migration/6] root 21 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/6] root 22 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/6] root 23 0.0 0.0 0 0 ? S< 2010 0:01 [migration/7] root 24 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/7] root 25 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/7] root 26 0.0 0.0 0 0 ? S< 2010 0:00 [migration/8] root 27 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/8] root 28 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/8] root 29 0.0 0.0 0 0 ? S< 2010 0:00 [migration/9] root 30 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/9] root 31 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/9] root 32 0.0 0.0 0 0 ? S< 2010 0:08 [migration/10] root 33 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/10] root 34 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/10] root 35 0.0 0.0 0 0 ? S< 2010 0:05 [migration/11] root 36 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/11] root 37 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/11] root 38 0.0 0.0 0 0 ? S< 2010 0:02 [migration/12] root 39 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/12] root 40 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/12] root 41 0.0 0.0 0 0 ? S< 2010 0:14 [migration/13] root 42 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/13] root 43 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/13] root 44 0.0 0.0 0 0 ? S< 2010 0:04 [migration/14] root 45 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/14] root 46 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/14] root 47 0.0 0.0 0 0 ? S< 2010 0:01 [migration/15] root 48 0.0 0.0 0 0 ? SN 2010 0:00 [ksoftirqd/15] root 49 0.0 0.0 0 0 ? S< 2010 0:00 [watchdog/15] root 50 0.0 0.0 0 0 ? S< 2010 0:00 [events/0] root 51 0.0 0.0 0 0 ? S< 2010 0:00 [events/1] root 52 0.0 0.0 0 0 ? S< 2010 0:00 [events/2] root 53 0.0 0.0 0 0 ? S< 2010 0:00 [events/3] root 54 0.0 0.0 0 0 ? S< 2010 0:00 [events/4] root 55 0.0 0.0 0 0 ? S< 2010 0:00 [events/5] root 56 0.0 0.0 0 0 ? S< 2010 0:00 [events/6] root 57 0.0 0.0 0 0 ? 
S< 2010 0:00 [events/7] root 58 0.0 0.0 0 0 ? S< 2010 0:00 [events/8] root 59 0.0 0.0 0 0 ? S< 2010 0:00 [events/9] root 60 0.0 0.0 0 0 ? S< 2010 0:00 [events/10] root 61 0.0 0.0 0 0 ? S< 2010 0:00 [events/11] root 62 0.0 0.0 0 0 ? S< 2010 0:00 [events/12] root 63 0.0 0.0 0 0 ? S< 2010 0:00 [events/13] root 64 0.0 0.0 0 0 ? S< 2010 0:00 [events/14] root 65 0.0 0.0 0 0 ? S< 2010 0:00 [events/15] root 66 0.0 0.0 0 0 ? S< 2010 0:00 [khelper] root 107 0.0 0.0 0 0 ? S< 2010 0:00 [kthread] root 126 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/0] root 127 0.0 0.0 0 0 ? S< 2010 0:03 [kblockd/1] root 128 0.0 0.0 0 0 ? S< 2010 0:01 [kblockd/2] root 129 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/3] root 130 0.0 0.0 0 0 ? S< 2010 0:05 [kblockd/4] root 131 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/5] root 132 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/6] root 133 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/7] root 134 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/8] root 135 0.0 0.0 0 0 ? S< 2010 0:02 [kblockd/9] root 136 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/10] root 137 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/11] root 138 0.0 0.0 0 0 ? S< 2010 0:04 [kblockd/12] root 139 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/13] root 140 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/14] root 141 0.0 0.0 0 0 ? S< 2010 0:00 [kblockd/15] root 142 0.0 0.0 0 0 ? S< 2010 0:00 [kacpid] root 281 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/0] root 282 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/1] root 283 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/2] root 284 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/3] root 285 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/4] root 286 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/5] root 287 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/6] root 288 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/7] root 289 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/8] root 290 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/9] root 291 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/10] root 292 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/11] root 293 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/12] root 294 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/13] root 295 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/14] root 296 0.0 0.0 0 0 ? S< 2010 0:00 [cqueue/15] root 299 0.0 0.0 0 0 ? S< 2010 0:00 [khubd] root 301 0.0 0.0 0 0 ? S< 2010 0:00 [kseriod] root 490 0.0 0.0 0 0 ? S 2010 0:00 [khungtaskd] root 493 0.1 0.0 0 0 ? S< 2010 94:48 [kswapd1] root 494 0.0 0.0 0 0 ? S< 2010 0:00 [aio/0] root 495 0.0 0.0 0 0 ? S< 2010 0:00 [aio/1] root 496 0.0 0.0 0 0 ? S< 2010 0:00 [aio/2] root 497 0.0 0.0 0 0 ? S< 2010 0:00 [aio/3] root 498 0.0 0.0 0 0 ? S< 2010 0:00 [aio/4] root 499 0.0 0.0 0 0 ? S< 2010 0:00 [aio/5] root 500 0.0 0.0 0 0 ? S< 2010 0:00 [aio/6] root 501 0.0 0.0 0 0 ? S< 2010 0:00 [aio/7] root 502 0.0 0.0 0 0 ? S< 2010 0:00 [aio/8] root 503 0.0 0.0 0 0 ? S< 2010 0:00 [aio/9] root 504 0.0 0.0 0 0 ? S< 2010 0:00 [aio/10] root 505 0.0 0.0 0 0 ? S< 2010 0:00 [aio/11] root 506 0.0 0.0 0 0 ? S< 2010 0:00 [aio/12] root 507 0.0 0.0 0 0 ? S< 2010 0:00 [aio/13] root 508 0.0 0.0 0 0 ? S< 2010 0:00 [aio/14] root 509 0.0 0.0 0 0 ? S< 2010 0:00 [aio/15] root 665 0.0 0.0 0 0 ? S< 2010 0:00 [kpsmoused] root 808 0.0 0.0 0 0 ? S< 2010 0:00 [ata/0] root 809 0.0 0.0 0 0 ? S< 2010 0:00 [ata/1] root 810 0.0 0.0 0 0 ? S< 2010 0:00 [ata/2] root 811 0.0 0.0 0 0 ? S< 2010 0:00 [ata/3] root 812 0.0 0.0 0 0 ? S< 2010 0:00 [ata/4] root 813 0.0 0.0 0 0 ? S< 2010 0:00 [ata/5] root 814 0.0 0.0 0 0 ? S< 2010 0:00 [ata/6] root 815 0.0 0.0 0 0 ? S< 2010 0:00 [ata/7] root 816 0.0 0.0 0 0 ? S< 2010 0:00 [ata/8] root 817 0.0 0.0 0 0 ? S< 2010 0:00 [ata/9] root 818 0.0 0.0 0 0 ? S< 2010 0:00 [ata/10] root 819 0.0 0.0 0 0 ? 
S< 2010 0:00 [ata/11] root 820 0.0 0.0 0 0 ? S< 2010 0:00 [ata/12] root 821 0.0 0.0 0 0 ? S< 2010 0:00 [ata/13] root 822 0.0 0.0 0 0 ? S< 2010 0:00 [ata/14] root 823 0.0 0.0 0 0 ? S< 2010 0:00 [ata/15] root 824 0.0 0.0 0 0 ? S< 2010 0:00 [ata_aux] root 842 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_0] root 843 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_1] root 844 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_2] root 845 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_3] root 846 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_4] root 847 0.0 0.0 0 0 ? S< 2010 0:00 [scsi_eh_5] root 882 0.0 0.0 0 0 ? S< 2010 0:00 [kstriped] root 951 0.0 0.0 0 0 ? S< 2010 4:24 [kjournald] root 976 0.0 0.0 0 0 ? S< 2010 0:00 [kauditd] postfix 990 0.0 0.0 54208 2284 ? S 21:19 0:00 pickup -l -t fifo -u root 1013 0.0 0.0 12676 8 ? S<s 2010 0:00 /sbin/udevd -d root 1326 0.0 0.0 90900 3400 ? Ss 14:53 0:00 sshd: root@notty root 1410 0.0 0.0 53972 2108 ? Ss 14:53 0:00 /usr/libexec/openssh/sftp-server root 2690 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/0] root 2691 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/1] root 2692 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/2] root 2693 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/3] root 2694 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/4] root 2695 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/5] root 2696 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/6] root 2697 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/7] root 2698 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/8] root 2699 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/9] root 2700 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/10] root 2701 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/11] root 2702 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/12] root 2703 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/13] root 2704 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/14] root 2705 0.0 0.0 0 0 ? S< 2010 0:00 [kmpathd/15] root 2706 0.0 0.0 0 0 ? S< 2010 0:00 [kmpath_handlerd] root 2755 0.0 0.0 0 0 ? S< 2010 4:35 [kjournald] root 2757 0.0 0.0 0 0 ? S< 2010 3:38 [kjournald] root 2759 0.0 0.0 0 0 ? S< 2010 4:10 [kjournald] root 2761 0.0 0.0 0 0 ? S< 2010 4:26 [kjournald] root 2763 0.0 0.0 0 0 ? S< 2010 3:15 [kjournald] root 2765 0.0 0.0 0 0 ? S< 2010 3:04 [kjournald] root 2767 0.0 0.0 0 0 ? S< 2010 3:02 [kjournald] root 2769 0.0 0.0 0 0 ? S< 2010 2:58 [kjournald] root 2771 0.0 0.0 0 0 ? S< 2010 0:00 [kjournald] root 3340 0.0 0.0 5908 356 ? Ss 2010 2:48 syslogd -m 0 root 3343 0.0 0.0 3804 212 ? Ss 2010 0:03 klogd -x root 3430 0.0 0.0 0 0 ? S< 2010 0:50 [kondemand/0] root 3431 0.0 0.0 0 0 ? S< 2010 0:54 [kondemand/1] root 3432 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/2] root 3433 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/3] root 3434 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/4] root 3435 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/5] root 3436 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/6] root 3437 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/7] root 3438 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/8] root 3439 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/9] root 3440 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/10] root 3441 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/11] root 3442 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/12] root 3443 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/13] root 3444 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/14] root 3445 0.0 0.0 0 0 ? S< 2010 0:00 [kondemand/15] root 3461 0.0 0.0 10760 284 ? Ss 2010 3:44 irqbalance rpc 3481 0.0 0.0 8052 4 ? Ss 2010 0:00 portmap root 3526 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/0] root 3527 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/1] root 3528 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/2] root 3529 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/3] root 3530 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/4] root 3531 0.0 0.0 0 0 ? 
S< 2010 0:00 [rpciod/5] root 3532 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/6] root 3533 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/7] root 3534 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/8] root 3535 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/9] root 3536 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/10] root 3537 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/11] root 3538 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/12] root 3539 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/13] root 3540 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/14] root 3541 0.0 0.0 0 0 ? S< 2010 0:00 [rpciod/15] root 3563 0.0 0.0 10160 8 ? Ss 2010 0:00 rpc.statd root 3595 0.0 0.0 55180 4 ? Ss 2010 0:00 rpc.idmapd dbus 3618 0.0 0.0 21256 28 ? Ss 2010 0:00 dbus-daemon --system root 3649 0.2 0.4 563084 18796 ? S<sl 2010 179:03 mfsmount /mnt/mfs -o rw,mfsmaster=web1.ovs.local root 3702 0.0 0.0 3800 8 ? Ss 2010 0:00 /usr/sbin/acpid 68 3715 0.0 0.0 31312 816 ? Ss 2010 3:14 hald root 3716 0.0 0.0 21692 28 ? S 2010 0:00 hald-runner 68 3726 0.0 0.0 12324 8 ? S 2010 0:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 68 3730 0.0 0.0 12324 8 ? S 2010 0:00 hald-addon-keyboard: listening on /dev/input/event0 root 3773 0.0 0.0 62608 332 ? Ss 2010 0:00 /usr/sbin/sshd ganglia 3786 0.0 0.0 24704 988 ? Ss 2010 14:26 /usr/sbin/gmond root 3843 0.0 0.0 54144 300 ? Ss 2010 1:49 /usr/libexec/postfix/master postfix 3855 0.0 0.0 54860 1060 ? S 2010 0:22 qmgr -l -t fifo -u root 3877 0.0 0.0 74828 708 ? Ss 2010 1:15 crond root 3891 1.4 1.9 326960 77704 ? S<l 2010 896:59 mfschunkserver root 4122 0.0 0.0 18732 176 ? Ss 2010 0:10 /usr/sbin/atd root 4193 0.0 0.8 129180 35984 ? Ssl 2010 11:04 /usr/bin/ruby /usr/sbin/puppetd root 4223 0.0 0.0 18416 172 ? S 2010 0:10 /usr/sbin/smartd -q never root 4227 0.0 0.0 3792 8 tty1 Ss+ 2010 0:00 /sbin/mingetty tty1 root 4230 0.0 0.0 3792 8 tty2 Ss+ 2010 0:00 /sbin/mingetty tty2 root 4231 0.0 0.0 3792 8 tty3 Ss+ 2010 0:00 /sbin/mingetty tty3 root 4233 0.0 0.0 3792 8 tty4 Ss+ 2010 0:00 /sbin/mingetty tty4 root 4234 0.0 0.0 3792 8 tty5 Ss+ 2010 0:00 /sbin/mingetty tty5 root 4236 0.0 0.0 3792 8 tty6 Ss+ 2010 0:00 /sbin/mingetty tty6 root 5596 0.0 0.0 19368 20 ? Ss 2010 0:00 DarwinStreamingServer qtss 5597 0.8 0.9 166572 37408 ? Sl 2010 523:02 DarwinStreamingServer root 8714 0.0 0.0 0 0 ? S Jan31 0:33 [pdflush] root 9914 0.0 0.0 65612 968 pts/1 R+ 21:49 0:00 ps aux root 10765 0.0 0.0 76792 1080 ? Ss Jan24 0:58 SCREEN root 10766 0.0 0.0 66212 872 pts/3 Ss Jan24 0:00 /bin/bash root 11833 0.0 0.0 63852 1060 pts/3 S+ 17:17 0:00 /bin/sh ./launch.sh root 11834 437 42.9 4126884 1733348 pts/3 Sl+ 17:17 1190:50 /usr/bin/java -Xms128m -Xmx512m -XX:+UseConcMarkSweepGC -jar /JavaCore/JavaCore.jar root 13127 4.7 1.1 110564 46876 ? Ssl 17:18 12:55 /JavaCore/fetcher.bin root 19392 0.0 0.0 90108 3336 ? Rs 20:35 0:00 sshd: root@pts/1 root 19401 0.0 0.0 66216 1640 pts/1 Ss 20:35 0:00 -bash root 20567 0.0 0.0 90108 412 ? Ss Jan16 1:58 sshd: root@pts/0 root 20569 0.0 0.0 66084 912 pts/0 Ss Jan16 0:00 -bash root 21053 0.0 0.0 63856 28 ? S Jan30 0:00 /bin/sh /usr/bin/WowzaMediaServerd /usr/local/WowzaMediaServer/bin/setenv.sh /var/run/WowzaM root 21054 2.9 10.3 2252652 418468 ? Sl Jan30 314:25 java -Xmx1200M -server -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true - root 21915 0.0 0.0 0 0 ? S Feb01 0:00 [pdflush] root 29996 0.0 0.0 76524 1004 pts/0 S+ 14:41 0:00 screen -x Any idea what could this be, or where I should look for more diagnostic information? Thanks.
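    Since the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, a high load with a mostly idle CPU usually points at processes blocked on I/O or on a network filesystem (the MooseFS mount above is an obvious suspect). A couple of generic commands to narrow this down; nothing here is specific to this box:

        # list processes stuck in uninterruptible sleep and what they are waiting on
        ps -eo state,pid,wchan:30,cmd | grep '^D'

        # watch the "b" (blocked) column and I/O activity while the load is high
        vmstat 1 10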

    Read the article

  • How to set up default schema name in JPA configuration?

    - by Roman
    I found that in hibernate config file we could set up parameter hibernate.default_schema: <hibernate-configuration> <session-factory> ... <property name="hibernate.default_schema">myschema</property> ... </session-factory> </hibernate-configuration> Now I'm using JPA and I want to do the same. Otherwise I have to add parameter schema to each @Table annotation like: @Entity @Table (name = "projectcategory", schema = "SCHEMANAME") public class Category implements Serializable { ... } As I understand this parameter should be somewhere in this part of configuration: <bean id="domainEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="JiraManager"/> <property name="dataSource" ref="domainDataSource"/> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="generateDdl" value="false"/> <property name="showSql" value="false"/> <property name="databasePlatform" value="${hibernate.dialect}"/> </bean> </property> </bean> <bean id="domainDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="${db.driver}" /> <property name="jdbcUrl" value="${datasource.url}" /> <property name="user" value="${datasource.username}" /> <property name="password" value="${datasource.password}" /> <property name="initialPoolSize" value="5"/> <property name="minPoolSize" value="5"/> <property name="maxPoolSize" value="15"/> <property name="checkoutTimeout" value="10000"/> <property name="maxStatements" value="150"/> <property name="testConnectionOnCheckin" value="true"/> <property name="idleConnectionTestPeriod" value="50"/> </bean> ... but I can't find its name in google. Any ideas?
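    One commonly suggested place for it in a setup like the one above (an assumption, not something confirmed in the question) is the jpaProperties of the LocalContainerEntityManagerFactoryBean, since Hibernate reads hibernate.default_schema from the JPA properties just as it does from hibernate.cfg.xml; the equivalent <properties> entry in persistence.xml should work too.

        <bean id="domainEntityManagerFactory"
              class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <!-- existing persistenceUnitName / dataSource / jpaVendorAdapter properties ... -->
            <property name="jpaProperties">
                <props>
                    <prop key="hibernate.default_schema">myschema</prop>
                </props>
            </property>
        </bean>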

    Read the article

  • WSDLException : An error occurred trying to resolve schema referenced at ...

    - by Stefano
    Hello i'm trying to generate a proxy class from a local wsdl file with eclipse Galileo and axis 2 1.4 on windows xp . My problem is that i get an error due to an imported schema inside the wsdl . The line tha troubles me is : <xsd:import namespace="http://www.w3.org/2005/05/xmlmime" schemaLocation="http://www.w3.org/2005/05/xmlmime"/> i've tried to run the wsdl2java following command: wsdl2java.bat -uri SOAService.wsdl -o D:\temp p test -d xmlbeans -a -s -ns2p -uw and i get the following exception : Exception in thread "main" org.apache.axis2.wsdl.codegen.CodeGenerationException : Error parsing WSDL at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.(CodeGenerat ionEngine.java:156) at org.apache.axis2.wsdl.WSDL2Code.main(WSDL2Code.java:35) at org.apache.axis2.wsdl.WSDL2Java.main(WSDL2Java.java:24) Caused by: javax.wsdl.WSDLException: WSDLException (at /wsdl:definitions/wsdl:ty pes/xsd:schema): faultCode=OTHER_ERROR: An error occurred trying to resolve sche ma referenced at 'http://www.w3.org/2005/05/xmlmime', relative to 'file:/D:/Prog rammi/axis2-1.4/bin/SOAService.wsdl'.: java.net.ConnectException: Connection tim ed out: connect at com.ibm.wsdl.xml.WSDLReaderImpl.parseSchema(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.parseSchema(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.parseTypes(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.parseDefinitions(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source) at com.ibm.wsdl.xml.WSDLReaderImpl.readWSDL(Unknown Source) at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.readInTheWSDLFile( CodeGenerationEngine.java:288) at org.apache.axis2.wsdl.codegen.CodeGenerationEngine.(CodeGenerat ionEngine.java:111) ... 2 more Caused by: java.net.ConnectException: Connection timed out: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.Socket.connect(Socket.java:520) at java.net.Socket.connect(Socket.java:470) at sun.net.NetworkClient.doConnect(NetworkClient.java:157) at sun.net.www.http.HttpClient.openServer(HttpClient.java:388) at sun.net.www.http.HttpClient.openServer(HttpClient.java:523) at sun.net.www.http.HttpClient.(HttpClient.java:231) at sun.net.www.http.HttpClient.New(HttpClient.java:304) at sun.net.www.http.HttpClient.New(HttpClient.java:321) at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLC onnection.java:813) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConne ction.java:765) at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection .java:690) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLCon nection.java:934) at java.net.URL.openStream(URL.java:1007) at com.ibm.wsdl.util.StringUtils.getContentAsInputStream(Unknown Source) i suspect it's due to the system proxy which doesn't let retrieve the xsd to the wsdl2java tool. In fact i can download the file from the browser without problems. There's an option to specify a proxy to wsdl2java or someone has resolved this issue ? 
For the moment I've downloaded the XSD, added it to the project and changed the WSDL to reference the local copy of the file (instead of the remote one), but I'd prefer to avoid this, because the file is a third-party service WSDL. Thank you in advance for any hint. Stefano
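    On the proxy question: the schema download is performed by the JVM's own URL handling, which honours the standard http.proxyHost / http.proxyPort system properties. A sketch of passing them by invoking the tool's main class directly (the class name comes from the stack trace above; the proxy host, port and AXIS2_HOME layout are placeholders, and wsdl2java.bat can also be edited to add the same -D options):

        rem the classpath wildcard needs Java 6+; list the Axis2 jars explicitly on older JVMs
        java -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 ^
             -cp "%AXIS2_HOME%\lib\*" org.apache.axis2.wsdl.WSDL2Java ^
             -uri SOAService.wsdl -o D:\temp -p test -d xmlbeans -a -s -ns2p -uw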

    Read the article

  • In a star schema, are foreign key constraints between facts and dimensions necessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering whether it is necessary to have foreign key constraints between facts and dimensions. Are there any major downsides to not having them? I'm currently working with a relational star schema. In traditional applications I'm used to having them, but I started to wonder whether they are needed in this case. The environment is SQL Server 2005.
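    For reference, a declared fact-to-dimension constraint in SQL Server looks like the sketch below (the table and column names are made up for illustration). SQL Server also accepts WITH NOCHECK, which documents the relationship without validating rows already in the table; that is a common compromise when the constraint would slow down bulk loads.

        -- illustrative names only
        ALTER TABLE dbo.FactSales WITH CHECK
            ADD CONSTRAINT FK_FactSales_DimDate
            FOREIGN KEY (DateKey) REFERENCES dbo.DimDate (DateKey);

        -- declared, but existing rows are not validated
        ALTER TABLE dbo.FactSales WITH NOCHECK
            ADD CONSTRAINT FK_FactSales_DimProduct
            FOREIGN KEY (ProductKey) REFERENCES dbo.DimProduct (ProductKey);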

    Read the article

  • Does an XML schema or DTD exist for PerformancePoint's Xml Metadata?

    - by Athens
    I wrote several XQuery statements to shred existing KPI and Dashboard metadata, but I would like to validate my queries by reviewing the corresponding XML Schema or DTD, if one exists. I searched online but could not find what I was looking for. The metadata is stored in PerformancePoint's back-end SQL Server database, in the SerializedXml column of the dbo.FCObjects table.
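    If no published schema turns up, one pragmatic fallback is to pull a few of the stored documents and compare them against what the XQuery statements expect. A minimal sketch follows; the table and column names come from the question, but the storage type of SerializedXml is an assumption, so the double cast may need adjusting.

        SELECT TOP (10)
               CAST(CAST(SerializedXml AS nvarchar(max)) AS xml) AS SerializedXmlDoc
        FROM   dbo.FCObjects;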

    Read the article

  • The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk

    - by simonsabin
    Are you trying to build a SQL Server database project and getting the error: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk. We had this recently when trying to build one SSDT solution but not when building another. Checking the build agent, the error was correct: that file didn't exist...(read more)

    Read the article

  • What is the "rfcTextOfMessage" value? : Google Apps Email Migration API Developer's Guide

    - by Pari
    I am using the Google API to test the code below: MailItemService mailItemService = new MailItemService(domain, "Sample Migration Application"); mailItemService.setUserCredentials(userEmail, password); MailItemEntry entry = new MailItemEntry(); entry.Rfc822Msg = new Rfc822MsgElement(rfcTextOfMessage); Referring to this link, I used the sample value given for "rfcTextOfMessage". But how do I change the To, Send and Date values for different mails? Is there any way to get this format? Note: I am using C#.
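    The rfcTextOfMessage argument is simply the raw RFC 822 text of the message, so the To, sender and Date are changed by editing the corresponding header lines before the blank line that separates the headers from the body. A minimal sketch, with every address, date and subject a placeholder:

        string rfcTextOfMessage =
            "From: sender@example.com\r\n" +
            "To: recipient@example.com\r\n" +
            "Date: Fri, 21 Nov 1997 09:55:06 -0600\r\n" +
            "Subject: Test migration\r\n" +
            "\r\n" +
            "This is the message body.";

        MailItemEntry entry = new MailItemEntry();
        entry.Rfc822Msg = new Rfc822MsgElement(rfcTextOfMessage);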

    Read the article

  • How do I use a Rails ActiveRecord migration to insert a primary key into a MySQL database?

    - by Terry Lorber
    I need to create an AR migration for a table of image files. The images are being checked into the source tree, and should act like attachment_fu files. That being the case, I'm creating a hierarchy for them under /public/system. Because of the way attachment_fu generates links, I need to use the directory naming convention to insert primary key values. How do I override the auto-increment in MySQL as well as any Rails magic so that I can do something like this: image = Image.create(:id => 42, :filename => "foo.jpg") image.id #=> 42
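    One way to sidestep both MySQL's auto-increment and any Rails-side protection of the id attribute is to issue the INSERT yourself from the migration. A minimal sketch; the table and columns mirror the question and everything else is assumed:

        class AddSeedImages < ActiveRecord::Migration
          def self.up
            # explicit id bypasses auto-increment; MySQL keeps AUTO_INCREMENT above it afterwards
            execute "INSERT INTO images (id, filename) VALUES (42, 'foo.jpg')"
          end

          def self.down
            execute "DELETE FROM images WHERE id = 42"
          end
        end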

    Read the article

  • Configuring OpenLDAP as an Active Directory Proxy

    - by vadensumbra
    We try to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using a LDAP-server as a proxy, so that only 1 server in the DMZ has to connect to the LAN where the AD-server is placed). With some googling it was no problem to configure the slapd (see slapd.conf below) and it seemed to work when using the ldapsearch tool, so we tried to use it in apache2 htaccess to authenticate the user over the LDAP-proxy. And here comes the problem: We found out the username in the AD is stored in the attribute 'sAMAccountName' so we configured it in .htaccess (see below) but the login didn't work. In the syslog we found out that the filter for the ldapsearch was not (like it should be) '(&(objectClass=*)(sAMAccountName=authtest01))' but '(&(objectClass=*)(?=undefined))' which we found out is slapd's way to show that the attribute do not exists or the value is syntactically wrong for this attribute. We thought of a missing schema and found the microsoft.schema (and the .std / .ext ones of it) and tried to include them in the slapd.conf. Which does not work. We found no working schemata so we just picked out the part about the sAMAccountName and build a microsoft.minimal.schema (see below) that we included. Now we get the more precise log in the syslog: Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))" Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text= Using our Apache htaccess directly with the AD via LDAP works though. Anyone got a working setup? Thanks for any help in advance: slapd.conf: allow bind_v2 include /etc/ldap/schema/core.schema ... include /etc/ldap/schema/microsoft.minimal.schema ... backend ldap database ldap suffix "ou=xxx,dc=int,dc=xxx,dc=de" uri "ldap://80.156.177.161:389" acl-bind bindmethod=simple binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de" credentials=xxxxx .htaccess: AuthBasicProvider ldap AuthType basic AuthName "AuthTest" AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub" AuthzLDAPAuthoritative On AuthLDAPGroupAttribute member AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de AuthLDAPBindPassword test123 Require valid-user microsoft.minimal.schema: attributetype ( 1.2.840.113556.1.4.221 NAME 'sAMAccountName' SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )
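    One detail worth checking, offered as an assumption rather than something confirmed in the question: the hand-written attributetype declares no matching rules, and slapd cannot evaluate an equality filter such as (sAMAccountName=authtest01) against an attribute type that defines no EQUALITY rule, which is a common cause of the "undefined" filter shown in the log. A frequently used definition looks like this:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            EQUALITY caseIgnoreMatch
            SUBSTR caseIgnoreSubstringsMatch
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )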

    Read the article

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an openldap server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used): ########## # Basics # ########## include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema pidfile /var/run/slapd/slapd.pid argsfile /var/run/slapd/slapd.args loglevel none modulepath /usr/lib/ldap # modulepath /usr/local/libexec/openldap moduleload back_bdb.la database config #rootdn "cn=admin,cn=config" rootpw secret database bdb suffix "dc=example,dc=com" rootdn "cn=manager,dc=example,dc=com" rootpw secret directory /usr/local/var/openldap-data ######## # ACLs # ######## access to attrs=userPassword by anonymous auth by self write by * none access to * by self write by * none When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2). backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2) slap_startup failed (test would succeed using the -u switch) Using the -u switch it works, of course. But that merely creates the configuration. It doesn't resolve the underlying problem: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u config file testing succeeded Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the dbd file wasn't created: root@server:/etc/ldap# ls -al /usr/local/var/openldap-data total 4328 drwxr-sr-x 2 openldap openldap 4096 Mar 1 15:23 . drwxr-sr-x 4 root staff 4096 Mar 1 13:50 .. -rw-r--r-- 1 openldap openldap 3080 Mar 1 14:35 DB_CONFIG -rw------- 1 openldap openldap 24576 Mar 1 15:23 __db.001 -rw------- 1 openldap openldap 843776 Mar 1 15:23 __db.002 -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003 -rw------- 1 openldap openldap 655360 Mar 1 14:35 __db.004 -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005 -rw------- 1 openldap openldap 32768 Mar 1 15:23 __db.006 -rw-r--r-- 1 openldap openldap 2048 Mar 1 15:23 alock (note that, because I'm doing this as root, I had to also change ownership of some of the files created by slaptest) Finally, I can start the slapd service, but it dies in the attempt (text from syslog): Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config: Mar 1 15:06:23 server slapd[21160]: slapd stopped. Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy. I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye. All my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!

    Read the article

  • Export MS SQL database as *.dbschema

    - by jjczopek
    We have a production database and a Visual Studio 2010 database project. We had to make some changes to the database schema. Unfortunately we don't have a previous schema file for the production database. Is there a way to export the existing database schema as a *.dbschema file, preferably from Microsoft SQL Server Management Studio (2008 R2)? That way we could run a schema comparison and generate an update script.
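    One route that is often mentioned for the Visual Studio 2010 database tooling is importing the live database into a .dbschema model with vsdbcmd.exe, which ships with Visual Studio rather than with Management Studio. The sketch below is from memory; the switch names and the connection string are assumptions to verify, not a confirmed recipe.

        vsdbcmd.exe /a:Import /dsp:Sql /model:Production.dbschema /cs:"Data Source=PRODSERVER;Initial Catalog=ProductionDb;Integrated Security=True"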

    Read the article

  • Database Schema for survey polling application with a default choice.

    - by user156814
    I have a survey application where users can create surveys and give choices for every survey. Other users can choose their answers for the survey, and then polls are taken to get the results of the survey. I already have the database schema for this:
    Questions: id, user_id, category_id, question_text, date_started
    Answers: id, user_id, question_id, choice_id, explanation, date_added
    Choices: id, question_id, choice_text
    As of now, users can choose their own choice answers for their surveys... but I want to be able to add a default "I don't care" or "I don't know" choice to every survey, for people who simply don't care enough about the topic to take sides or who can't choose. So let's say there's a survey that asks who was a better president: George W. Bush, Bill Clinton, Ronald Reagan, or Richard Nixon... I want to be able to add a default "I don't care" option. I was thinking of just adding that extra choice EVERY TIME a user creates a survey, but then I wouldn't have much control over the text of that choice after the survey has been created, and I want to know if there's a better way to do this, like creating another table or something. Thanks
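    A sketch of one possible design, using the table names above (MySQL-flavoured syntax; this is one option among several, not the definitive answer): let a choice row belong to no particular question and treat that row as the shared default, so its text stays editable in a single place.

        -- allow question-less choices, then create one shared default row
        ALTER TABLE Choices MODIFY question_id INT NULL;
        INSERT INTO Choices (question_id, choice_text) VALUES (NULL, 'I don''t care');

        -- when listing a question's choices, include the shared default
        SELECT id, choice_text
        FROM Choices
        WHERE question_id = 42 OR question_id IS NULL;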

    Read the article

  • Does this schema sound better suited for a document-oriented data store or relational?

    - by Blaine LaFreniere
    Disclaimer: let me know if this question is better suited for serverfault.com. I want to store information on music, specifically: genres, artists, albums, songs. This information will be used in a web application, and I want people to be able to see all of the songs associated with an album, albums associated with an artist, and artists associated with a genre. I'm currently using MySQL, but before I make a decision to switch I want to know: How easy is horizontal scaling? Is it easier to manage than an SQL-based solution? Would the data above be too hard to model schema-free? When I think of associations, I immediately think RDBMSs; can data be stored in something like CouchDB but still have some kind of association as stated above?
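    For what it is worth, a common way to model this in a document store is to embed the tightly coupled children (an album's songs) and keep the looser relationships as plain id references that the application, or a view, resolves. A sketch of one such document, with every value invented for illustration:

        {
          "_id": "album:dark-side-of-the-moon",
          "type": "album",
          "artist_id": "artist:pink-floyd",
          "genre": "progressive rock",
          "songs": [
            { "track": 1, "title": "Speak to Me" },
            { "track": 2, "title": "Breathe" }
          ]
        }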

    Read the article

  • Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I've understood, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kind of problems? To clearify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an adminstrator or not. However, after I while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather create the user types in the migration (also considered bad practice). Here comes some fictional code representing the situation: class CreateUserTypes < ActiveRecord::Migration def self.up create_table :user_types do |t| t.string :name, :nil => false, :unique => true end #Create basic types (can not put in seed, because of future migration dependency) UserType.create!(:name => "BASIC") UserType.create!(:name => "MODERATOR") UserType.create!(:name => "ADMINISTRATOR") end def self.down drop_table :user_types end end class AddTypeIdToUsers < ActiveRecord::Migration def self.up add_column :users, :type_id, :integer #Determine type via the admin flag basic = UserType.find_by_name("BASIC") admin = UserType.find_by_name("ADMINISTRATOR") User.all.each {|u| u.update_attribute(:type_id, (u.admin?) ? admin.id : basic.id)} #Remove the admin flag remove_column :users, :admin #Add foreign key execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)" end def self.down #Re-add the admin flag add_column :users, :admin, :boolean, :default => false #Reset the admin flag (this is the problematic update code) admin = UserType.find_by_name("ADMINISTRATOR") execute "update users set admin=true where type_id=#{admin.id}" #Remove foreign key constraint execute "alter table users drop foreign key fk_user_type_id" #Drop the type_id column remove_column :users, :type_id end end As you can see there are two problematic parts. First the row creation part in the first model, which is necessary if I would like to run all migrations in a row, then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?
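    One mitigation that often comes up for this situation (an assumption here, not something taken from the question) is to give the migration its own throwaway ActiveRecord classes, so that the data shuffling neither depends on the application's models nor breaks when those models change later. A sketch along the lines of the second migration:

        class AddTypeIdToUsers < ActiveRecord::Migration
          # migration-local models: only what this migration needs
          class UserType < ActiveRecord::Base; end
          class User < ActiveRecord::Base; end

          def self.up
            add_column :users, :type_id, :integer
            User.reset_column_information
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }
            remove_column :users, :admin
          end
        end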

    Read the article

  • Trying to drop all tables from my schema with no rows?

    - by Vineet
    I am trying to drop all tables with no rows in my schema, but when I execute this code I get an error. This is the code: create or replace procedure tester IS v_count NUMBER; CURSOR emp_cur IS select table_name from user_tables; BEGIN FOR emp_rec_cur IN emp_cur LOOP EXECUTE IMMEDIATE 'select count(*) from '|| emp_rec_cur.table_name INTO v_count ; IF v_count =0 THEN EXECUTE IMMEDIATE 'DROP TABLE '|| emp_rec_cur.table_name; END IF; END LOOP; END tester; This is the error: ERROR at line 1: ORA-29913: error in executing ODCIEXTTABLEOPEN callout ORA-29400: data cartridge error KUP-00554: error encountered while parsing access parameters KUP-01005: syntax error: found "identifier": expecting one of: "badfile, byteordermark, characterset, data, delimited, discardfile, exit, fields, fixed, load, logfile, nodiscardfile, nobadfile, nologfile, date_cache, processing, readsize, string, skip, variable" KUP-01008: the bad identifier was: DELIMETED KUP-01007: at line 1 column 9 ORA-06512: at "SYS.ORACLE_LOADER", line 14 ORA-06512: at line 1 ORA-06512: at "SCOTT.TESTER", line 9 ORA-06512: at line 1
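    The KUP-* and ORA-29913 lines suggest that the count(*) is being executed against an external table whose access parameters contain a typo (DELIMETED), so the failure is in opening that external table rather than in the PL/SQL itself. Assuming that diagnosis, a sketch that simply skips external tables looks like this (fixing or dropping the broken external table is the other obvious route):

        create or replace procedure tester IS
          v_count NUMBER;
          CURSOR emp_cur IS
            select table_name
            from   user_tables
            where  table_name not in (select table_name from user_external_tables);
        BEGIN
          FOR emp_rec_cur IN emp_cur LOOP
            EXECUTE IMMEDIATE 'select count(*) from "' || emp_rec_cur.table_name || '"' INTO v_count;
            IF v_count = 0 THEN
              EXECUTE IMMEDIATE 'DROP TABLE "' || emp_rec_cur.table_name || '"';
            END IF;
          END LOOP;
        END tester;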

    Read the article

  • Where to put default-servlet-handler in Spring MVC configuration

    - by gigadot
    In my web.xml, the default servlet mapping, i.e. /, is mapped to Spring dispatcher. In my Spring dispatcher configuration, I have DefaultAnnotationHandlerMapping, ControllerClassNameHandlerMapping and AnnotationMethodHandlerAdapter which allows me to map url to controllers either by its class name or its @Requestmapping annotation. However, there are some static resources under the web root which I also want spring dispatcher to serve using default servlet. According to Spring documentation, this can be done using <mvc:default-servlet-handler/> tag. In the configuration below, there are 4 candidate locations that I marked which are possible to insert this tag. Inserting the tag in different location causes the dispatcher to behave differently as following : Case 1 : If I insert it at location 1, the dispatcher will no longer be able to handle mapping by the @RequestMapping and controller class name but it will be serving the static content normally. Cas 2, 3 : It will be able to handle mapping by the @RequestMapping and controller class name as well as serving the static content if other mapping cannot be done successfully. Case 4 : It will not be able to serve the static contents. Therefore, Case 2 and 3 are desirable .According to Spring documentation, this tag configures a handler which precedence order is given to lowest so why the position matters? and Which is the best position to put this tag? <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd"> <context:annotation-config/> <context:component-scan base-package="webapp.controller"/> <!-- Location 1 --> <!-- Enable annotation-based controllers using @Controller annotations --> <bean id="annotationUrlMapping" class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/> <!-- Location 2 --> <bean id="controllerClassNameHandlerMapping" class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping"/> <!-- Location 3 --> <bean id="annotationMethodHandlerAdapter" class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/> <!-- Location 4 --> <mvc:default-servlet-handler/> <!-- All views are JSPs loaded from /WEB-INF/jsp --> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> </bean> </beans>
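    A sketch of one way to make the outcome independent of position (not taken from the question): give the explicitly declared HandlerMappings explicit order values and let <mvc:default-servlet-handler/> keep its built-in lowest precedence, so it only sees requests that nothing else claimed.

        <bean id="annotationUrlMapping"
              class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping">
            <property name="order" value="0"/>
        </bean>
        <bean id="controllerClassNameHandlerMapping"
              class="org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping">
            <property name="order" value="1"/>
        </bean>
        <mvc:default-servlet-handler/>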

    Read the article

  • dbms_xmlschema fails to validate with complexType

    - by Andrew
    Preface: This works on one Oracle 11gR1 (Solaris 64) database and not on a second and we can't figure out the difference between the two databases. Somehow the complexType causes the validation to fail with this error: ORA-31154: invalid XML document ORA-19202: Error occurred in XML processing LSX-00200: element "shiporder" not empty ORA-06512: at "SYS.XMLTYPE", line 354 ORA-06512: at line 13 But the schema is valid (passes this online test: http://www.xmlme.com/Validator.aspx) -- Cleanup any existing schema begin dbms_xmlschema.deleteschema('shiporder.xsd',dbms_xmlschema.DELETE_CASCADE); end; -- Define the problem schema (adapted from http://www.w3schools.com/schema/schema_example.asp) begin dbms_xmlschema.registerSchema('shiporder.xsd','<?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="shiporder"> <xs:complexType> <xs:sequence> <xs:element name="orderperson" type="xs:string"/> </xs:sequence> </xs:complexType> </xs:element> </xs:schema>',owner=>'SCOTT'); end; -- Attempt to validate declare bbb xmltype; begin bbb := XMLType('<?xml version="1.0" encoding="ISO-8859-1"?> <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="shiporder.xsd"> <orderperson>John Smith</orderperson> </shiporder>'); XMLType.schemaValidate(bbb); end; Now if I gut the schema definition and leave only a string in the XML then the validation passes: begin dbms_xmlschema.deleteschema('shiporder.xsd',dbms_xmlschema.DELETE_CASCADE); end; begin dbms_xmlschema.registerSchema('shiporder.xsd','<?xml version="1.0" encoding="ISO-8859-1" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="shiporder" type="xs:string"/> </xs:schema>',owner=>'SCOTT'); end; DECLARE xml XMLTYPE; BEGIN xml := XMLTYPE('<?xml version="1.0" encoding="ISO-8859-1"?> <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="shiporder.xsd"> John Smith </shiporder>'); XMLTYPE.schemaValidate(xml); END;
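    A speculative thing to try, offered only as a guess at why the two databases behave differently: bind the instance document to the registered schema explicitly instead of relying on the noNamespaceSchemaLocation resolution, so that both databases validate against the same registered copy.

        declare
          bbb xmltype;
        begin
          bbb := XMLType('<shiporder><orderperson>John Smith</orderperson></shiporder>');
          bbb := bbb.createSchemaBasedXML('shiporder.xsd');
          XMLType.schemaValidate(bbb);
        end;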

    Read the article

  • Difference in DocumentBuilder.parse when using JRE 1.5 and JDK 1.6

    - by dhiller
    Recently at last we have switched our projects to Java 1.6. When executing the tests I found out that using 1.6 a SAXParseException is not thrown which has been thrown using 1.5. Below is my test code to demonstrate the problem. import java.io.StringReader; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.stream.StreamSource; import javax.xml.validation.SchemaFactory; import org.junit.Test; import org.xml.sax.InputSource; import org.xml.sax.SAXParseException; /** * Test class to demonstrate the difference between JDK 1.5 to JDK 1.6. * * Seen on Linux: * * <pre> * #java version "1.6.0_18" * Java(TM) SE Runtime Environment (build 1.6.0_18-b07) * Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode) * </pre> * * Seen on OSX: * * <pre> * java version "1.6.0_17" * Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025) * Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode) * </pre> * * @author dhiller (creator) * @author $Author$ (last editor) * @version $Revision$ * @since 12.03.2010 11:32:31 */ public class TestXMLValidation { /** * Tests the schema validation of an XML against a simple schema. * * @throws Exception * Falls ein Fehler auftritt * @throws junit.framework.AssertionFailedError * Falls eine Unit-Test-Pruefung fehlschlaegt */ @Test(expected = SAXParseException.class) public void testValidate() throws Exception { final StreamSource schema = new StreamSource( new StringReader( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" " + "elementFormDefault=\"qualified\" xmlns:xsd=\"undefined\">" + "<xs:element name=\"Test\"/>" + "</xs:schema>" ) ); final String xml = "<Test42/>"; final DocumentBuilderFactory newFactory = DocumentBuilderFactory.newInstance(); newFactory.setSchema( SchemaFactory.newInstance( "http://www.w3.org/2001/XMLSchema" ).newSchema( schema ) ); final DocumentBuilder documentBuilder = newFactory.newDocumentBuilder(); documentBuilder.parse( new InputSource( new StringReader( xml ) ) ); } } When using a JVM 1.5 the test passes, on 1.6 it fails with "Expected exception SAXParseException". The Javadoc of the DocumentBuilderFactory.setSchema(Schema) Method says: When errors are found by the validator, the parser is responsible to report them to the user-specified ErrorHandler (or if the error handler is not set, ignore them or throw them), just like any other errors found by the parser itself. In other words, if the user-specified ErrorHandler is set, it must receive those errors, and if not, they must be treated according to the implementation specific default error handling rules. The Javadoc of the DocumentBuilder.parse(InputSource) method says: BTW: I tried setting an error handler via setErrorHandler, but there still is no exception. Now my question: What has changed to 1.6 that prevents the schema validation to throw a SAXParseException? Is it related to the schema or to the xml that I tried to parse?
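    For completeness, the user-specified ErrorHandler that the quoted Javadoc refers to looks like the sketch below; the question notes that setting one did not change the behaviour on 1.6, so this is shown for reference rather than as a confirmed fix.

        // assumes imports of org.xml.sax.ErrorHandler and org.xml.sax.SAXException
        documentBuilder.setErrorHandler( new ErrorHandler() {
          public void warning( SAXParseException e ) throws SAXException { throw e; }
          public void error( SAXParseException e ) throws SAXException { throw e; }
          public void fatalError( SAXParseException e ) throws SAXException { throw e; }
        } );
        documentBuilder.parse( new InputSource( new StringReader( xml ) ) );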

    Read the article
