Search Results

Search found 5153 results on 207 pages for 'unique ptr'.

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated - for instance when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations. For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could make the unique property a parameter that is specified in the project's settings: class This(models.Model): other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS) Or, if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs: for related in settings.MODELS_RELATED_TO_OTHER: model_name = '%s_Other' % related; globals()[model_name] = type(model_name, (models.Model,), {'me': models.ForeignKey(find_model_class(related)), 'other': models.ForeignKey(Other)})  # plus some other properties all intersection tables must have. Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration). Is this a good design? Are there better ways to accomplish the same thing, or drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app model is being designed).
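
    To make the intent concrete, here is a minimal sketch of the first idea under the question's own assumptions: OTHER_TO_THIS is a hypothetical project setting, and ForeignKey(unique=...) reflects the syncdb-era Django API the question uses.

    ```python
    # Minimal sketch of a settings-parameterized reusable app.
    # OTHER_TO_THIS is a hypothetical setting name; read it with a default so the
    # app still works in projects that never define it.
    from django.conf import settings
    from django.db import models


    class Other(models.Model):
        name = models.CharField(max_length=100)


    class This(models.Model):
        # unique=True makes the relation effectively one-to-one;
        # unique=False (the default) keeps it one-to-many.
        # The value is fixed when syncdb first creates the tables, as the question notes.
        other = models.ForeignKey(Other, unique=getattr(settings, "OTHER_TO_THIS", False))
    ```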

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, but it annoys me), and I do not want to learn MS/T-SQL (it only allows one NULL row if the column is unique). I am thinking about learning a new database system. My requirements are:
    - Must allow multiple connections at once (read and write).
    - All the data I choose must be ACID compliant.
    - Performance should be good. I have a 17 GB table in one project; it should perform well on reads and transactional writes. With MySQL it took hours to restore, and there were no foreign keys on that specific table. It only finished within a workday because I found a suggestion to adjust a setting (I think it was key_buffer), and it still took hours.
    - Unique columns that allow more than one row to be NULL. I shouldn't have to say it, but dammit MS.
    - Lets me make ongoing backups, something like binary logs: relatively small amounts of data I can grab and apply to my local db to keep it in sync with the one on the server.
    - Table joins. I would rather not write a bunch of queries to simulate a join.
    What I would like but is not required:
    - Foreign keys. This may be a requirement later.
    - Open source.
    - Fair tool support, so I can measure queries, easily back up/restore, etc.
    - A .NET and C (or C++) interface. (I saw one that uses raw TCP with JSON, which was OK-ish.)
    - Good subquery support. Once I was working with an older version of MySQL (I believe < 5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries - or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset.
    What DB system should I learn?
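
    As a quick sanity check of the "unique column with several NULLs" requirement, the sketch below uses Python's built-in sqlite3 module (SQLite being the asker's favorite) against an in-memory database; SQLite, PostgreSQL and MySQL all behave this way, while SQL Server's plain unique constraint historically did not.

    ```python
    # Demonstrates that a UNIQUE column can hold several NULLs (SQLite shown here).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, code TEXT UNIQUE)")

    # Multiple NULLs are accepted because NULL is never equal to NULL for uniqueness.
    conn.execute("INSERT INTO t (code) VALUES (NULL)")
    conn.execute("INSERT INTO t (code) VALUES (NULL)")

    # A duplicate non-NULL value is still rejected.
    conn.execute("INSERT INTO t (code) VALUES ('abc')")
    try:
        conn.execute("INSERT INTO t (code) VALUES ('abc')")
    except sqlite3.IntegrityError as err:
        print("duplicate rejected:", err)

    print(conn.execute("SELECT COUNT(*) FROM t WHERE code IS NULL").fetchone()[0])  # 2
    ```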

  • How to highlight non-rectangular hotspots?

    - by HuseyinUslu
    My question is closely related to "Creating non-rectangular hotspots and detecting clicks". Once again, I have irregular hot-spots (think the game Risk). We can detect clicks on these hot-spots easily using color-key mapping as discussed in the question above, which I have no problem implementing (it is also covered here in detail). The problem is highlighting these irregular hotspots. Let me explain a bit more: the color-key mapping guide above uses an image as a world map; the author then color-maps the imaginary countries, so we can detect which country the pointer is over. In the same article the author mentions outlining countries on mouse-over, but to get that effect he creates a unique border asset for each country. For the game I'm working on I'm using the same color-key mapping idea to detect hot-spots, but I don't like that way of highlighting them. Coloring all the hot-spots is already a lot of work for me - I have 25+ hot-spots on each map - and needing a unique border/highlight asset for each of those hot-spots doesn't sound right. Does anyone have a better idea or suggestion for highlighting these hot-spots?
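
    One hedged alternative (a sketch, not tied to any particular engine): since the color-key map already encodes every hotspot's exact shape, the highlight overlay can be generated at runtime from that same map instead of shipping a hand-made border asset per hotspot. Pillow and NumPy are assumed here purely for illustration.

    ```python
    # Hedged sketch (not the asker's engine): derive a highlight overlay at runtime
    # from the same color-key map used for click detection, instead of shipping a
    # hand-made border/highlight asset per hotspot. Requires Pillow and NumPy.
    import numpy as np
    from PIL import Image


    def build_highlight(colorkey_path, key_rgb, tint=(255, 255, 0, 96)):
        """Return an RGBA overlay that tints every pixel whose color key matches key_rgb."""
        key_map = np.array(Image.open(colorkey_path).convert("RGB"))
        mask = np.all(key_map == np.array(key_rgb, dtype=np.uint8), axis=-1)

        overlay = np.zeros((*mask.shape, 4), dtype=np.uint8)
        overlay[mask] = tint  # semi-transparent tint over the hotspot only
        return Image.fromarray(overlay, "RGBA")


    # Usage (paths/colors are placeholders): composite the overlay over the visible
    # map whenever the mouse enters that hotspot.
    # highlight = build_highlight("colorkey_map.png", key_rgb=(200, 30, 30))
    # framed = Image.alpha_composite(world_map.convert("RGBA"), highlight)
    ```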

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest to compress/decompress files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, a code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) to, ultimately, find a byte. In the Wikipedia image, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that. But what if the following happens: the file to be compressed contains only one distinct byte, say 1101101, and that byte appears 1000 times. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one distinct byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write. Which brings up this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (Using Huffman coding, due to the school assignment's rules.) This tutorial seems to explain prefix codes a bit better: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but it doesn't address this issue either.
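
    A common convention for this degenerate case (an assumption about one workable approach, not something mandated by Huffman coding itself) is to force the lone symbol to get a one-bit code, so the compressed stream still contains something to decode; the header then only needs the byte and its count. A sketch in Python rather than the assignment's Java:

    ```python
    # Sketch of handling the degenerate "only one distinct byte" case by assigning
    # the arbitrary one-bit code "0" to that byte.
    from collections import Counter


    def prefix_codes(data: bytes) -> dict:
        freq = Counter(data)
        if len(freq) == 1:
            # Degenerate tree: one node, no left/right path. Give the lone byte the
            # code "0"; the decoder only needs the byte value and its count.
            only_byte = next(iter(freq))
            return {only_byte: "0"}
        # ... normal Huffman tree construction for the non-degenerate case ...
        raise NotImplementedError("full tree construction omitted in this sketch")


    codes = prefix_codes(b"m" * 1000)
    print(codes)  # {109: '0'}
    compressed_bits = "".join(codes[b] for b in b"m" * 1000)
    print(len(compressed_bits))  # 1000 bits instead of 8000
    ```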

  • Linux - Only first virtual interface can ping external gateway

    - by husvar
    I created 3 virtual interfaces with different mac addresses all linked to the same physical interface. I see that they successfully arp for the gw and they can ping (the request is coming in the packet capture in wireshark). However the ping utility does not count the responses. Does anyone knows the issue? I am running Ubuntu 14.04 in a VmWare. root@ubuntu:~# ip link sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip addr sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:febc:fc8b/64 scope link valid_lft forever preferred_lft forever root@ubuntu:~# ip route sh root@ubuntu:~# ip link add link eth0 eth0.1 addr 00:00:00:00:00:11 type macvlan root@ubuntu:~# ip link add link eth0 eth0.2 addr 00:00:00:00:00:22 type macvlan root@ubuntu:~# ip link add link eth0 eth0.3 addr 00:00:00:00:00:33 type macvlan root@ubuntu:~# ip -4 link sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff 18: eth0.1@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:00:00:00:00:11 brd ff:ff:ff:ff:ff:ff 19: eth0.2@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:00:00:00:00:22 brd ff:ff:ff:ff:ff:ff 20: eth0.3@eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default link/ether 00:00:00:00:00:33 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip -4 addr sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever root@ubuntu:~# ip -4 route sh root@ubuntu:~# dhclient -v eth0.1 Internet Systems Consortium DHCP Client 4.2.4 Copyright 2004-2012 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/eth0.1/00:00:00:00:00:11 Sending on LPF/eth0.1/00:00:00:00:00:11 Sending on Socket/fallback DHCPDISCOVER on eth0.1 to 255.255.255.255 port 67 interval 3 (xid=0x568eac05) DHCPREQUEST of 192.168.1.145 on eth0.1 to 255.255.255.255 port 67 (xid=0x568eac05) DHCPOFFER of 192.168.1.145 from 192.168.1.254 DHCPACK of 192.168.1.145 from 192.168.1.254 bound to 192.168.1.145 -- renewal in 1473 seconds. root@ubuntu:~# dhclient -v eth0.2 Internet Systems Consortium DHCP Client 4.2.4 Copyright 2004-2012 Internet Systems Consortium. All rights reserved. 
For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/eth0.2/00:00:00:00:00:22 Sending on LPF/eth0.2/00:00:00:00:00:22 Sending on Socket/fallback DHCPDISCOVER on eth0.2 to 255.255.255.255 port 67 interval 3 (xid=0x21e3114e) DHCPREQUEST of 192.168.1.146 on eth0.2 to 255.255.255.255 port 67 (xid=0x21e3114e) DHCPOFFER of 192.168.1.146 from 192.168.1.254 DHCPACK of 192.168.1.146 from 192.168.1.254 bound to 192.168.1.146 -- renewal in 1366 seconds. root@ubuntu:~# dhclient -v eth0.3 Internet Systems Consortium DHCP Client 4.2.4 Copyright 2004-2012 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Listening on LPF/eth0.3/00:00:00:00:00:33 Sending on LPF/eth0.3/00:00:00:00:00:33 Sending on Socket/fallback DHCPDISCOVER on eth0.3 to 255.255.255.255 port 67 interval 3 (xid=0x11dc5f03) DHCPREQUEST of 192.168.1.147 on eth0.3 to 255.255.255.255 port 67 (xid=0x11dc5f03) DHCPOFFER of 192.168.1.147 from 192.168.1.254 DHCPACK of 192.168.1.147 from 192.168.1.254 bound to 192.168.1.147 -- renewal in 1657 seconds. root@ubuntu:~# ip -4 link sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 00:0c:29:bc:fc:8b brd ff:ff:ff:ff:ff:ff 18: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether 00:00:00:00:00:11 brd ff:ff:ff:ff:ff:ff 19: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether 00:00:00:00:00:22 brd ff:ff:ff:ff:ff:ff 20: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default link/ether 00:00:00:00:00:33 brd ff:ff:ff:ff:ff:ff root@ubuntu:~# ip -4 addr sh 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 18: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default inet 192.168.1.145/24 brd 192.168.1.255 scope global eth0.1 valid_lft forever preferred_lft forever 19: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default inet 192.168.1.146/24 brd 192.168.1.255 scope global eth0.2 valid_lft forever preferred_lft forever 20: eth0.3@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default inet 192.168.1.147/24 brd 192.168.1.255 scope global eth0.3 valid_lft forever preferred_lft forever root@ubuntu:~# ip -4 route sh default via 192.168.1.254 dev eth0.1 192.168.1.0/24 dev eth0.1 proto kernel scope link src 192.168.1.145 192.168.1.0/24 dev eth0.2 proto kernel scope link src 192.168.1.146 192.168.1.0/24 dev eth0.3 proto kernel scope link src 192.168.1.147 root@ubuntu:~# arping -c 5 -I eth0.1 192.168.1.254 ARPING 192.168.1.254 from 192.168.1.145 eth0.1 Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 6.936ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.986ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 0.654ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 5.137ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.426ms Sent 5 probes (1 broadcast(s)) Received 5 response(s) root@ubuntu:~# arping -c 5 -I eth0.2 192.168.1.254 ARPING 192.168.1.254 from 192.168.1.146 eth0.2 Unicast 
reply from 192.168.1.254 [58:98:35:57:a0:70] 5.665ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 3.753ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 16.500ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 3.287ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 32.438ms Sent 5 probes (1 broadcast(s)) Received 5 response(s) root@ubuntu:~# arping -c 5 -I eth0.3 192.168.1.254 ARPING 192.168.1.254 from 192.168.1.147 eth0.3 Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 4.422ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.429ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.321ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 40.423ms Unicast reply from 192.168.1.254 [58:98:35:57:a0:70] 2.268ms Sent 5 probes (1 broadcast(s)) Received 5 response(s) root@ubuntu:~# tcpdump -n -i eth0.1 -v & [1] 5317 root@ubuntu:~# ping -c5 -q -I eth0.1 192.168.1.254 PING 192.168.1.254 (192.168.1.254) from 192.168.1.145 eth0.1: 56(84) bytes of data. tcpdump: listening on eth0.1, link-type EN10MB (Ethernet), capture size 65535 bytes 13:18:37.612558 IP (tos 0x0, ttl 64, id 2595, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.145 > 192.168.1.254: ICMP echo request, id 5318, seq 2, length 64 13:18:37.618864 IP (tos 0x68, ttl 64, id 14493, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.145: ICMP echo reply, id 5318, seq 2, length 64 13:18:37.743650 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:38.134997 IP (tos 0x0, ttl 128, id 23547, offset 0, flags [none], proto UDP (17), length 229) 192.168.1.86.138 > 192.168.1.255.138: NBT UDP PACKET(138) 13:18:38.614580 IP (tos 0x0, ttl 64, id 2596, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.145 > 192.168.1.254: ICMP echo request, id 5318, seq 3, length 64 13:18:38.793479 IP (tos 0x68, ttl 64, id 14495, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.145: ICMP echo reply, id 5318, seq 3, length 64 13:18:39.151282 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:39.615612 IP (tos 0x0, ttl 64, id 2597, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.145 > 192.168.1.254: ICMP echo request, id 5318, seq 4, length 64 13:18:39.746981 IP (tos 0x68, ttl 64, id 14496, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.145: ICMP echo reply, id 5318, seq 4, length 64 --- 192.168.1.254 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4008ms rtt min/avg/max/mdev = 2.793/67.810/178.934/73.108 ms root@ubuntu:~# killall tcpdump >> /dev/null 2>&1 9 packets captured 12 packets received by filter 0 packets dropped by kernel [1]+ Done tcpdump -n -i eth0.1 -v root@ubuntu:~# tcpdump -n -i eth0.2 -v & [1] 5320 root@ubuntu:~# ping -c5 -q -I eth0.2 192.168.1.254 PING 192.168.1.254 (192.168.1.254) from 192.168.1.146 eth0.2: 56(84) bytes of data. 
tcpdump: listening on eth0.2, link-type EN10MB (Ethernet), capture size 65535 bytes 13:18:41.536874 ARP, Ethernet (len 6), IPv4 (len 4), Reply 192.168.1.254 is-at 58:98:35:57:a0:70, length 46 13:18:41.536933 IP (tos 0x0, ttl 64, id 2599, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 1, length 64 13:18:41.539255 IP (tos 0x68, ttl 64, id 14507, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 1, length 64 13:18:42.127715 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:42.511725 IP (tos 0x0, ttl 64, id 2600, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 2, length 64 13:18:42.514385 IP (tos 0x68, ttl 64, id 14527, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 2, length 64 13:18:42.743856 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:43.511727 IP (tos 0x0, ttl 64, id 2601, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 3, length 64 13:18:43.513768 IP (tos 0x68, ttl 64, id 14528, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 3, length 64 13:18:43.637598 IP (tos 0x0, ttl 128, id 23551, offset 0, flags [none], proto UDP (17), length 225) 192.168.1.86.17500 > 255.255.255.255.17500: UDP, length 197 13:18:43.641185 IP (tos 0x0, ttl 128, id 23552, offset 0, flags [none], proto UDP (17), length 225) 192.168.1.86.17500 > 192.168.1.255.17500: UDP, length 197 13:18:43.641201 IP (tos 0x0, ttl 128, id 23553, offset 0, flags [none], proto UDP (17), length 225) 192.168.1.86.17500 > 255.255.255.255.17500: UDP, length 197 13:18:43.743890 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 13:18:44.510758 IP (tos 0x0, ttl 64, id 2602, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 4, length 64 13:18:44.512892 IP (tos 0x68, ttl 64, id 14538, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 4, length 64 13:18:45.510794 IP (tos 0x0, ttl 64, id 2603, offset 0, flags [DF], proto ICMP (1), length 84) 192.168.1.146 > 192.168.1.254: ICMP echo request, id 5321, seq 5, length 64 13:18:45.519701 IP (tos 0x68, ttl 64, id 14539, offset 0, flags [none], proto ICMP (1), length 84) 192.168.1.254 > 192.168.1.146: ICMP echo reply, id 5321, seq 5, length 64 13:18:49.287554 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:50.013463 IP (tos 0x0, ttl 255, id 50737, offset 0, flags [DF], proto UDP (17), length 73) 192.168.1.146.5353 > 224.0.0.251.5353: 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local. 
(45) 13:18:50.218874 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:51.129961 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:52.197074 IP6 (hlim 255, next-header UDP (17) payload length: 53) 2001:818:d812:da00:200:ff:fe00:22.5353 > ff02::fb.5353: [udp sum ok] 0 [2q] PTR (QM)? _ipps._tcp.local. PTR (QM)? _ipp._tcp.local. (45) 13:18:54.128240 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 --- 192.168.1.254 ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4000ms root@ubuntu:~# killall tcpdump >> /dev/null 2>&1 13:18:54.657731 IP6 (class 0x68, hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::5a98:35ff:fe57:e070 > ff02::1:ff6b:e9b4: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001:818:d812:da00:8ae3:abff:fe6b:e9b4 source link-address option (1), length 8 (1): 58:98:35:57:a0:70 13:18:54.743174 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.87 tell 192.168.1.86, length 46 25 packets captured 26 packets received by filter 0 packets dropped by kernel [1]+ Done tcpdump -n -i eth0.2 -v root@ubuntu:~# tcpdump -n -i eth0.3 icmp & [1] 5324 root@ubuntu:~# ping -c5 -q -I eth0.3 192.168.1.254 PING 192.168.1.254 (192.168.1.254) from 192.168.1.147 eth0.3: 56(84) bytes of data. tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0.3, link-type EN10MB (Ethernet), capture size 65535 bytes 13:18:56.373434 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 1, length 64 13:18:57.372116 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 2, length 64 13:18:57.381263 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 2, length 64 13:18:58.371141 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 3, length 64 13:18:58.373275 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 3, length 64 13:18:59.371165 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 4, length 64 13:18:59.373259 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 4, length 64 13:19:00.371211 IP 192.168.1.147 > 192.168.1.254: ICMP echo request, id 5325, seq 5, length 64 13:19:00.373278 IP 192.168.1.254 > 192.168.1.147: ICMP echo reply, id 5325, seq 5, length 64 --- 192.168.1.254 ping statistics --- 5 packets transmitted, 1 received, 80% packet loss, time 4001ms rtt min/avg/max/mdev = 13.666/13.666/13.666/0.000 ms root@ubuntu:~# killall tcpdump >> /dev/null 2>&1 9 packets captured 10 packets received by filter 0 packets dropped by kernel [1]+ Done tcpdump -n -i eth0.3 icmp root@ubuntu:~# arp -n Address HWtype HWaddress Flags Mask Iface 192.168.1.254 ether 58:98:35:57:a0:70 C eth0.1 192.168.1.254 ether 58:98:35:57:a0:70 C eth0.2 192.168.1.254 ether 58:98:35:57:a0:70 C eth0.3

  • Windows 8.1 IRQL_NOT_LESS_OR_EQUAL with Asus PCE-n53

    - by JArsenault89
    I saw the following question, and it is the exact same problem on my machine, I have tracked it to the ASUS PCE-n53 wireless card in my desktop. Does anyone know of a workaround? Windows 8.1 RTM installation crashes The adapter worked fine in windows 8... any ideas? EDIT: Crash Dump Analysis * Bugcheck Analysis * * IRQL_NOT_LESS_OR_EQUAL (a) An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace. Arguments: Arg1: 0000000000000000, memory referenced Arg2: 0000000000000002, IRQL Arg3: 0000000000000001, bitfield : bit 0 : value 0 = read operation, 1 = write operation bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status) Arg4: fffff801ef4f1316, address which referenced memory Debugging Details: WRITE_ADDRESS: 0000000000000000 CURRENT_IRQL: 2 FAULTING_IP: nt!KeReleaseSpinLock+16 fffff801`ef4f1316 f048832100 lock and qword ptr [rcx],0 DEFAULT_BUCKET_ID: WIN8_DRIVER_FAULT BUGCHECK_STR: AV PROCESS_NAME: System ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre TRAP_FRAME: ffffd00020d45550 -- (.trap 0xffffd00020d45550) NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect. rax=0000000000000001 rbx=0000000000000000 rcx=0000000000000000 rdx=0000000055920200 rsi=0000000000000000 rdi=0000000000000000 rip=fffff801ef4f1316 rsp=ffffd00020d456e0 rbp=ffffd00020d45768 r8=0000000055920222 r9=0000000035930000 r10=0000000055920222 r11=ffffd00020d456a8 r12=0000000000000000 r13=0000000000000000 r14=0000000000000000 r15=0000000000000000 iopl=0 nv up ei pl zr na po nc nt!KeReleaseSpinLock+0x16: fffff801ef4f1316 f048832100 lock and qword ptr [rcx],0 ds:0000000000000000=???????????????? 
Resetting default scope LOCK_ADDRESS: fffff801ef6da360 -- (!locks fffff801ef6da360) Resource @ nt!PiEngineLock (0xfffff801ef6da360) Exclusively owned Contention Count = 6 Threads: ffffe000010ff040-01<* 1 total locks, 1 locks currently held PNP_TRIAGE: Lock address : 0xfffff801ef6da360 Thread Count : 1 Thread address: 0xffffe000010ff040 Thread wait : 0x1fbe LAST_CONTROL_TRANSFER: from fffff801ef5647e9 to fffff801ef558ca0 STACK_TEXT: ffffd00020d45408 fffff801ef5647e9 : 000000000000000a 0000000000000000 0000000000000002 0000000000000001 : nt!KeBugCheckEx ffffd00020d45410 fffff801ef56303a : 0000000000000001 0000000000000000 ffff0c83e3e25300 ffffd00020d45550 : nt!KiBugCheckDispatch+0x69 ffffd00020d45550 fffff801ef4f1316 : 00000000000a5890 0000000000000001 0000000000000000 ffffe00004c00000 : nt!KiPageFault+0x23a ffffd00020d456e0 fffff80003b430ad : 00000000000afe80 ffffe00004c00000 00000000000a2f80 0000000035720000 : nt!KeReleaseSpinLock+0x16 ffffd00020d45710 fffff80003ac249f : ffffe00004c00000 00000000000000a8 ffffe00004c85050 0000000000000800 : netr28x+0x840ad ffffd00020d457b0 fffff80000b76475 : ffffd00020d459e8 ffffd00020d459f0 ffffe00004ac2006 ffffe00004ac21a0 : netr28x+0x349f ffffd00020d459a0 fffff80000baa248 : ffffe00004ac2eb8 0000000000000000 ffffe00000000000 ffffe00004ac21a0 : ndis!ndisMInvokeInitialize+0x39 ffffd00020d459e0 fffff80000b74784 : 0000000000000050 ffffe00004907ba0 0000000000000000 01cecbbc328e6cde : ndis!ndisMInitializeAdapter+0x4dc ffffd00020d46050 fffff80000b74d3d : 0000000000000050 ffffe0000443e770 ffffc00000951480 ffffe00004ac21a0 : ndis!ndisInitializeAdapter+0x60 ffffd00020d460a0 fffff80000b74c14 : ffffe00004ac21a0 ffffe00004ac2050 ffffe000047ec2a0 0000000000000000 : ndis!ndisPnPStartDevice+0x89 ffffd00020d460f0 fffff80000b87695 : ffffe00004ac21a0 ffffe00004ac21a0 ffffd00020d461b0 ffffe000047ec2a0 : ndis!ndisStartDeviceSynchronous+0x58 ffffd00020d46140 fffff80000b6a760 : ffffe000047ec2a0 ffffe00004ac21a0 0000000000000000 0000000000000000 : ndis!ndisPnPIrpStartDevice+0x13471 ffffd00020d46170 fffff8000032576c : ffffe00004b11501 ffffe00004b11570 0000000000000001 fffff80000325880 : ndis!ndisPnPDispatch+0x140 ffffd00020d461e0 fffff8000030b40a : ffffe000047ec2a0 0000000000000106 ffffd00020d462f0 ffffe00004b116c0 : Wdf01000!FxPkgFdo::PnpSendStartDeviceDownTheStackOverload+0xe8 ffffd00020d46250 fffff80000305942 : 0000000000000106 ffffd00020d462f0 0000000000000105 ffffd00020d464d0 : Wdf01000!FxPkgPnp::PnpEventInitStarting+0xa ffffd00020d46280 fffff80000305a5a : ffffe00004b116c8 0000000000000002 ffffe00004b11570 ffffe00004b11600 : Wdf01000!FxPkgPnp::PnpEnterNewState+0x102 ffffd00020d46310 fffff80000305bc4 : 0000000000000000 ffffd00020d46400 ffffe00004b116a0 0000000000000000 : Wdf01000!FxPkgPnp::PnpProcessEventInner+0xc2 ffffd00020d46390 fffff8000030c27a : 0000000000000000 ffffe00004b11570 0000000000000000 ffffe00004b11570 : Wdf01000!FxPkgPnp::PnpProcessEvent+0xe4 ffffd00020d46430 fffff80000300936 : ffffe00004b11570 ffffd00020d464c0 0000000000000000 ffffe00004a0e630 : Wdf01000!FxPkgPnp::_PnpStartDevice+0x1e ffffd00020d46460 fffff800002fba18 : ffffe000047ec2a0 ffffe000047ec2a0 0000000000000000 ffffe0000486f020 : Wdf01000!FxPkgPnp::Dispatch+0xd2 ffffd00020d464d0 fffff801ef838796 : 0000000000000000 fffff801ef6aa101 0000000000000000 ffffd000208aa180 : Wdf01000!FxDevice::DispatchWithLock+0x7d8 ffffd00020d465b0 fffff801ef4d5bad : ffffe000011dc3a0 ffffd00020d46659 0000000000000000 fffff801ef7f5ba4 : nt!PnpAsynchronousCall+0x102 ffffd00020d465f0 fffff801ef838e57 : ffffe000011db8d0 
ffffe000011db8d0 ffffe00004a8d060 ffffc00002b11200 : nt!PnpStartDevice+0xc5 ffffd00020d466c0 fffff801ef838fe7 : ffffe000011db8d0 ffffe000011db8d0 0000000000000000 ffffe000011db8d0 : nt!PnpStartDeviceNode+0x147 ffffd00020d46790 fffff801ef7fd19e : ffffe000011db8d0 0000000000000001 0000000000000001 ffffe00000000001 : nt!PipProcessStartPhase1+0x53 ffffd00020d467d0 fffff801ef897b17 : ffffe000011db8d0 0000000000000001 0000000000000000 fffff801ef7ef7b2 : nt!PipProcessDevNodeTree+0x3ce ffffd00020d46a50 fffff801ef4f5033 : 0000000100000003 0000000000000000 0000000000000000 0000000000000000 : nt!PiRestartDevice+0xaf ffffd00020d46aa0 fffff801ef44565d : fffff801ef4f4c90 ffffd00020d46bd0 0000000000000000 ffffe00004a10170 : nt!PnpDeviceActionWorker+0x3a3 ffffd00020d46b50 fffff801ef4eec80 : 0000000000000000 ffffe000010ff040 ffffe000010ff040 ffffe0000035c900 : nt!ExpWorkerThread+0x2b5 ffffd00020d46c00 fffff801ef55f2c6 : ffffd00020472180 ffffe000010ff040 ffffe00000608040 ffffc00000002710 : nt!PspSystemThreadStartup+0x58 ffffd00020d46c60 0000000000000000 : ffffd00020d47000 ffffd00020d41000 0000000000000000 0000000000000000 : nt!KiStartSystemThread+0x16 STACK_COMMAND: kb FOLLOWUP_IP: netr28x+840ad fffff800`03b430ad 4533e4 xor r12d,r12d SYMBOL_STACK_INDEX: 4 SYMBOL_NAME: netr28x+840ad FOLLOWUP_NAME: MachineOwner MODULE_NAME: netr28x IMAGE_NAME: netr28x.sys DEBUG_FLR_IMAGE_TIMESTAMP: 51de7a8d FAILURE_BUCKET_ID: AV_netr28x+840ad BUCKET_ID: AV_netr28x+840ad ANALYSIS_SOURCE: KM FAILURE_ID_HASH_STRING: km:av_netr28x+840ad FAILURE_ID_HASH: {a1f86ced-f566-ac23-afeb-1aa88ea5ab8f} Followup: MachineOwner

  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this. top top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05 Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem: 2042776 total, 1347916 used, 694860 free, 249396 buffers KiB Swap: 3976080 total, 30552 used, 3945528 free, 574164 cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17445 bind 20 0 244m 42m 3124 S 99.4 2.2 2345:03 named rndc stats +++ Statistics Dump +++ (1352931389) ++ Incoming Requests ++ 65869 QUERY ++ Incoming Queries ++ 31809 A 241 NS 3 CNAME 27455 SOA 276 PTR 123 MX 462 TXT 5400 AAAA 7 A6 1 DS 14 DNSKEY 15 SPF 55 AXFR 8 ANY ++ Outgoing Queries ++ [View: internal] 22206 A 509 NS 10 SOA 25 PTR 12 MX 524 TXT 4851 AAAA 62 DNSKEY 19 SPF 3157 DLV [View: external] 87 A 2 NS 80 AAAA 120 DNSKEY 7 DLV [View: _bind] ++ Name Server Statistics ++ 65869 IPv4 requests received 27670 requests with EDNS(0) received 112 TCP requests received 65652 responses sent 20 truncated responses sent 27670 responses with EDNS(0) sent 62920 queries resulted in successful answer 37117 queries resulted in authoritative answer 28482 queries resulted in non authoritative answer 7 queries resulted in referral answer 591 queries resulted in nxrrset 53 queries resulted in SERVFAIL 2081 queries resulted in NXDOMAIN 14530 queries caused recursion 162 duplicate queries received 55 requested transfers completed ++ Zone Maintenance Statistics ++ 109536 IPv4 notifies sent ++ Resolver Statistics ++ [Common] [View: internal] 29362 IPv4 queries sent 2013 IPv6 queries sent 28531 IPv4 responses received 4209 NXDOMAIN received 6 SERVFAIL received 31 FORMERR received 32 EDNS(0) query failures 3359 query retries 836 query timeouts 5348 IPv4 NS address fetches 3271 IPv6 NS address fetches 83 IPv4 NS address fetch failed 2779 IPv6 NS address fetch failed 17421 DNSSEC validation attempted 12731 DNSSEC validation succeeded 4690 DNSSEC NX validation succeeded 21104 queries with RTT 10-100ms 7418 queries with RTT 100-500ms 3 queries with RTT 500-800ms 1 queries with RTT 800-1600ms [View: external] 192 IPv4 queries sent 104 IPv6 queries sent 192 IPv4 responses received 2 NXDOMAIN received 104 query retries 44 IPv4 NS address fetches 44 IPv6 NS address fetches 1 IPv4 NS address fetch failed 1 IPv6 NS address fetch failed 4 DNSSEC validation attempted 3 DNSSEC validation succeeded 1 DNSSEC NX validation succeeded 152 queries with RTT 10-100ms 40 queries with RTT 100-500ms [View: _bind] ++ Cache DB RRsets ++ [View: internal (Cache: internal)] 2007 A 652 NS 131 CNAME 1 MX 32 TXT 421 AAAA 28 DS 244 RRSIG 110 NSEC 3 DNSKEY 2 !A 2 !TXT 89 !AAAA 2 !SPF 14 !DLV 148 NXDOMAIN [View: external (Cache: external)] 55 A 12 NS 34 AAAA 2 DS 10 RRSIG 1 DNSKEY [View: _bind (Cache: _bind)] ++ Socket I/O Statistics ++ 82958 UDP/IPv4 sockets opened 2118 UDP/IPv6 sockets opened 4 TCP/IPv4 sockets opened 1 TCP/IPv6 sockets opened 82956 UDP/IPv4 sockets closed 2117 UDP/IPv6 sockets closed 58 TCP/IPv4 sockets closed 15 UDP/IPv4 socket bind failures 2117 UDP/IPv6 socket connect failures 29554 UDP/IPv4 connections established 59 TCP/IPv4 connections accepted 2117 UDP/IPv6 send errors 5 UDP/IPv4 recv errors ++ Per Zone Query Statistics ++ --- Statistics Dump --- (1352931389)

  • OpenLDAP with StartTLS broken on Debian Lenny

    - by mr.zog
    I'm trying to get OpenLDAP on Lenny to work with StartTLS. I have a Fedora 13 machine which I'm using as a client for testing. So far the Fedora client is ignoring the 'host' directive in /etc/ldap.conf when I try to connect using ldapsearch. The client wants to connect to 127.0.0.1:389 even if I specify -H ldaps://server.name on when using ldapsearch. /etc/ldap.conf on the client machine is in mode 444. But even when I try connecting locally from an ssh session, I see errors like this: ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1) Someone hit me with a cluebat, plz. Update: you must use ~/.ldaprc for settings such as 'host'. Also, I just used nmap against the ldap server and it showed 636 and 389 in an open state. Here's what prints to screen when I try to connect with, ldapsearch -ZZ –x '(objectclass=*)'+ -d -1 ldap_create ldap_extended_operation_s ldap_extended_operation ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP 192.168.10.41:636 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 192.168.10.41:636 ldap_pvt_connect: fd: 3 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdb8 end=0x9bdbdd7 len=31 0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1 0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037 ber_scanf fmt ({) ber: ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdbd end=0x9bdbdd7 len=26 0000: 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e w...1.3.6.1.4.1. 0010: 31 34 36 36 2e 32 30 30 33 37 1466.20037 ber_flush2: 31 bytes to sd 3 0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1 0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037 ldap_write: want=31, written=31 0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1 0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037 ldap_result ld 0x9bd3050 msgid 1 wait4msg ld 0x9bd3050 msgid 1 (infinite timeout) wait4msg continue ld 0x9bd3050 msgid 1 all 1 ** ld 0x9bd3050 Connections: * host: 192.168.10.41 port: 636 (default) refcnt: 2 status: Connected last used: Sun Jun 6 12:54:05 2010 ** ld 0x9bd3050 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x9bd3050 request count 1 (abandoned 0) ** ld 0x9bd3050 Response Queue: Empty ld 0x9bd3050 response count 0 ldap_chkResponseList ld 0x9bd3050 msgid 1 all 1 ldap_chkResponseList returns ld 0x9bd3050 NULL ldap_int_select read1msg: ld 0x9bd3050 msgid 1 all 1 ber_get_next ldap_read: want=8, got=0 ber_get_next failed. ldap_err2string ldap_start_tls: Can't contact LDAP server (-1)

  • Mail server configuration

    - by Rashid Iqbal
    I want to configure a mail server at my office. For this purpose I purchased a live (public) IP and asked the ISP to set a PTR record against that IP. In response I got an email from the ISP listing three entries, as shown below: Live IP: xxx.xxx.xx.xxx, ns1.xxx.net.xx, ns2.xxx.net.xx. Now please help me with the setup at my end.
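
    A small, hedged check (hostname and IP below are placeholders, not the asker's real values) that the ISP's PTR entry and your forward A record agree - the usual prerequisite before other mail servers will trust the box:

    ```python
    # Verify forward/reverse DNS agreement for a mail host, standard library only.
    # MAIL_HOST and PUBLIC_IP are placeholders - substitute your real values.
    import socket

    MAIL_HOST = "mail.example.com"
    PUBLIC_IP = "203.0.113.10"

    try:
        ptr_name = socket.gethostbyaddr(PUBLIC_IP)[0]
        forward_ip = socket.gethostbyname(MAIL_HOST)
        print("PTR:", PUBLIC_IP, "->", ptr_name)
        print("A  :", MAIL_HOST, "->", forward_ip)
        match = ptr_name.rstrip(".").lower() == MAIL_HOST.lower() and forward_ip == PUBLIC_IP
        print("forward and reverse records agree:", match)
    except socket.error as err:
        print("lookup failed (expected with the placeholder values):", err)
    ```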

  • Why can host and nslookup resolve a name but dig cannot?

    - by musashiXXX
    Can anyone tell me why this is happening? I can resolve a hostname using host and/or nslookup but forward lookups do not work with dig; reverse lookups do: musashixxx@box:~$ host someserver someserver.somenet.internal has address 192.168.0.252 musashixxx@box:~$ host 192.168.0.252 252.0.168.192.in-addr.arpa domain name pointer someserver.somenet.internal. musashixxx@box:~$ nslookup someserver Server: 192.168.0.253 Address: 192.168.0.253#53 Name: someserver.somenet.internal Address: 192.168.0.252 musashixxx@box:~$ nslookup 192.168.0.252 Server: 192.168.0.253 Address: 192.168.0.253#53 252.0.168.192.in-addr.arpa name = someserver.somenet.internal. musashixxx@box:~$ dig someserver ; <<>> DiG 9.8.1-P1 <<>> someserver ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 55306 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;someserver. IN A ;; Query time: 0 msec ;; SERVER: 192.168.0.253#53(192.168.0.253) ;; WHEN: Wed Oct 3 15:47:38 2012 ;; MSG SIZE rcvd: 27 musashixxx@box:~$ dig -x 192.168.0.252 ; <<>> DiG 9.8.1-P1 <<>> -x 192.168.0.252 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28126 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;252.0.168.192.in-addr.arpa. IN PTR ;; ANSWER SECTION: 252.0.168.192.in-addr.arpa. 3600 IN PTR someserver.somenet.internal. ;; Query time: 0 msec ;; SERVER: 192.168.0.253#53(192.168.0.253) ;; WHEN: Wed Oct 3 15:49:11 2012 ;; MSG SIZE rcvd: 86 Here's what my resolv.conf looks like: nameserver 192.168.0.253 search somenet.internal Is this behavior normal? Any thoughts?

  • DNS works only with IP, not with NS name (CentOS + BIND 9)

    - by Borislav Yordanov
    I am having a headache with DNS. Lets say my public IP is 1.2.3.4, my local IP is 192.168.0.10 and my domain is example.com I am running CentOS on a virtual machine (Parallels Desktop for Mac) with a LAN card reserved for it, so it gets Ip directly from the router. I have ports 80,443,53 forwarded to 192.168.0.10. Both Mac OS and CentOs firewalls are Off. The strange is when I type dig @1.2.3.4 example.com from my other PC I get: ; <<>> DiG 9.8.3-P1 <<>> @1.2.3.4 example.com ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16941 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; WARNING: recursion requested but not available ;; QUESTION SECTION: ;example.com. IN A ;; ANSWER SECTION: example.com. 86400 IN A 1.2.3.4 ;; AUTHORITY SECTION: example.com. 86400 IN NS ns2.example.com. example.com. 86400 IN NS ns1.example.com. ;; ADDITIONAL SECTION: ns1.example.com. 86400 IN A 1.2.3.4 ns2.example.com. 86400 IN A 1.2.3.4 ;; Query time: 8 msec ;; SERVER: 1.2.3.4#53(1.2.3.4) ;; WHEN: Sat Nov 2 09:37:36 2013 ;; MSG SIZE rcvd: 109 but when i type: dig @ns1.example.com example.com it waits a few seconds and returns dig: couldn't get address for 'ns1.dsht.in': not found This is my config file: /etc/named.conf options { listen-on-v6 { none; }; directory"/var/named"; dump-file"/var/named/data/cache_dump.db"; statistics-file"/var/named/data/named_stats.txt"; memstatistics-file"/var/named/data/named_mem_stats.txt"; allow-query{ localhost; 192.168.0.0/24; }; allow-transfer { localhost; 192.168.0.0/24; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; # change all from here view "internal" { match-clients { localhost; 192.168.0.0/24; }; zone "." IN { type hint; file "named.ca"; }; zone "example.com" IN { type master; file "example.com.zone"; allow-update { none; }; }; zone "0.168.192.in-addr.arpa" IN { type master; file "0.168.192.in-addr.arpa"; allow-update { none; }; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; }; view "external" { match-clients { any; }; allow-query { any; }; recursion no; zone "example.com" IN { type master; file "example.com.zone"; allow-update { none; }; }; zone "4.3.2.1.in-addr.arpa" IN { type master; file "4.3.2.1.in-addr.arpa"; allow-update { none; }; }; }; /var/named/exmaple.com.zone $TTL 86400 @ IN SOA ns1.example.com. host.example.com. ( 2013042201 ;Serial 3600 ;Refresh 1800 ;Retry 604800 ;Expire 86400 ;Minimum TTL ) ; Specify our two nameservers IN NS ns1.example.com. IN NS ns2.example.com. ; Resolve nameserver hostnames to IP, replace with your two droplet IP addresses. ns1 IN A 1.2.3.4 ns2 IN A 1.2.3.4 ; Define hostname -> IP pairs which you wish to resolve @ IN A 1.2.3.4 IN A 1.2.3.4 www IN A 1.2.3.4 server2 IN A 192.168.0.2 * IN A 1.2.3.4 /var/named/4.3.2.1.in-addr.arpa $TTL 2d ; 172800 seconds $ORIGIN 4.3.2.1.IN-ADDR.ARPA. @ IN SOA ns1.example.com. host.example.com. ( 2013010304 ; serial number 3h ; refresh 15m ; update retry 3w ; expiry 3h ; nx = nxdomain ttl ) IN NS ns1.example.com. IN NS ns2.example.com. IN PTR example.com. ; etc /var/named/0.168.192.in-addr.arpa $TTL 2d ; 172800 seconds $ORIGIN 0.168.192.IN-ADDR.ARPA. @ IN SOA ns1.example.com. host.example.com. 
( 2013010304 ; serial number 3h ; refresh 15m ; update retry 3w ; expiry 3h ; nx = nxdomain ttl ) IN NS ns1.example.com. IN NS ns2.example.com. 10 IN PTR example.com. 2 IN PTR server2.example.com ; etc I will be very glad if someone can help me. Thank you in advance

  • 220 **** SMTP banner when telnetting to a mail server on Windows XP (hMailServer, Linksys WRT120N router)

    - by panindra
    We have set up a mail server using hMailServer, and our rDNS/PTR is also sorted out, but when we do an SMTP test from mxtoolbox.com we get a "220 ***" kind of message. Our server configuration: OS: Windows XP; mail server: hMailServer; IP: static IP; router: Cisco Linksys WRT120N. Is this something related to the router, or what? If we telnet to SMTP on the same PC where hMailServer is installed we get "220 domain.com" as the banner, which is fine for us, but when we test from outside the router we get "220 *". How do we fix this?
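
    The asymmetry described here (clean banner locally, masked banner externally) can be probed exactly as the poster did with telnet; below is a hedged Python equivalent using only the standard library (the hostname is a placeholder), to be run once from the LAN and once from outside. When the two banners differ, some device in the path - often a firewall's SMTP inspection/ALG, sometimes at the ISP - is rewriting the greeting rather than hMailServer itself.

    ```python
    # Read the SMTP greeting banner, the same thing the telnet test shows.
    # HOST is a placeholder for the mail server's public name or IP.
    import socket


    def smtp_banner(host: str, port: int = 25, timeout: float = 10.0) -> str:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024).decode("ascii", errors="replace").strip()


    if __name__ == "__main__":
        HOST = "mail.example.com"  # placeholder
        print(smtp_banner(HOST))   # compare the result from inside vs. outside the LAN
    ```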

  • How to create an image grid gallery and show the description for the clicked image?

    - by Priya jain
    I am Getting stuck on j-query issue i am new to it please help me! I have a image gallery like: Now i want a div to be open when i click on open link and view full description of respective image. My html code is: <ul class="thumb-pic"> <li class="box_small"> <div class="director-detail"> <div align="right"><a href="#" class="open_div">Open</a><a href="#" class="close_div">Close</a></div> <div class="director-name">David MacLeod Dip OHS</div> <div class="director-position"> Director</div> </div> <img src="images/pic1.jpg" alt="pic"> </li> <li class="large_box"> <div align="right"><a href="#" class="open_div">Open</a><a href="#" class="close_div">Close</a></div> <img src="images/pic1.jpg" alt="pic" class="small_img"> <div class="desc"> <div class="director-name">David MacLeod Dip OHS</div> <div class="director-position"> Director</div> <p>All Macil staff are true specialists in their chosen fields and bring a unique set of skills and expertise and a desire to work in the investigation industry. Macil aims to provide a work environment that is empowering, inspiring and motivational</p> </div> </li> <li class="box_small"> <div class="director-detail"> <div align="right"><a href="#" class="open_div">Open</a><a href="#" class="close_div">Close</a></div> <div class="director-name">David MacLeod Dip OHS</div> <div class="director-position"> Director</div> </div> <img src="images/pic2.jpg" alt="pic"> </li> <li class="large_box"> <div align="right"><a href="#" class="open_div">Open</a><a href="#" class="close_div">Close</a></div> <img src="images/pic2.jpg" alt="pic" class="small_img"> <div class="desc"> <div class="director-name">David MacLeod Dip OHS</div> <div class="director-position"> Director</div> <p>All Macil staff are true specialists in their chosen fields and bring a unique set of skills and expertise and a desire to work in the investigation industry. Macil aims to provide a work environment that is empowering, inspiring and motivational</p> </div> </li> <li class="box_small"> <div class="director-detail"> <div align="right"><a href="#" class="open_div">Open</a><a href="#" class="close_div">Close</a></div> <div class="director-name">David MacLeod Dip OHS</div> <div class="director-position"> Director</div> </div> <img src="images/pic3.jpg" alt="pic"> </li> <li class="large_box"> <div align="right"><a href="#" class="open_div">Open</a><a href="#" class="close_div">Close</a></div> <img src="images/pic3.jpg" alt="pic" class="small_img"> <div class="desc"> <div class="director-name">David MacLeod Dip OHS</div> <div class="director-position"> Director</div> <p>All Macil staff are true specialists in their chosen fields and bring a unique set of skills and expertise and a desire to work in the investigation industry. Macil aims to provide a work environment that is empowering, inspiring and motivational</p> </div> </li> </ul> And Jquery code that i am using: <script type="text/javascript"> $(function() { $('#st-accordion').accordion({ oneOpenedItem : true }); }); $(document).ready(function(){ $('.open_div').click(function(){ $('.large_box').show(); $(this).prev('li .box_small').hide(); }); $('.close_div').click(function(){ $('.large_box').hide(); $('.box_small').show(); }); }); </script> I am new to jquery Please help me or give me some direction to achieve the solution.

  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday!     Yesterday (May 16) was the 'Birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post, there were 5697 blog posts archived in the database all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work. The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something to keep that stuff out of this one :) If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10 I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :) I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs it takes about an hour. The list is long enough now that from time to time, I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at a heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc. I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, Jazz Guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews. For a little over a year now, we're tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed in a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking was the brainchild of my buddy Michael Washington. 
What I blog/shoutout After some time doing posts, I decided there were things that probably have no need to be searchable, but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place. Notes to bloggers Remember I said spinning throught the Big List-o-BlogsTM is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'Above the Fold', then you're probably ok. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise if you only show the latest on the front page and make it a puzzle to find the rest of them, or if you make the titles and initial graphics almost identical to the previous article, I'll miss it. Another thing is name/brand-recognition. Far be it for me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get get some recognition, you are going to want your name to be available somewhere. I can think right off the top of my head of a couple good blogs that I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blog-o-sphere, being able to be name-recognized is as important as getting your brand out there. Kick my tires Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and I not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone that takes the time to submit with a quicker 'Cream posting. Thanks! Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes and that's what we spent the most time on, but we also addressed a number of other things: 1. Performance improvements We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: An Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds. 2. Upgrade support This release supports automatic upgrade from previous releases, built right into the console. 3. Console Improvements The Console has been reorganized and made easier to use. It is also much more multi-threaded so it responds quicker without freezing up when you save changes or when it needs to get status. 4. Property Namespaces Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (eg: Custom.AccountType vs HFM.AccountType). I'll write more about this one later. 5. Single Sign On DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token, otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item) 6. URL-based navigation You can now specify the application you want to log into via the URL. Combined with SSO and your Intranet, it becomes easy to provide links on our intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL you can automatically generate links within emails that will take users directly to a specific node in the UI. 7. Job status and cancellation A lot of the jobs now report their status and support true cancellation. 
Action Scripts also report a progress complete percentage since the amount of work is known ahead of time. 8. Action Script Options Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example: Option|DetectDelimiter Option|UsePropertyNames|true This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties. There are other more minor changes and like I said earlier a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • java.util.zip.ZipException: Error opening file When Deploying an Application to Weblogic Server

    - by lmestre
In recent weeks we had a hard time trying to solve a deployment issue.

* WebLogic Server 10.3.6
* Target: WLS Cluster

<21-10-2013 05:29:40 PM CLST> <Error> <Console> <BEA-240003> <Console encountered the following error weblogic.management.DeploymentException:
        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:69)
        at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
        at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
        at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
        at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
        at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)
        at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
        at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
Caused by: java.util.zip.ZipException: Error opening file - C:\Oracle\Middleware\user_projects\domains\MyDomain\servers\MyServer\stage\myapp\myapp.war Message - error in opening zip file
        at weblogic.servlet.utils.WarUtils.existsInWar(WarUtils.java:87)
        at weblogic.servlet.utils.WarUtils.isWebServices(WarUtils.java:76)
        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:61)

So the first idea you have with that error is that the war file is corrupted or has incorrect privileges. We tried:
1. Unzipping the war file; the file was perfect.
2. Checking the size; same size as in other environments.
3. Checking the ownership of the file; same as in other environments.
4. Checking the permissions of the file; same as other applications.

Then we accepted that the file was fine, so we tried enabling some deployment debugs, but got no clues. We also tried:
1. Deleting all contents of the <MyDomain>/servers/<MyServer>/tmp and <MyDomain>/servers/<MyServer>/cache folders; the issue persisted.
2. Renaming the application; the deployment was successful.
3. Targeting the Admin Server; deployment was also working.
4. Using 'Copy this application onto every target for me'; it didn't help either.

Finally, my friend 'Test Case' solved the issue again. I saw this name in the config.xml:

<jdbc-system-resource>
    <name>myapp</name>
    <target></target>
    <descriptor-file-name>jdbc/myapp-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>

So, it turned out that the customer had created a DataSource with the same name as the application ('myapp' in the above example). By deleting that DataSource and creating an identical one with a different name, the issue was solved. At this point, do you know why 'java.util.zip.ZipException: Error opening file' was occurring? Because all names in WebLogic Server need to be unique.

References: http://docs.oracle.com/cd/E23943_01/web.1111/e13709/setup.htm
"Assigning Names to WebLogic Server Resources: Make sure that each configurable resource in your WebLogic Server environment has a unique name. Each domain, server, machine, cluster, JDBC data source, virtual host, or other resource must have a unique name."

Enjoy!

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
ACS v2 supports two fundamental types of client identities – I like to call them "enterprise identities" (WS-*) and "web identities" (Google, LiveID, OpenID in general…). I also see two different "mind sets" when it comes to application design using the above identity types:

Enterprise identities – often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper).

Web identities – the fact that a user can authenticate with Google et al does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc).

Sometimes a mixture of both approaches exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario, and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user="?" in ASP.NET terms). That's a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition was true. Been there, done that. Guilty ;)

The fundamental difference between these "old style" apps and federation is that authentication is not done by the application anymore. It is done by a third party service, and in the case of web identity providers, in services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is, when you switch to ACS, and someone with a Google account authenticates, indeed IsAuthenticated is true – because that's what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: we have to deal with authentication and authorization separately. Job done ;)

For many application types I see this general approach:
The application uses ACS for authentication (maybe both enterprise and web identities; we focus on web identities, but you could easily have a dual approach here).
The application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc.
The application also maintains a database of its "own" users. Typically you want to store additional information about the user.

In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party – more on that later). Furthermore, ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values together, you can be sure to have a unique identifier for the user, e.g.: Facebook-134952459903700\799880347

You can now check on incoming calls whether the user is already registered and, if so, swap the ACS claims with claims coming from your user database. One claim might be a role like "Registered User", which can then easily be used to do authorization checks in the application. The WIF claims authentication manager is a perfect place to do the claims transformation (see the sketch at the end of this post). If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample.

More on NameIdentifier
Identity providers "guarantee" that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS – different ACS namespaces) will see different name identifiers. This is by design to protect the privacy of users, because identical name identifiers could be used to create "profiles" of some sort for that user. In technical terms they create the name identifier approximately like this: name identifier = Hash((Provider Internal User ID) + (Relying Party Address)) Why is this important to know? Well – when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your "connection" to your existing users. Oh and btw – never use any other claims (like email address or name) to form a unique ID – these can often be changed by users.
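For illustration only (this sketch is not from the original post): a minimal claims authentication manager that builds the composite user key described above and swaps in application claims for registered users. It uses the .NET 4.5 System.Security.Claims types (the WIF 1.0 Microsoft.IdentityModel equivalents are analogous); the IUserRepository interface, its methods, and the role value are invented for the example, and the identity provider claim type shown is the one ACS typically emits.

using System.Security.Claims;

// Hypothetical application user store - not part of the original post.
public interface IUserRepository
{
    bool IsRegistered(string userKey);
    string GetRoleFor(string userKey);
}

public class RegistrationClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    // ACS identity provider claim type (assumption based on standard ACS v2 tokens).
    private const string IdentityProviderClaimType =
        "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider";

    private readonly IUserRepository _users;

    public RegistrationClaimsAuthenticationManager(IUserRepository users)
    {
        _users = users;
    }

    public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
    {
        var identity = incomingPrincipal.Identity as ClaimsIdentity;
        if (identity == null || !identity.IsAuthenticated)
        {
            return incomingPrincipal; // anonymous caller, nothing to transform
        }

        // Combine identity provider + NameIdentifier into the stable user key,
        // e.g. "Facebook-134952459903700\799880347".
        var nameId = identity.FindFirst(ClaimTypes.NameIdentifier);
        var provider = identity.FindFirst(IdentityProviderClaimType);
        if (nameId == null || provider == null)
        {
            return incomingPrincipal;
        }
        var userKey = provider.Value + "\\" + nameId.Value;

        // Authenticated with Google et al. does not mean authorized:
        // only registered users get the application's own claims.
        if (_users.IsRegistered(userKey))
        {
            identity.AddClaim(new Claim(ClaimTypes.Role, _users.GetRoleFor(userKey)));
        }

        return incomingPrincipal;
    }
}

An unregistered user would then be redirected to the registration form instead of being treated as authorized.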

    Read the article

  • Are there Negative Impact of opensource on commercial environment?

    - by Lostsoul
I know this is not a good fit for Stack Overflow but wasn't sure if it was good for this site either, so let me know if it's not and I'll delete it. I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products and there are a few that are not strategic to us, so I wanted to open source them (so we can focus on what makes us unique and open source the products that every firm has). Our industry does not open source (we would be the first firm to try this) and the feedback I'm getting from my management team is either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonalds/KFC sharing their recipe openly: people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people who have seen the positive effects of sharing and the business people who think we'll be giving up everything (they prefer we sell the parts we want to open source, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. Yet I've seen so many amazing technologies created in interesting ways just by giving people freedom to take apart code and put it back together. I'm interested in hearing people's thoughts (it doesn't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make) because if we go the open source route then there will be no going back. So what are your thoughts? Does open sourcing apply generally or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well).

Update: Long story short, although code is involved this is not so much about code as it is about the idea of open sourcing. We are a mid-sized quant hedge fund. We have some unique strategies but also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other stuff that everyone else has we are considering open sourcing (we have put years of work and millions of dollars into it; our funds are pretty popular and our performance is either in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
We are also considering, after we move our clients out, letting the software still run and output the resulting portfolios for free as well, so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized; we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it); or we can open source it and let people do what they want. By open sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it then, as people learn from what we did, we can learn from what they do as well and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

The Generic Collections: System.Collections.Generic namespace
The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and their various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

The Associative Collection Classes
Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order. The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals(), or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
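Here is a minimal sketch (not from the original article; the order ids and descriptions are invented) contrasting the two: the Dictionary gives constant-time keyed lookups with no defined order, while the SortedDictionary enumerates its entries in key order because int already implements IComparable<int>.

using System;
using System.Collections.Generic;

class AssociativeLookupDemo
{
    static void Main()
    {
        // Hash-based: amortized O(1) lookups, enumeration order is not defined.
        var ordersById = new Dictionary<int, string>
        {
            { 42, "Order 42" },
            { 7,  "Order 7"  },
            { 19, "Order 19" }
        };
        Console.WriteLine(ordersById[7]); // constant-time lookup by key

        // Tree-based: O(log n) operations, but iterates in ascending key order (7, 19, 42).
        var sortedOrdersById = new SortedDictionary<int, string>(ordersById);
        foreach (var pair in sortedOrdersById)
        {
            Console.WriteLine(pair.Key + ": " + pair.Value);
        }
    }
}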
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare. The Non-Associative Containers The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grow once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However inserting and removing in the beginning or middle of the List<T> are very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there's no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle. 
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction. The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>. Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines. So that's the basic collections. Let's summarize what we've learned in a quick reference table.

Collection | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency | Manipulate Efficiency | Notes
Dictionary | No | Yes | Via Key | Key: O(1) | O(1) | Best for high performance lookups.
SortedDictionary | Yes | No | Via Key | Key: O(log n) | O(log n) | Compromise of Dictionary speed and ordering, uses binary search tree.
SortedList | Yes | Yes | Via Key | Key: O(log n) | O(n) | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads.
List | No | Yes | Via Index | Index: O(1), Value: O(n) | O(n) | Best for smaller lists where direct access required and no ordering.
LinkedList | No | No | No | Value: O(n) | O(1) | Best for lists where inserting/deleting in middle is common and no direct access required.
HashSet | No | Yes | Via Key | Key: O(1) | O(1) | Unique unordered collection, like a Dictionary except key and value are same object.
SortedSet | Yes | No | Via Key | Key: O(log n) | O(log n) | Unique ordered collection, like SortedDictionary except key and value are same object.
Stack | No | Yes | Only Top | Top: O(1) | O(1)* | Essentially same as List<T> except only process as LIFO
Queue | No | Yes | Only Front | Front: O(1) | O(1) | Essentially same as List<T> except only process as FIFO

The Original Collections: System.Collections namespace
The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention the original collections and their generic equivalents:

ArrayList: A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
Hashtable: Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
Queue: First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
SortedList: Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
Stack: Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

In general, the older collections are not type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

The Concurrent Collections: System.Collections.Concurrent namespace
The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each with a link to a blog post I did on each of them.

ConcurrentQueue: Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentStack: Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
ConcurrentBag: Thread-safe unordered collection of objects. Optimized for situations where a thread may be both a reader and a writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
ConcurrentDictionary: Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
BlockingCollection: Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

Summary
The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.

Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
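To close with something concrete, here is a small sketch (not from the original article; the vendor codes are invented) showing two of the recommendations above: a HashSet<T> for quick membership checks where order does not matter, and a ConcurrentDictionary<TKey,TValue> for mixed read/write access from multiple threads on .NET 4.0 or higher.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

class CollectionChoiceDemo
{
    static void Main()
    {
        // HashSet<T>: super-quick membership checks, no ordering maintained.
        var handledVendors = new HashSet<string> { "ACME", "GLOBEX", "INITECH" };
        Console.WriteLine(handledVendors.Contains("ACME"));     // True
        Console.WriteLine(handledVendors.Contains("UMBRELLA")); // False

        // ConcurrentDictionary<TKey,TValue>: safe mixed access from multiple threads.
        var orderCounts = new ConcurrentDictionary<string, int>();
        Parallel.ForEach(new[] { "ACME", "ACME", "GLOBEX" }, vendor =>
        {
            // Atomically add the key with value 1, or increment the existing count.
            orderCounts.AddOrUpdate(vendor, 1, (key, current) => current + 1);
        });
        Console.WriteLine(orderCounts["ACME"]); // 2
    }
}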

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami A typical data warehouse contains Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information including amounts. Example; Order Amount, Invoice Amount etc. With the true global nature of business now-a-days, the end-users want to view the reports in their own currency or in global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users. Source Systems OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting to the OLTP users. For example; EBS stores conversion rates between currencies which can be classified by conversion rates, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion stores the conversion rates for each day, where as PSFT/Siebel store for a range of days. PSFT has Rate Multiplication Factor and Rate Division Factor and we need to calculate the Rate based on them, where as other Source systems store the Currency Exchange Rate directly. OBIA Design The data consolidation from various disparate source systems, poses the challenge to conform various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting the performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies: Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies. Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company. Global Currencies: OBIA provides five Global Currencies. Three are used across all modules. The last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. Example; a US based multinational would want to see the reports in USD. The company will choose USD as one of the global currencies. OBIA allows users to define up-to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5 which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency for Currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard. Design Logic When extracting the fact data, the OOTB mappings extract and load the document amount, and the local amount in target tables. It also loads the exchange rates required to convert the document amount into the corresponding global amounts. 
If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, thus allowing the users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters.

Some Gotchas to Look for
It is necessary to think through the currencies during the initial implementation.
1) Identify the various types of currencies that are used by your business. Understand what will be your Local (or Base) and Document currency. Identify the various global currencies in which your users will want to view reports. This will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.
2) If the user has a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager do have the corresponding source-specific counterparts. In other words, make sure for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

Technical Section
This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However, the challenge here is to store all 5 Global Currency Exchange Rates in a single record for each From Currency. Let's voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly. EBS: In EBS, we have Currency Data stored in the GL_DAILY_RATES table. As the name indicates, the GL_DAILY_RATES EBS table has data at a daily level. However, in our warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process. (Incremental Flow only) Cleanup the data in W_EXCH_RATE_G: delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
Compress the daily data from GL_DAILY_RATES table into Range Records. Incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter. Generate Previous Rate, Previous Date and Next Date for each of the Daily record from the OLTP. Filter out the records which have Conversion Rate same as Previous Rates or if the Conversion Date lies within a single day range. Mark the records as ‘Keep’ and ‘Filter’ and also get the final End Date for the single Range record (Unique Combination of From Date, To Date, Rate and Conversion Date). Filter the records marked as ‘Filter’ in the INFA map. The above steps will load W_EXCH_RATE_GS. Step 0 updates/deletes W_EXCH_RATE_G directly. SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G. This is a table where we store data at a Daily Granular Level. However we need to pivot the data because the data present in multiple rows in source tables needs to be stored in different columns of the same row in DW. We use GROUP BY and CASE logic to achieve this. Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the Cleanup logic that was mentioned in step 0 above does not use $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring all the Exchange Rates in Incremental as well and do the cleanup. The SIL then takes care of Insert/Updates accordingly. PeopleSoft:PeopleSoft does not have From Date and To Date explicitly in the Source tables. Let’s look at an example. Please note that this is achieved from PS1 onwards only. 1 Jan 2010 – USD to INR – 45 31 Jan 2010 – USD to INR – 46 PSFT stores records in above fashion. This means that Exchange Rate of 45 for USD to INR is applicable for 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in DW. Also PSFT has Exchange Rate stored as RATE_MULT and RATE_DIV. We need to do a RATE_MULT/RATE_DIV to get the correct Exchange Rate. We generate From Date and To Date while extracting data from source and this has certain assumptions: If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected. Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type. Siebel: Siebel Facts are dependent on Global Exchange Rates heavily and almost none of them really use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. 
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL ie. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept same as previous versions of OBIA. W_GLOBAL_EXCH_RATE_G being a new table does not have such needs. However SEBL does not have From Date and To Date columns in the Source tables similar to PSFT. We use similar extraction logic as explained in PSFT section for SEBL as well. What if all 5 Global Currencies configured are same? As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves Pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all the 5 Global Currencies Chosen are same. For example – If the user configures all 5 Global Currencies as ‘USD’ then the extract logic will not be able to generate a record for From Currency=USD. This is because, not all Source Systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases. Reusable Lookups Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups- LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact Mappings. Any user who would want to do a LKP on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups. A direct join or Lookup on the tables might lead to wrong data being returned. Changing Currency preferences in the Dashboard: In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another Currency preference. Project Analytics started supporting currency preferences since 7.9.6 release though, and it published a Tech note for other module customers to add toggling between currency preferences to the solution. List of Currency Preferences Starting from 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the xml file: userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under :< home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml This file contains the list of currency preferences, like“Local Currency”, “Global Currency 1”,…which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. 
In Static mode, all users will see the full list as in the user preference currencies file. In the Dynamic mode, the list shown in the currency prompt drop down is a result of a dynamic query specified in the same file. Customers can build some security into the rpd, so the list of currency preferences will be based on the user roles…BI Applications built a subject area: “Dynamic Currency Preference” to run this query, and give every user only the list of currency preferences required by his application roles. Adding Currency to an Amount Field When the user selects one of the items from the currency prompt, all the amounts in that page will show in the Currency corresponding to that preference. For example, if the user selects “Global Currency1” from the prompt, all data will be showing in Global Currency 1 as specified in the Configuration Manager. If the user select “Local Currency”, all amount fields will show in the Currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as a functional currency, the other has EUR as functional currency), then subtotals will not be available (cannot add USD and EUR amounts in one field), and depending on the set up (see next paragraph), the user may receive an error. There are two ways to add the Currency field to an amount metric: In the form of currency code, like USD, EUR…For this the user needs to add the field “Apps Common Currency Code” to the report. This field is in every subject area, usually under the table “Currency Tag” or “Currency Code”… In the form of currency symbol ($ for USD, € for EUR,…) For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in Column Actions drop down list. Typically this column should be the “BI Common Currency Code” available in every subject area. Select Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as Treat Number As. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] Where Column is the “BI Common Currency Code” defined to take the currency code value based on the currency preference chosen by the user in the Currency preference prompt.

    Read the article

  • e-interview: SunSpace to WebCenter migration

    - by me
I had the pleasure to do an e-interview with Ana Neves around the SunSpace to WebCenter migration project. Below is the English version of the interview. Enjoy.

Peter, you joined Oracle in 2009 through the acquisition of Sun. Becoming a part of Oracle meant many changes. The internal collaboration platform was one of them, as per a post you wrote back in 2011. Sun had SunSpace. How would you describe SunSpace?
SunSpace was the internal Community and Social Collaboration platform for Sun's Global Sales and Services Organization. SunSpace served around 600 communities with a main focus around technology, products and services. SunSpace was a big success. Within 3 months of its launch SunSpace had over 20,000 users and it won the Atlassian "Not just another wiki" Award for the best use of Confluence (https://blogs.oracle.com/peterreiser/entry/goodbye_sunspace_hello_webcenter).

What made SunSpace so special?
1. People centric versus Web centric
The main concept of SunSpace put the person in the middle of everything. All relevant information, resources etc. were dynamically pushed to a person's myProfile (a Facebook-like interface) based on the person's interests and needs.
2. Ease of use
SunSpace was really easy to use. We spent a lot of time on social interaction design to optimize the user experience. Also, we integrated some sophisticated technology to hide complexity from the user. As an example, when a user added a document to SunSpace, we analyzed the content of the document and suggested related metadata and tags to the user based on a sophisticated algorithm which was integrated with the corporate taxonomy. Based on this metadata the document was automatically shared with the relevant communities.
3. Easy to find
One of the main use cases for SunSpace was that a user could quickly find the content and information they needed for their job. The search implementation was based on: an optimized search engine algorithm using social-value-based ranking enhancements; community-facilitated search optimization; and faceted search, which recommended highly relevant content like products, communities and experts.
4. Social Adoption - How to build vibrant communities
You can deploy the coolest social technology, but what if the users are not using it? To drive user adoption we implemented two complementary models:
4.1 Community Methodology
We developed a set of best practices on how to create, run and sustain communities, including community structure and types (e.g. Community of Practice, Community of Interest etc.), tips and tricks on how to build "vibrant" communities, Community Health checks, etc. These best practices were constantly tuned and updated by the community of community drivers.
4.2 Social Value System
To drive user adoption there is ONE key question you have to answer for each individual user: What's In It For Me (WIIFM). We developed a Social Value System called Community Equity which measures the social value flow between People, Content and Metadata. Based on this technology we added "Gamification" techniques (although at that time this term did not exist) to SunSpace to honor people for their active contribution and participation. As an example: all social credentials a user earned through active community participation were dynamically displayed on her/his myProfile.

How would you describe WebCenter?
Oracle WebCenter (@oraclewebcenter) is Oracle's user engagement platform for social business.
It helps people work together more efficiently through contextual collaboration tools that optimize connections between people, information, and applications and ensures users have access to the right information in the context of the business process in which they are engaged. Oracle WebCenter can help your organization deliver contextual and targeted Web experiences to users and enable employees to access information and applications through intuitive portals, composite applications, and mash-ups. How does it compare to SunSpace in terms of functionality? Before I answer this question, I would like to point out some limitation we started to see with the current SunSpace implementation. Due to the massive growth of the user population (>20,000 users), we experienced  performance and scalability challenges with the current technology. Also at the time - Sun Internal Communications and SunIT planned to replace the entire Sun Intranet with SunSpace. We  kicked-off a project to evaluate the enterprise level technology which eventually would replace the good old static Intranet.  And then Oracle acquired Sun. We already had defined the functional requirements for the Intranet replacement with a Social Enterprise Stack and we just needed to evaluate the functional requirements against WebCenter   Below are the summary of this evaluation  MyProfile SunSpace WebCenter How WebCenter Works Home MyProfile: to access, click on your name at the top of any WebCenter page Your name, title, and reporting line are displayed.  Sub-tabs show your activity stream (Activities); people in your network (Connections); files you have uploaded (Documents); your contact information (Organization); and any personal information you wish to share (About).   Files MyFiles Allows you to upload, download and store documents or wiki pages within folders and subfolders.  The WebDav interface allows you to download / upload files / folders with a simple drag and drop to / from your local machine.  Tagging is supported and recommended. Network HomeMyConnections Home: displays the activity stream of individuals in your network.MyConnections: shows individuals in your network.  Click on a person's name to see their contact info and link to their profile. Status Updates MyProfle > Activties Add and displays  your recent activties and status updates. Watches Preferences > Subscriptions > Current Subscriptions Receive email notifications when  pages / spaces you watch are modified. Drafts N/A WebCenter does not support Drafts Settings Preferences: to access, click on 'Preferences' at the top of any WebCenter page Set your general preferences, as well as your WebCenter messaging, search and mail settings. MyCommunities MySpaces: to access, click on 'Spaces' at the top of any WebCenter page Displays MySpaces (communities you are a member of); and Recent Spaces (communities you have recently visited). Community SunSpace Webcenter How Webcenter Works Home Home Displays a community introduction and activity stream.  Members can add messages, links or documents via the Community Message Board. No Top Contributors widget. People Members Lists members of the community. The Mail All Members feature allows moderators and participants to send a message to all members of the community. 
Membership Management can be found under > Manage > Members News News Members can post and access latest community news and they can subscribe to news using an RSS reader Documents Documents Allows community members to upload, download and store documents or wiki pages within folders and subfolders.  The WebDav interface allows participants to download / upload files / folders with a simple drag and drop to / from your local machine.  Tagging is supported and recommended. Wiki Wiki Allows community members to create and update web pages with a WYSIWYG editor.  Note: WebCenter does not support macros or portlet embedding. Forum Forum Post community forum topics. Contribute to community forum conversations.  N/A Calendar Update and/or view the Community Calendar. N/A Analytics Displays detailed analytics data (views,downloads, unique users etc.) for Pages, Wiki, Documents, and Forum in a given community space. What is the adoption of WebCenter at Oracle? The entire Intranet serving around 100,000 users  is running on WebCenter Content.  For professional communities we use WebCenter Portal and Spaces. Currently we have around 6,000 community spaces with  around 40,000 members.  Does Oracle have any metrics to assess usage and impact of WebCenter? Can you give us some examples? Sure -  we have a lot of metrics   For the Intranet we use traditional metrics like pageviews, monthly unique visitors and unique visits.  For Communities we use the WebCenter Portal/Spaces analytics service which gives as a wealth of data. The key metrics we track are: Space traffic (PageViews, Unique Users) Wiki,Documents (views, downloads etc.) Forum (users, views, posts etc.) Registered members over time  Depending on the community we can filter/segment the metrics by User Properties e.g. Country, Organization, Job Role etc. What are you doing to improve usage and impact? 1. We  integrating the WebCenter social services/fabric into all  main business applications. As example The Fusion CRM deployment is seamless integrated with Oracle Social Network (OSN) and all conversation around an opportunity or customer engagement is  done in OSN (see youtube video). 2. We drive Social Best Practice trough a program called "Social Networking & Business Collaboration (SNBC) program" You worked both with WebCenter and SunSpace. Knowing what you know today, if you had the chance to choose between the two, which one would you choose? Why? That's a tricky question   In the early days of  the Social Enterprise implementation (we started SunSpace in 2006), we needed an agile and easy to deploy technology to keep up with the users requirements. Sometimes we pushed two releases per day  and we were in a permanent perpetual beta mode - SunSpace was perfect for that.  After the social implementation matured over time - community generated content became business critical and we saw a change in the  requirements from agile to stability, scalability and reliability  of the infrastructure.  WebCenter is the right choice for such an enterprise-level deployment.  You are a WebCenter Evangelist at Oracle. What do you do as part of that role? Our  role is to help position Oracle as one of the key thought leaders and solutions provider for Social Business. In addition we drive social innovation trough our Oracle Appslab  team. Is that a full time role? Yes  How many other Evangelists are there in Oracle? 
We are currently 5 people in the WebCenter evangelist team (@webcentervoices):
Christian Finn (@cfinn) leads the team - Christian came from the Microsoft SharePoint product management team and is a recognized expert in Social Business and Enterprise Collaboration.
Noël Jaffré (@noeljaffre) is our Web Experience Management (WEM) guru and came to Oracle via the FatWire acquisition (now WebCenter Sites).
Jake Kuramoto (@theapplab) is part of the Oracle AppsLab innovation team - Jake is well known as the driving force behind http://theappslab.com, a blog around social and innovation.
Noel Portugal (@noelportugal) is a developer in the Oracle AppsLab innovation team - he is the inventor of OraTweet, Oracle's internal tweeting platform.
Peter Reiser (@peterreiser) is a Social Business guru and the inventor of SunSpace and Community Equity.

What area of the business do you and the rest of the Evangelists sit in? What area of the organisation is responsible for WebCenter?
We are part of the WebCenter product management organization.

Is WebCenter part of the Knowledge Management strategy?
Oracle WebCenter is Oracle's user engagement platform for social business. It brings together the most complete portfolio of portal, web experience management, content, social and collaboration technologies into a single product suite and is the product foundation of the Oracle Knowledge Management strategy.

I am aware Oracle also uses Beehive internally. How would you describe Beehive?
Oracle Beehive provides an integrated set of communication and collaboration services built on a single scalable, secure, enterprise-class platform. Beehive is used internally for enterprise-wide mail, calendar and real-time collaboration (Web conferencing) services.

Are Beehive and WebCenter connected?
Historically, Beehive and WebCenter Portal & Content had some overlap in functionality. (Hey - if a company has an acquisition strategy to strengthen its product offering and accelerate innovation, it's pretty normal that functional overlap exists :-)) A key objective of the WebCenter strategy is to combine all social and collaboration offerings under the WebCenter product family. That means that certain Beehive components will be integrated into the overall WebCenter product offering.

Are there any other internal collaboration tools at Oracle? Which ones?
There are two other main social tools which are widely used at Oracle.
Oracle Connect was the first social tool the Oracle AppsLab team created in 2007 (see Jake's blog post for details). It is still extensively used. ... and as a former Sun guy I like this quote from the blog post: "Traffic to Connect peaked right after the Sun merger in 2010, when it served several hundred thousand pageviews each month; since then, traffic has subsided, but still averages tens of thousands of pageviews to several thousand users each month."
OraTweet, Oracle's internal microblogging platform, has been used since June 2008 and it is still growing. It's entirely written in Oracle Application Express (APEX), which is a rapid web application development tool for the Oracle database. Wanna try it out? Here you can download the code.

What is Oracle's strategy regarding (all these) collaboration tools?
Pretty straightforward. The strategy is to seamlessly integrate the WebCenter social & collaboration services into all Business Applications to help customers socialize their enterprise.

    Read the article

  • Immutable Method in Java

    - by Chris Okyen
    In Java, there is the final keyword in lieu of the const keyword in C and C++. In the latter languages there are mutable and immutable methods, as stated in the answer by Johannes Schaub - litb to the question "How many and which are the uses of const in C++?": use const to tell others that methods won't change the logical state of this object. struct SmartPtr { int getCopies() const { return mCopiesMade; } } ptr1; ... int var = ptr1.getCopies(); // returns mCopiesMade and, being const, must not modify the object's state. How is this performed in Java?

    Read the article

  • Same SELECT used in an INSERT has different execution plan

    - by amacias
    A customer complained that a query and its INSERT counterpart had different execution plans, and of course, the INSERT was slower. First let's look at the SELECT:

SELECT ua_tr_rundatetime,
       ua_ch_treatmentcode,
       ua_tr_treatmentcode,
       ua_ch_cellid,
       ua_tr_cellid
FROM   (SELECT DISTINCT CH.treatmentcode AS UA_CH_TREATMENTCODE,
                        CH.cellid        AS UA_CH_CELLID
        FROM   CH,
               DL
        WHERE  CH.contactdatetime > SYSDATE - 5
               AND CH.treatmentcode = DL.treatmentcode) CH_CELLS,
       (SELECT DISTINCT T.treatmentcode AS UA_TR_TREATMENTCODE,
                        T.cellid        AS UA_TR_CELLID,
                        T.rundatetime   AS UA_TR_RUNDATETIME
        FROM   T,
               DL
        WHERE  T.treatmentcode = DL.treatmentcode) TRT_CELLS
WHERE  CH_CELLS.ua_ch_treatmentcode(+) = TRT_CELLS.ua_tr_treatmentcode;

The query has 2 DISTINCT subqueries. The execution plan shows the DISTINCT Placement transformation applied to one of them but not the other. The view in Step 5 has the prefix VW_DTP, which means DISTINCT Placement.

--------------------------------------------------------------------
| Id  | Operation                    | Name            | Cost (%CPU)
--------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                 |   272K(100)
|*  1 |  HASH JOIN OUTER             |                 |   272K  (1)
|   2 |   VIEW                       |                 |  4408   (1)
|   3 |    HASH UNIQUE               |                 |  4408   (1)
|*  4 |     HASH JOIN                |                 |  4407   (1)
|   5 |      VIEW                    | VW_DTP_48BAF62C |  1660   (2)
|   6 |       HASH UNIQUE            |                 |  1660   (2)
|   7 |        TABLE ACCESS FULL     | DL              |  1644   (1)
|   8 |      TABLE ACCESS FULL       | T               |  2744   (1)
|   9 |   VIEW                       |                 |   267K  (1)
|  10 |    HASH UNIQUE               |                 |   267K  (1)
|* 11 |     HASH JOIN                |                 |   267K  (1)
|  12 |      PARTITION RANGE ITERATOR|                 |   266K  (1)
|* 13 |       TABLE ACCESS FULL      | CH              |   266K  (1)
|  14 |      TABLE ACCESS FULL       | DL              |  1644   (1)
--------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$AF418D5F / TRT_CELLS@SEL$1
   3 - SEL$AF418D5F
   5 - SEL$F6AECEDE / VW_DTP_48BAF62C@SEL$48BAF62C
   6 - SEL$F6AECEDE
   7 - SEL$F6AECEDE / DL@SEL$3
   8 - SEL$AF418D5F / T@SEL$3
   9 - SEL$2        / CH_CELLS@SEL$1
  10 - SEL$2
  13 - SEL$2        / CH@SEL$2
  14 - SEL$2        / DL@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("CH_CELLS"."UA_CH_TREATMENTCODE"="TRT_CELLS"."UA_TR_TREATMENTCODE")
   4 - access("T"."TREATMENTCODE"="ITEM_1")
  11 - access("CH"."TREATMENTCODE"="DL"."TREATMENTCODE")
  13 - filter("CH"."CONTACTDATETIME">SYSDATE@!-5)

The outline shows PLACE_DISTINCT(@"SEL$3" "DL"@"SEL$3"), indicating that QB3 is the query block that got the transformation.
Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$F6AECEDE")
      OUTLINE_LEAF(@"SEL$AF418D5F")
      PLACE_DISTINCT(@"SEL$3" "DL"@"SEL$3")
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE(@"SEL$48BAF62C")
      OUTLINE(@"SEL$3")
      NO_ACCESS(@"SEL$1" "TRT_CELLS"@"SEL$1")
      NO_ACCESS(@"SEL$1" "CH_CELLS"@"SEL$1")
      LEADING(@"SEL$1" "TRT_CELLS"@"SEL$1" "CH_CELLS"@"SEL$1")
      USE_HASH(@"SEL$1" "CH_CELLS"@"SEL$1")
      FULL(@"SEL$2" "CH"@"SEL$2")
      FULL(@"SEL$2" "DL"@"SEL$2")
      LEADING(@"SEL$2" "CH"@"SEL$2" "DL"@"SEL$2")
      USE_HASH(@"SEL$2" "DL"@"SEL$2")
      USE_HASH_AGGREGATION(@"SEL$2")
      NO_ACCESS(@"SEL$AF418D5F" "VW_DTP_48BAF62C"@"SEL$48BAF62C")
      FULL(@"SEL$AF418D5F" "T"@"SEL$3")
      LEADING(@"SEL$AF418D5F" "VW_DTP_48BAF62C"@"SEL$48BAF62C" "T"@"SEL$3")
      USE_HASH(@"SEL$AF418D5F" "T"@"SEL$3")
      USE_HASH_AGGREGATION(@"SEL$AF418D5F")
      FULL(@"SEL$F6AECEDE" "DL"@"SEL$3")
      USE_HASH_AGGREGATION(@"SEL$F6AECEDE")
      END_OUTLINE_DATA
  */

The 10053 trace shows a cost comparison with and without the transformation. This means the transformation belongs to the Cost-Based Query Transformations (CBQT). In SEL$3 the cost of the query block without the transformation is 6659.73 and with the transformation it is 4408.41, so the transformation is kept.

GBP/DP: Checking validity of GBP/DP for query block SEL$3 (#3)
DP: Checking validity of distinct placement for query block SEL$3 (#3)
DP: Using search type: linear
DP: Considering distinct placement on query block SEL$3 (#3)
DP: Starting iteration 1, state space = (5) : (0)
DP: Original query
DP: Costing query block.
DP: Updated best state, Cost = 6659.73
DP: Starting iteration 2, state space = (5) : (1)
DP: Using DP transformation in this iteration.
DP: Transformed query
DP: Costing query block.
DP: Updated best state, Cost = 4408.41
DP: Doing DP on the original QB.
DP: Doing DP on the preserved QB.

In SEL$2 the cost without the transformation is less than with it, so it is not kept.

GBP/DP: Checking validity of GBP/DP for query block SEL$2 (#2)
DP: Checking validity of distinct placement for query block SEL$2 (#2)
DP: Using search type: linear
DP: Considering distinct placement on query block SEL$2 (#2)
DP: Starting iteration 1, state space = (3) : (0)
DP: Original query
DP: Costing query block.
DP: Updated best state, Cost = 267936.93
DP: Starting iteration 2, state space = (3) : (1)
DP: Using DP transformation in this iteration.
DP: Transformed query
DP: Costing query block.
DP: Not update best state, Cost = 267951.66

When an INSERT INTO is added to the same query, the result is a very different execution plan.
INSERT INTO cc
            (ua_tr_rundatetime,
             ua_ch_treatmentcode,
             ua_tr_treatmentcode,
             ua_ch_cellid,
             ua_tr_cellid)
SELECT ua_tr_rundatetime,
       ua_ch_treatmentcode,
       ua_tr_treatmentcode,
       ua_ch_cellid,
       ua_tr_cellid
FROM   (SELECT DISTINCT CH.treatmentcode AS UA_CH_TREATMENTCODE,
                        CH.cellid        AS UA_CH_CELLID
        FROM   CH,
               DL
        WHERE  CH.contactdatetime > SYSDATE - 5
               AND CH.treatmentcode = DL.treatmentcode) CH_CELLS,
       (SELECT DISTINCT T.treatmentcode AS UA_TR_TREATMENTCODE,
                        T.cellid        AS UA_TR_CELLID,
                        T.rundatetime   AS UA_TR_RUNDATETIME
        FROM   T,
               DL
        WHERE  T.treatmentcode = DL.treatmentcode) TRT_CELLS
WHERE  CH_CELLS.ua_ch_treatmentcode(+) = TRT_CELLS.ua_tr_treatmentcode;

----------------------------------------------------------
| Id  | Operation                     | Name | Cost (%CPU)
----------------------------------------------------------
|   0 | INSERT STATEMENT              |      |   274K(100)
|   1 |  LOAD TABLE CONVENTIONAL      |      |
|*  2 |   HASH JOIN OUTER             |      |   274K  (1)
|   3 |    VIEW                       |      |  6660   (1)
|   4 |     SORT UNIQUE               |      |  6660   (1)
|*  5 |      HASH JOIN                |      |  6659   (1)
|   6 |       TABLE ACCESS FULL       | DL   |  1644   (1)
|   7 |       TABLE ACCESS FULL       | T    |  2744   (1)
|   8 |    VIEW                       |      |   267K  (1)
|   9 |     SORT UNIQUE               |      |   267K  (1)
|* 10 |      HASH JOIN                |      |   267K  (1)
|  11 |       PARTITION RANGE ITERATOR|      |   266K  (1)
|* 12 |        TABLE ACCESS FULL      | CH   |   266K  (1)
|  13 |       TABLE ACCESS FULL       | DL   |  1644   (1)
----------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   3 - SEL$3 / TRT_CELLS@SEL$1
   4 - SEL$3
   6 - SEL$3 / DL@SEL$3
   7 - SEL$3 / T@SEL$3
   8 - SEL$2 / CH_CELLS@SEL$1
   9 - SEL$2
  12 - SEL$2 / CH@SEL$2
  13 - SEL$2 / DL@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("CH_CELLS"."UA_CH_TREATMENTCODE"="TRT_CELLS"."UA_TR_TREATMENTCODE")
   5 - access("T"."TREATMENTCODE"="DL"."TREATMENTCODE")
  10 - access("CH"."TREATMENTCODE"="DL"."TREATMENTCODE")
  12 - filter("CH"."CONTACTDATETIME">SYSDATE@!-5)

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"INS$1")
      FULL(@"INS$1" "CC"@"INS$1")
      NO_ACCESS(@"SEL$1" "TRT_CELLS"@"SEL$1")
      NO_ACCESS(@"SEL$1" "CH_CELLS"@"SEL$1")
      LEADING(@"SEL$1" "TRT_CELLS"@"SEL$1" "CH_CELLS"@"SEL$1")
      USE_HASH(@"SEL$1" "CH_CELLS"@"SEL$1")
      FULL(@"SEL$2" "CH"@"SEL$2")
      FULL(@"SEL$2" "DL"@"SEL$2")
      LEADING(@"SEL$2" "CH"@"SEL$2" "DL"@"SEL$2")
      USE_HASH(@"SEL$2" "DL"@"SEL$2")
      USE_HASH_AGGREGATION(@"SEL$2")
      FULL(@"SEL$3" "DL"@"SEL$3")
      FULL(@"SEL$3" "T"@"SEL$3")
      LEADING(@"SEL$3" "DL"@"SEL$3" "T"@"SEL$3")
      USE_HASH(@"SEL$3" "T"@"SEL$3")
      USE_HASH_AGGREGATION(@"SEL$3")
      END_OUTLINE_DATA
  */

There is no DISTINCT Placement view and no hint.
The 10053 trace shows a new legend, "DP: Bypassed: Not SELECT", implying that this transformation is possible only for SELECT statements.

GBP/DP: Checking validity of GBP/DP for query block SEL$3 (#4)
DP: Checking validity of distinct placement for query block SEL$3 (#4)
DP: Bypassed: Not SELECT.
GBP/DP: Checking validity of GBP/DP for query block SEL$2 (#3)
DP: Checking validity of distinct placement for query block SEL$2 (#3)
DP: Bypassed: Not SELECT.

In 12.1 (and hopefully in 11.2.0.4 when released) the restriction on applying CBQT to some DML and DDL statements (like CTAS) is lifted. This is documented in bug tag note 10013899.8, "Allow CBQT for some DML / DDL". And interestingly enough, it is possible to have a one-off patch in 11.2.0.3.

SQL> select DESCRIPTION, OPTIMIZER_FEATURE_ENABLE, IS_DEFAULT
  2  from v$system_fix_control where BUGNO='10013899';

DESCRIPTION
----------------------------------------------------------------
OPTIMIZER_FEATURE_ENABLE  IS_DEFAULT
------------------------- ----------
enable some transformations for DDL and DML statements
11.2.0.4                           1
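As a rough sketch (not part of the original article), assuming the one-off patch for bug 10013899 has been applied to 11.2.0.3, the fix could then be enabled per session through the fix control and the new plan checked for a DISTINCT Placement view; the "_fix_control" parameter and DBMS_XPLAN.DISPLAY_CURSOR are standard, but treating this particular fix as session-switchable in this way is an assumption:

-- Assumption: the one-off patch for bug 10013899 is installed on 11.2.0.3.
-- 1 enables the fix for the current session, 0 disables it.
ALTER SESSION SET "_fix_control" = '10013899:1';

-- Re-run the INSERT ... SELECT from above, then display the plan of the
-- last statement with its outline; a VW_DTP_% view would indicate that
-- DISTINCT Placement was applied.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'OUTLINE'));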

    Read the article

  • is of a type that is invalid for use as a key column in an index.

    - by acidzombie24
    I get the error "Column 'key' in table 'misc_info' is of a type that is invalid for use as a key column in an index", where key is an nvarchar(max). A quick Google search found this; however, it doesn't explain what the solution is. How do I create something like a Dictionary where the key and value are both strings and, obviously, the key must be unique and single-valued? My SQL statement was create table [misc_info] ( [id] INTEGER PRIMARY KEY IDENTITY NOT NULL, [key] nvarchar(max) UNIQUE NOT NULL, [value] nvarchar(max) NOT NULL);
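    A common workaround (a sketch, not from the original question) is to shorten the key column so it fits within SQL Server's classic 900-byte index key size limit, i.e. nvarchar(450), which makes the UNIQUE constraint indexable while the value column stays nvarchar(max); the 450-character cap on keys is an assumption about the data:

    -- Sketch: assumes keys never exceed 450 characters (900 bytes), the
    -- classic index key size limit, so the UNIQUE constraint can be indexed.
    CREATE TABLE [misc_info] (
        [id]    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        [key]   NVARCHAR(450) NOT NULL UNIQUE,   -- indexable, enforces uniqueness
        [value] NVARCHAR(MAX) NOT NULL           -- values can remain unbounded
    );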

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >