Search Results

Search found 9299 results on 372 pages for 'policy and procedure manual'.


  • Best method to implement a filtered search

    - by j0N45
    I would like to ask your opinion on implementing a filtered search form. Imagine the following case: one big table with lots of columns (it might be important to say that this is SQL Server). You need to implement a form to search data in this table, and in this form you'll have several check boxes that allow you to customize the search. My question is which of the following would be the best way to implement it:

    1. Create a stored procedure with a query inside. The stored procedure checks whether each parameter was supplied by the application, and when one is not supplied, a wildcard is put into the query.
    2. Build a dynamic query according to what is supplied by the application.

    I am asking because I know SQL Server creates an execution plan for a stored procedure in order to optimize its performance; by building a dynamic query inside the stored procedure, do we sacrifice the optimization gained from that cached plan? Please tell me what the best approach would be in your opinion.
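    For illustration, a minimal T-SQL sketch of the two options being weighed, using a hypothetical dbo.Products table and two optional filters (all object names here are illustrative, not from the question):

        -- Option 1: static query with catch-all predicates (one cached plan)
        CREATE PROCEDURE dbo.SearchProducts_Static
            @Name       VARCHAR(100) = NULL,
            @CategoryId INT          = NULL
        AS
        BEGIN
            SELECT p.ProductId, p.Name, p.CategoryId
            FROM dbo.Products AS p
            WHERE (@Name IS NULL OR p.Name LIKE @Name + '%')
              AND (@CategoryId IS NULL OR p.CategoryId = @CategoryId);
        END;
        GO

        -- Option 2: dynamic SQL via sp_executesql (each distinct WHERE shape
        -- gets its own cached, parameterized plan)
        CREATE PROCEDURE dbo.SearchProducts_Dynamic
            @Name       VARCHAR(100) = NULL,
            @CategoryId INT          = NULL
        AS
        BEGIN
            DECLARE @sql NVARCHAR(MAX) =
                N'SELECT p.ProductId, p.Name, p.CategoryId
                  FROM dbo.Products AS p
                  WHERE 1 = 1';
            IF @Name IS NOT NULL       SET @sql += N' AND p.Name LIKE @Name + ''%''';
            IF @CategoryId IS NOT NULL SET @sql += N' AND p.CategoryId = @CategoryId';

            EXEC sp_executesql @sql,
                 N'@Name VARCHAR(100), @CategoryId INT',
                 @Name = @Name, @CategoryId = @CategoryId;
        END;
        GO

    Worth noting: SQL Server compiles and caches a plan on first execution rather than when the procedure is created, and parameterized dynamic SQL run through sp_executesql gets its own cached plan per distinct statement text, so option 2 does not forfeit plan reuse. The catch-all pattern in option 1 is what risks a single cached plan being reused for very different filter combinations.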

    Read the article

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
    Description

    I have an Oracle stored procedure that has been running for 7 or so years, both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It has worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table, then completes. In 10g the cursor will contain the expected results, but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows).

    The specific code is much more complicated, but to explain the issue in a somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean column (NUMBER(1), where 0 is false and 1 is true) indicating whether that data has been processed yet. The view is limited to only show rows that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc.). So at the time the cursor is opened, the view shows one or more rows; then after the cursor is opened, an update statement runs to flip the flag in the header table, a commit is issued, and the procedure completes.

    On 10g, the cursor opens, it contains the row, then the update statement flips the flag, and running the procedure a second time would yield no data. On 11g, the cursor never contains the row; it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions, and whether the issue can be resolved without code changes.

    Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g, the issue is reproducible 100% of the time, regardless of whether I'm running the real-world code against the actual objects or the following simple example.

    Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table.
    Simple Example

        CREATE TABLE tbl1 (
            col1 VARCHAR2(10),
            col2 NUMBER(1)
        );

        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        /* View is no longer required to demonstrate the problem
        CREATE OR REPLACE VIEW vw1 (col1, col2) AS
            SELECT col1, col2 FROM tbl1 WHERE col2 = 0;
        */

        CREATE OR REPLACE PACKAGE pkg1 AS
            TYPE refWEB_CURSOR IS REF CURSOR;
            PROCEDURE proc1 (crs OUT refWEB_CURSOR);
        END pkg1;

        CREATE OR REPLACE PACKAGE BODY pkg1 IS
            PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS
            BEGIN
                OPEN crs FOR
                    SELECT col1
                    FROM tbl1
                    WHERE col1 = 'TEST1'
                    AND col2 = 0;
                UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1';
                COMMIT;
            END proc1;
        END pkg1;

    Anonymous Block Demo

        DECLARE
            crs1 pkg1.refWEB_CURSOR;
            TYPE rectype1 IS RECORD (
                col1 vw1.col1%TYPE
            );
            rec1 rectype1;
        BEGIN
            pkg1.proc1 ( crs1 );
            DBMS_OUTPUT.PUT_LINE('begin first test');
            LOOP
                FETCH crs1 INTO rec1;
                EXIT WHEN crs1%NOTFOUND;
                DBMS_OUTPUT.PUT_LINE(rec1.col1);
            END LOOP;
            DBMS_OUTPUT.PUT_LINE('end first test');
        END;

        /* After creating this index, the problem is seen */
        CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1);

        /* Reset data to initial values */
        TRUNCATE TABLE tbl1;
        INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);

        DECLARE
            crs1 pkg1.refWEB_CURSOR;
            TYPE rectype1 IS RECORD (
                col1 vw1.col1%TYPE
            );
            rec1 rectype1;
        BEGIN
            pkg1.proc1 ( crs1 );
            DBMS_OUTPUT.PUT_LINE('begin second test');
            LOOP
                FETCH crs1 INTO rec1;
                EXIT WHEN crs1%NOTFOUND;
                DBMS_OUTPUT.PUT_LINE(rec1.col1);
            END LOOP;
            DBMS_OUTPUT.PUT_LINE('end second test');
        END;

    Example of what the output on 10g would be:

        begin first test
        TEST1
        end first test
        begin second test
        TEST1
        end second test

    Example of what the output on 11g would be:

        begin first test
        TEST1
        end first test
        begin second test
        end second test

    Clarification: I can't remove the COMMIT, because in the real-world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure, it issues an implicit COMMIT when disconnecting from the database anyway. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work, but the real-world scenario would not, because the COMMIT would still happen.

    Question: Why is 11g behaving differently? Is there anything I can do other than re-write the code?

    Read the article

  • How can I forward ALL traffic over a site-to-site VPN on Cisco ASA?

    - by Scott Clements
    Hi There, I currently have two Cisco ASA 5100 routers. They are at different physical sites and are configured with a site-to-site VPN which is active and working. I can communicate with the subnets on either site from the other, and both are connected to the internet; however, I need to ensure that all the traffic at my remote site goes through this VPN to my site here. I know that the web traffic does so, as a "tracert" confirms it, but I need to ensure that all other network traffic is also directed over this VPN to my network here. Here is my config for the ASA router at my remote site:

        hostname ciscoasa
        domain-name xxxxx
        enable password 78rl4MkMED8xiJ3g encrypted
        names
        !
        interface Ethernet0/0
         nameif NIACEDC
         security-level 100
         ip address x.x.x.x 255.255.255.0
        !
        interface Ethernet0/1
         description External Janet Connection
         nameif JANET
         security-level 0
         ip address x.x.x.x 255.255.255.248
        !
        interface Ethernet0/2
         shutdown
         no nameif
         security-level 100
         no ip address
        !
        interface Ethernet0/3
         shutdown
         no nameif
         security-level 100
         ip address dhcp setroute
        !
        interface Management0/0
         nameif management
         security-level 100
         ip address 192.168.100.1 255.255.255.0
         management-only
        !
        passwd 2KFQnbNIdI.2KYOU encrypted
        ftp mode passive
        clock timezone GMT/BST 0
        clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00
        dns domain-lookup NIACEDC
        dns server-group DefaultDNS
         name-server 154.32.105.18
         name-server 154.32.107.18
         domain-name XXXX
        same-security-traffic permit inter-interface
        same-security-traffic permit intra-interface
        access-list ren_access_in extended permit ip any any
        access-list ren_access_in extended permit tcp any any
        access-list ren_nat0_outbound extended permit ip 192.168.6.0 255.255.255.0 192.168.3.0 255.255.255.0
        access-list NIACEDC_nat0_outbound extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0
        access-list JANET_20_cryptomap extended permit ip 192.168.12.0 255.255.255.0 192.168.3.0 255.255.255.0
        access-list NIACEDC_access_in extended permit ip any any
        access-list NIACEDC_access_in extended permit tcp any any
        access-list JANET_access_out extended permit ip any any
        access-list NIACEDC_access_out extended permit ip any any
        pager lines 24
        logging enable
        logging asdm informational
        mtu NIACEDC 1500
        mtu JANET 1500
        mtu management 1500
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-522.bin
        no asdm history enable
        arp timeout 14400
        nat-control
        global (NIACEDC) 1 interface
        global (JANET) 1 interface
        nat (NIACEDC) 0 access-list NIACEDC_nat0_outbound
        nat (NIACEDC) 1 192.168.12.0 255.255.255.0
        access-group NIACEDC_access_in in interface NIACEDC
        access-group NIACEDC_access_out out interface NIACEDC
        access-group JANET_access_out out interface JANET
        route JANET 0.0.0.0 0.0.0.0 194.82.121.82 1
        route JANET 0.0.0.0 0.0.0.0 192.168.3.248 tunneled
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout uauth 0:05:00 absolute
        http server enable
        http 192.168.12.0 255.255.255.0 NIACEDC
        http 192.168.100.0 255.255.255.0 management
        http 192.168.9.0 255.255.255.0 NIACEDC
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto map JANET_map 20 match address JANET_20_cryptomap
        crypto map JANET_map 20 set pfs
        crypto map JANET_map 20 set peer X.X.X.X
        crypto map JANET_map 20 set transform-set ESP-AES-256-SHA
        crypto map JANET_map interface JANET
        crypto isakmp enable JANET
        crypto isakmp policy 10
         authentication pre-share
         encryption aes-256
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 30
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 50
         authentication pre-share
         encryption aes-256
         hash sha
         group 5
         lifetime 86400
        tunnel-group X.X.X.X type ipsec-l2l
        tunnel-group X.X.X.X ipsec-attributes
         pre-shared-key *
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd address 192.168.100.2-192.168.100.254 management
        dhcpd enable management
        !
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect http
        !
        service-policy global_policy global
        prompt hostname context
        no asdm history enable

    Thanks in advance, Scott

    Read the article

  • http request via iptables --to-destination ip redirect results in no response

    - by Wouter Vegter
    I have two Ubuntu servers, each with its own IP address. Let's call them server1 and server2, with IPs 1.1.1.1 and 2.2.2.2 respectively. nginx is running on server2. The sole purpose I want server1 to have is to redirect all incoming HTTP (port 80) requests to server2, without clients noticing that their request is being redirected. I tried the following command on server1:

        iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2

    But when I enter 1.1.1.1 in my browser I get no response: the page keeps trying to load without giving any message or error message (I get a time-out after 2-3 minutes). When I remove the above iptables rule, I immediately get a "page not found" error on entering 1.1.1.1, so something is working, just not as it should: when I enter 1.1.1.1 I want the HTML page hosted on 2.2.2.2 to load, and when I enter 2.2.2.2 in my browser directly I do see that webpage. Could anyone please help me with this? I have been searching for quite some time (on Server Fault and Google), so that's why I ask. Many thanks for reading my question!

    Update: Thank you all for your information. Unfortunately I still get no response. I have the following iptables configuration:

        root@ip-10-48-238-216:/home/ubuntu# sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

        root@ip-10-48-238-216:/home/ubuntu# sudo iptables -t nat -L
        Chain PREROUTING (policy ACCEPT)
        target     prot opt source               destination
        DNAT       tcp  --  anywhere             anywhere             tcp dpt:www to:2.2.2.2
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source               destination

    When I run tcpdump and make a request via Chrome to 1.1.1.1, I get the following:

        root@ip-10-48-238-216:/home/ubuntu# sudo tcpdump -i eth0 port 80 -vv
        tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        13:56:18.346625 IP (tos 0x0, ttl 52, id 12055, offset 0, flags [DF], proto TCP (6), length 60)
            212-123-161-112.ip.telfort.nl.16386 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xb398 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0
        13:56:18.346662 IP (tos 0x0, ttl 51, id 12055, offset 0, flags [DF], proto TCP (6), length 60)
            212-123-161-112.ip.telfort.nl.16386 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9ee0 (correct), seq 2639758575, win 5840, options [mss 1460,sackOK,TS val 1223672 ecr 0,nop,wscale 6], length 0
        13:56:18.598747 IP (tos 0x0, ttl 52, id 10138, offset 0, flags [DF], proto TCP (6), length 60)
            212-123-161-112.ip.telfort.nl.16387 > ip-10-48-238-216.eu-west-1.compute.internal.www: Flags [S], cksum 0xac40 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0
        13:56:18.598777 IP (tos 0x0, ttl 51, id 10138, offset 0, flags [DF], proto TCP (6), length 60)
            212-123-161-112.ip.telfort.nl.16387 > ww1dc1.shopreme.com.www: Flags [S], cksum 0x9788 (correct), seq 2645658541, win 5840, options [mss 1460,sackOK,TS val 1223735 ecr 0,nop,wscale 6], length 0
        ^C
        4 packets captured
        4 packets received by filter
        0 packets dropped by kernel

    The addresses mentioned relate to the following:

        212-123-161-112.ip.telfort.nl.16386              : my personal computer
        ww1dc1.shopreme.com.www                          : DNS of server2 (2.2.2.2)
        ip-10-48-238-216.eu-west-1.compute.internal.www  : Amazon Web Services EC2 internal address of server1 (1.1.1.1)

    However, the tcpdump log on server2 (2.2.2.2) stays empty and I get no response back in my browser. I am able to ping from server1 to server2, net.ipv4.ip_forward is set to 1, and so is /proc/sys/net/ipv4/ip_forward. Could there be anything else that is missing?
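    For reference, a DNAT redirect of this kind usually needs more than the single PREROUTING rule; below is a hedged sketch of the rule set commonly required on the middle box (the eth0 interface name and the absence of other firewall rules are assumptions):

        # Rewrite the destination of incoming HTTP packets
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2

        # Allow the rewritten packets to be forwarded on
        iptables -A FORWARD -p tcp -d 2.2.2.2 --dport 80 -j ACCEPT

        # Rewrite the source as well, so server2's replies travel back via
        # server1; otherwise 2.2.2.2 answers the client directly and the
        # client discards replies coming from an unexpected address
        iptables -t nat -A POSTROUTING -p tcp -d 2.2.2.2 --dport 80 -j MASQUERADE

        # Kernel forwarding must be on
        sysctl -w net.ipv4.ip_forward=1

    The capture above is consistent with the missing source rewrite: the SYNs leave server1 for 2.2.2.2 still bearing the client's source address, so the return path would bypass server1 entirely.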

    Read the article

  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected: clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands:

        iptables -t nat -N sshuttle-12300
        iptables -t nat -F sshuttle-12300
        iptables -t nat -I OUTPUT 1 -j sshuttle-12300
        iptables -t nat -I PREROUTING 1 -j sshuttle-12300
        iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
        iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp

    Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario. [Diagram image not preserved.]

    [Edit: added output from the OpenVPN box]

        $ sudo iptables -nL -v -t nat
        Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes)
         pkts bytes target          prot opt in   out  source        destination
         1512  253K sshuttle-12300  all  --  *    *    0.0.0.0/0     0.0.0.0/0

        Chain INPUT (policy ACCEPT 322 packets, 58984 bytes)
         pkts bytes target          prot opt in   out  source        destination

        Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes)
         pkts bytes target          prot opt in   out  source        destination
          587 43421 sshuttle-12300  all  --  *    *    0.0.0.0/0     0.0.0.0/0

        Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes)
         pkts bytes target          prot opt in   out  source        destination
         1175 76298 MASQUERADE      all  --  *    eth0 10.8.0.0/24   0.0.0.0/0

        Chain sshuttle-12300 (2 references)
         pkts bytes target          prot opt in   out  source        destination
           17  1076 REDIRECT        tcp  --  *    *    0.0.0.0/0     10.10.0.0/24   TTL match TTL != 42 redir ports 12300
            0     0 RETURN          tcp  --  *    *    0.0.0.0/0     127.0.0.0/8

        $ sudo iptables -nL -v -t filter
        Chain INPUT (policy ACCEPT 97493 packets, 30M bytes)
         pkts bytes target prot opt in   out  source        destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in   out  source        destination
         131K  109M ACCEPT all  --  *    *    0.0.0.0/0     0.0.0.0/0    state RELATED,ESTABLISHED
         1370 89160 ACCEPT all  --  *    *    10.8.0.0/24   0.0.0.0/0
            0     0 REJECT all  --  *    *    0.0.0.0/0     0.0.0.0/0    reject-with icmp-port-unreachable

    [Edit 2: more OpenVPN server output]

        $ netstat -r
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        default         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
        10.8.0.0        10.8.0.2        255.255.255.0   UG        0 0          0 tun0
        10.8.0.2        *               255.255.255.255 UH        0 0          0 tun0
        192.168.1.0     *               255.255.255.0   U         0 0          0 eth0

    [Edit 3: still more debug output]

    IP forwarding appears to be enabled correctly on the OpenVPN server:

        # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \;
        18926  0 -rw-r--r--  1 root root 0 Mar  5 13:31 /proc/sys/net/ipv4/conf/all/forwarding
        1
        18954  0 -rw-r--r--  1 root root 0 Mar  5 13:31 /proc/sys/net/ipv4/conf/default/forwarding
        1
        18978  0 -rw-r--r--  1 root root 0 Mar  5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding
        1
        19003  0 -rw-r--r--  1 root root 0 Mar  5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding
        1
        19028  0 -rw-r--r--  1 root root 0 Mar  5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding
        1

    Client routing table:

        $ netstat -r
        Routing tables
        Internet:
        Destination        Gateway            Flags    Refs      Use   Netif Expire
        0/1                10.8.0.5           UGSc        8       48    tun0
        default            192.168.1.1        UGSc        2     1652     en1
        10.8.0.1/32        10.8.0.5           UGSc        1        0    tun0
        10.8.0.5           10.8.0.6           UHr        13        0    tun0
        10.10.0/24         10.8.0.5           UGSc        0        0    tun0
        <snip>

    Traceroute from client:

        $ traceroute 10.10.0.9
        traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets
         1  10.8.0.1 (10.8.0.1)  5.403 ms  1.173 ms  1.086 ms
         2  192.168.1.1 (192.168.1.1)  4.693 ms  2.110 ms  1.990 ms
         3  l100.my-verizon-garbage (client-ext-ip)  7.453 ms  7.089 ms  6.248 ms
         4  * * *
         5  10.10.0.9 (10.10.0.9)  14.915 ms !N *  6.620 ms !N

    Read the article

  • got VPN l2l connect between a site & HQ but not traffice using ASA5505 on both ends

    - by vinlata
    Hi, could anyone see what I did wrong here? This is the configuration of site1 to HQ on an ASA5505. I can get connected, but it seems like no traffic is going (allowed) between them. Could it be a NAT issue? Any help would be much appreciated. Thanks

        interface Vlan1
         nameif inside
         security-level 100
         ip address 172.30.205.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address pppoe setroute
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
         shutdown
        !
        interface Ethernet0/3
         shutdown
        !
        interface Ethernet0/4
         shutdown
        !
        interface Ethernet0/5
         shutdown
        !
        interface Ethernet0/6
         shutdown
        !
        interface Ethernet0/7
         shutdown
        !
        passwd .dIuXDIYzD6RSHz7 encrypted
        ftp mode passive
        dns server-group DefaultDNS
         domain-name errg.net
        object-group network HQ
         network-object 172.22.0.0 255.255.0.0
         network-object 172.22.0.0 255.255.128.0
         network-object 172.22.0.0 255.255.255.128
         network-object 172.22.1.0 255.255.255.128
         network-object 172.22.1.0 255.255.255.0
        access-list inside_access_in extended permit ip any any
        access-list outside_access_in extended permit icmp any any echo-reply
        access-list outside_20_cryptomap extended permit ip 172.30.205.0 255.255.255.0 object-group HQ
        access-list inside_nat0_outbound extended permit ip 172.30.205.0 255.255.255.0 object-group HQ
        access-list policy-nat extended permit ip 172.30.205.0 255.255.255.0 172.22.0.0 255.255.0.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        nat-control
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) 172.30.205.0 access-list policy-nat
        access-group inside_access_in in interface inside
        access-group outside_access_in in interface outside
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout uauth 0:05:00 absolute
        username errgadmin password Os98gTdF8BZ0X2Px encrypted privilege 15
        http server enable
        http 64.42.2.224 255.255.255.240 outside
        http 172.22.0.0 255.255.0.0 outside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto map outside_map 190 match address outside_20_cryptomap
        crypto map outside_map 190 set pfs
        crypto map outside_map 190 set peer 66.7.249.109
        crypto map outside_map 190 set transform-set ESP-3DES-SHA
        crypto map outside_map 190 set phase1-mode aggressive
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 30
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 65535
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp nat-traversal 190
        crypto isakmp ipsec-over-tcp port 10000
        tunnel-group 66.7.249.109 type ipsec-l2l
        tunnel-group 66.7.249.109 ipsec-attributes
         pre-shared-key *
        telnet timeout 5
        ssh 172.30.205.0 255.255.255.0 inside
        ssh 172.22.0.0 255.255.0.0 outside
        ssh 64.42.2.224 255.255.255.240 outside
        ssh 172.25.0.0 255.255.128.0 outside
        ssh timeout 5
        console timeout 0
        management-access inside
        vpdn group PPPoEx request dialout pppoe
        vpdn group PPPoEx localname [email protected]
        vpdn group PPPoEx ppp authentication pap
        vpdn username [email protected] password *********
        dhcpd address 172.30.205.100-172.30.205.131 inside
        dhcpd dns 172.22.0.133 68.94.156.1 interface inside
        dhcpd wins 172.22.0.133 interface inside
        dhcpd domain errg.net interface inside
        dhcpd enable inside
        !
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect netbios
          inspect rsh
          inspect rtsp
          inspect skinny
          inspect esmtp
          inspect sqlnet
          inspect sunrpc
          inspect tftp
          inspect sip
          inspect xdmcp
        !
        end

    Read the article

  • Can not open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote MySQL connections) on my Ubuntu 12.04 server machine, but for the life of me can't get the damned thing to work! Here is what I did:

    1) List the current firewall rules:

        $> sudo iptables -nL -v

    output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target       prot opt in  out  source       destination
          225 16984 fail2ban-ssh tcp  --  *   *    0.0.0.0/0    0.0.0.0/0    multiport dports 22
          220 69605 ACCEPT       all  --  lo  *    0.0.0.0/0    0.0.0.0/0
            0     0 REJECT       all  --  lo  *    0.0.0.0/0    127.0.0.0/8  reject-with icmp-port-unreachable
          486 54824 ACCEPT       all  --  *   *    0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT       tcp  --  *   *    0.0.0.0/0    0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT       tcp  --  *   *    0.0.0.0/0    0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT       tcp  --  *   *    0.0.0.0/0    0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT       icmp --  *   *    0.0.0.0/0    0.0.0.0/0    icmptype 8
            4   208 LOG          all  --  *   *    0.0.0.0/0    0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            4   208 REJECT       all  --  *   *    0.0.0.0/0    0.0.0.0/0    reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source      destination
            0     0 REJECT all  --  *  *   0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source      destination
          735  182K ACCEPT all  --  *  *   0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target prot opt in out source      destination
          225 16984 RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0

    2) Try to connect from a remote machine:

        $> mysql -u root -p -h x.x.x.x

    output: timeout... failed to connect.

    3) Try to add a new rule to iptables:

        iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

    4) Make sure the new rule is added:

        $> sudo iptables -nL -v

    output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target       prot opt in   out  source       destination
          359 25972 fail2ban-ssh tcp  --  *    *    0.0.0.0/0    0.0.0.0/0    multiport dports 22
          251 78665 ACCEPT       all  --  lo   *    0.0.0.0/0    0.0.0.0/0
            0     0 REJECT       all  --  lo   *    0.0.0.0/0    127.0.0.0/8  reject-with icmp-port-unreachable
          628 64420 ACCEPT       all  --  *    *    0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT       tcp  --  *    *    0.0.0.0/0    0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT       tcp  --  *    *    0.0.0.0/0    0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT       tcp  --  *    *    0.0.0.0/0    0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT       icmp --  *    *    0.0.0.0/0    0.0.0.0/0    icmptype 8
            5   260 LOG          all  --  *    *    0.0.0.0/0    0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            5   260 REJECT       all  --  *    *    0.0.0.0/0    0.0.0.0/0    reject-with icmp-port-unreachable
            0     0 ACCEPT       tcp  --  eth0 *    0.0.0.0/0    0.0.0.0/0    tcp dpt:3306

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source      destination
            0     0 REJECT all  --  *  *   0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source      destination
          919  213K ACCEPT all  --  *  *   0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target prot opt in out source      destination
          359 25972 RETURN all  --  *  *   0.0.0.0/0   0.0.0.0/0

    which appears to be the case (last line in the "Chain INPUT" section).

    5) Try to connect again from the remote machine: timeout, failed to connect again.

    6) Try to flush all rules:

        $> sudo iptables -F

    7) This time I CAN CONNECT.

    8) Reboot the server and try to connect: FAILURE.

    I suspect that since the new rule is appended at the end, it has no effect, because there is a "reject all" sort of rule before it. If this is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
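    That diagnosis matches how iptables works: rules are evaluated top-down, and -A appends below the catch-all LOG/REJECT pair, so the 3306 rule is never reached. A hedged sketch of the usual fix (the position number is illustrative; count your own INPUT chain first):

        # Insert above the catch-all REJECT instead of appending below it;
        # "-I INPUT 5" places the rule at position 5 of the INPUT chain
        iptables -I INPUT 5 -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

        # Verify placement with rule numbers
        iptables -nL INPUT --line-numbers

        # Runtime rules vanish on reboot (which explains step 8); persist
        # them, e.g. with iptables-save or the iptables-persistent package
        iptables-save > /etc/iptables.rules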

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database and getting it into an Excel spreadsheet for analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and then import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and specify which worksheet to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then open the file in Excel with the Text Import Wizard, which attempts to correctly split the data into columns. These methods are fine, but they involve some degree of technical knowledge to make the magic happen, and they involve repeating several steps each time data needs to be imported from a MySQL table into an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can.

    MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010, and MySQL for Excel installed.

    1. Opening MySQL for Excel
    Being an Excel Add-In, MySQL for Excel is opened from within Excel, so to use it open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL Connection (may be optional)
    If you have MySQL Workbench installed, you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running (if different than localhost), the Port to connect to, and the Username for the login. If you wish to test whether your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server
    To open a pre-configured connection to a MySQL Server you just need to double-click it, and the Connection Password dialog is displayed where you enter the password for the login.

    4. Selecting a MySQL Schema
    After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, either double-click the desired Schema or select it and click Next >.

    5. Importing data…
    All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the Database Objects (grouped by type: Tables, Views and Procedures, in that order) against which you want to perform actions - in the case of this guide, the action of importing data from them.

    a. From a MySQL Table
    To import from a Table, select it from the list in the Database Objects' Tables group; after selecting it you will note that the actions below the list become available. Then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here, like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count, and a 10-row preview of the data, meant for the user to see which columns the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place, so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed, you will have the data in the Excel worksheet, formatted automatically.

    If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows, you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 - 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning is displayed in the dialog, meaning the imported number of rows will be limited by that maximum (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported, since the first row is used for the column names if the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View
    Similar to importing from a Table, to import from a View you select it from the list in the Database Objects' Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is actually as if we were extracting data from a Table, so the Import Data dialog is identical for those two Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. The Compatibility Mode warning is also displayed if the data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure
    To import from a Procedure, select it from the list in the Database Objects' Procedures group (note you can see Procedures here but not Functions, since the latter return a single value, so by design they are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, they first require values for their Parameters, and Procedures can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values, click Call.

    After calling the Procedure, the Result Sets it returns are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. Each Result Set is displayed as a tab so you can preview the returned data. You can specify whether you want to import the Selected Result Set (the default), All Result Sets - Arranged Horizontally, or All Result Sets - Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that in this example all Result Sets were imported and arranged vertically.

    As you can see, with MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message:
    MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/
    Forum - http://forums.mysql.com/list.php?172
    Facebook - http://www.facebook.com/mysql
    Cheers!

    Read the article

  • SQL Server Split() Function

    - by HighAltitudeCoder
    Ever wanted a dbo.Split() function, but not had the time to debug it completely? Let me guess - you are probably working on a stored procedure with 50 or more parameters: two or three of them are parameters of differing types, while the other 47 or so are all of the same type (id1, id2, id3, id4, id5...). Worse, you've found several other similar stored procedures with the ONLY DIFFERENCE being the number of like parameters taped to the end of the parameter list.

    If this is the situation you find yourself in now, you may be wondering, "Why am I working with three different copies of what is basically the same stored procedure, and why am I having to maintain changes in three different places? Can't I have one stored procedure that accomplishes the job of all three?"

    My answer to you: YES! Here is the Split() function I've created.

        /******************************************************************************
                                          Split.sql
        ******************************************************************************/
        /******************************************************************************
          Split a delimited string into sub-components and return them as a table.

          Parameter 1: Input string which is to be split into parts.
          Parameter 2: Delimiter which determines the split points in input string.
          Works with space or spaces as delimiter. Split() is apostrophe-safe.

          SYNTAX:
            SELECT * FROM Split('Dvorak,Debussy,Chopin,Holst', ',')
            SELECT * FROM Split('Denver|Seattle|San Diego|New York', '|')
            SELECT * FROM Split('Denver is the super-awesomest city of them all.', ' ')
        ******************************************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
               FROM sysobjects
               WHERE xtype = 'TF'
               AND name = 'Split'
              )
        BEGIN
              DROP FUNCTION Split
        END
        GO

        CREATE FUNCTION Split (
              @InputString                  VARCHAR(8000),
              @Delimiter                    VARCHAR(50)
        )
        RETURNS @Items TABLE (
              Item                          VARCHAR(8000)
        )
        AS
        BEGIN
              IF @Delimiter = ' '
              BEGIN
                    SET @Delimiter = ','
                    SET @InputString = REPLACE(@InputString, ' ', @Delimiter)
              END

              IF (@Delimiter IS NULL OR @Delimiter = '')
                    SET @Delimiter = ','

              --INSERT INTO @Items VALUES (@Delimiter)   -- Diagnostic
              --INSERT INTO @Items VALUES (@InputString) -- Diagnostic

              DECLARE @Item                 VARCHAR(8000)
              DECLARE @ItemList             VARCHAR(8000)
              DECLARE @DelimIndex           INT

              SET @ItemList = @InputString
              SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)

              WHILE (@DelimIndex != 0)
              BEGIN
                    SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
                    INSERT INTO @Items VALUES (@Item)

                    -- Set @ItemList = @ItemList minus one less item
                    SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex)
                    SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
              END -- End WHILE

              IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString
              BEGIN
                    SET @Item = @ItemList
                    INSERT INTO @Items VALUES (@Item)
              END

              -- No delimiters were encountered in @InputString, so just return @InputString
              ELSE INSERT INTO @Items VALUES (@InputString)

              RETURN

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON Split TO UserRole1
        --GRANT SELECT ON Split TO UserRole2
        --GO

    The syntax is basically as follows:

        SELECT <fields>
        FROM Table1
        JOIN Table2 ON ...
        JOIN Table3 ON ...
        WHERE LOGICAL CONDITION A
          AND LOGICAL CONDITION B
          AND LOGICAL CONDITION C
          AND Table2.Id IN (SELECT * FROM Split(@IdList, ','))

    @IdList is a parameter passed into the stored procedure, and the comma (',') is the delimiter you have chosen to split the parameter list on. You can also use it in an aggregate comparison, for example to keep only groups that match every passed id:

        SELECT <fields>
        FROM Table1
        JOIN Table2 ON ...
        JOIN Table3 ON ...
        WHERE LOGICAL CONDITION A
          AND LOGICAL CONDITION B
          AND LOGICAL CONDITION C
        GROUP BY <fields>
        HAVING COUNT(*) = (SELECT COUNT(*) FROM Split(@IdList, ','))

    Similarly, it can feed other aggregate functions at run time:

        SELECT (SELECT MIN(Item) FROM Split(@IdList, ',')) AS MinId, <fields>
        FROM Table1
        JOIN Table2 ON ...
        JOIN Table3 ON ...
        WHERE LOGICAL CONDITION A
          AND LOGICAL CONDITION B
          AND LOGICAL CONDITION C
        GROUP BY <fields>

    Now that I've (hopefully effectively) explained the benefits of using this function and implementing it in one or more of your database objects, let me warn you of a caveat that you are likely to encounter. You may have a team member who waits until the right moment to ask you a pointed question: "Doesn't this function just do the same thing as using the IN function? Why didn't you just use that instead? In other words, why bother with this function?"

    What's happening is that one or more team members has failed to understand the reason for implementing this kind of function in the first place. (Note: this is THE MOST IMPORTANT ASPECT OF THIS POST.) Allow me to outline a few pros to implementing this function, so you may effectively parry this question. Touche.

    1) Code consolidation. You don't have to maintain what is basically the same code and logic, but with varying numbers of the same parameter, in several SQL objects. I'm not going to go into the cons related to using this function, because the aforementioned team member is probably more than adept at pointing these out. Remember, the real positive contribution is that you are decreasing the likelihood that your team fails to update all (x) duplicate copies of what is basically the same stored procedure, and so on... This is the classic downside to duplicate code. It is a virus, and you should kill it. You might be better off rejecting your team member's question and responding with your own: "Would you rather maintain the same logic in multiple different stored procedures, and hope that the team doesn't forget to always update all of them at the same time?" In his head, he might be thinking "yes, I would like to maintain several different copies of the same stored procedure", although you probably will not get such a direct response.

    2) Added flexibility - you can use the Split function elsewhere, and for splitting your data in different ways. Plus, you can use any kind of delimiter you wish. How can you know today the ways in which you might want to examine your data tomorrow? Segue to my next point.

    3) Because the function takes a delimiter parameter, you can split the data in any number of ways. This greatly increases the utility of such a function and enables your team to work with the data in a variety of different ways in the future. You can split on a single char, symbol, word, or group of words. You can split on spaces. (The list goes on... test it out.)

    Finally, you can dynamically define the behavior of a stored procedure (or other SQL object) at run time through the use of this function. Rather than have several objects that accomplish almost the same thing, why not have only one instead?

    Read the article

  • How to Disable Access to the Registry in Windows 7

    - by Mysticgeek
    If you don't know what you're doing in the Registry, you can mess up your computer pretty badly. Today we show you how to prevent users from accessing the Registry and making any changes to it.

    Using Local Group Policy Editor

    Note: This method uses the Group Policy Editor, which is not available in Home versions of Windows.

    First type gpedit.msc into the Search box in the Start menu. When Group Policy Editor opens, navigate to User Configuration \ Administrative Templates, then select System. Under Setting in the right panel, double-click on Prevent access to registry editing tools. Select the radio button next to Enabled, click OK, then close out of Group Policy Editor. Now if a user tries to access the Registry, they will get a message advising that they cannot access it.

    Using Registry Enabler & Disabler 3

    If you're using a Home or Starter version of Windows 7, you can use a neat utility called Registry Enabler & Disabler (link below). This app works on XP and Vista as well. There is no installation involved, so you can run it from a flash drive, disable the registry, then take the flash drive with you while the user is on the machine. Again, if the user tries to access the Registry, they will get an error.

    Using one of these options will stop users from gaining access to the Registry or running any registry hacks. Of course, if you have a shared computer, you may want to set up other users with a Standard Account, as they won't be able to make changes to the Registry anyway.

    Download Registry Enabler & Disabler 3
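    Under the hood, the "Prevent access to registry editing tools" policy writes the well-known DisableRegistryTools value. A hedged PowerShell sketch of the equivalent per-user change (run it in the target user's context; note the policy blocks regedit.exe, not PowerShell itself):

        # Per-user policy key (created if absent)
        $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\System'
        if (-not (Test-Path $key)) {
            New-Item -Path $key -Force | Out-Null
        }

        # 1 = prevent access to registry editing tools
        # (2 additionally blocks silent "regedit /s" runs)
        New-ItemProperty -Path $key -Name 'DisableRegistryTools' `
            -PropertyType DWord -Value 1 -Force | Out-Null

        # To undo:
        # Remove-ItemProperty -Path $key -Name 'DisableRegistryTools'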

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security.

    In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience.

    On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead - and that activity would not be tracked in any way.

    So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome.

    I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And which policy applies if you paste some protected content into an already protected document?

    There are no prizes for guessing that Oracle IRM takes a rather different approach.

    The Oracle IRM Approach

    Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs.

    Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...? If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file.

    Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications, regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than the whole desktop environment.

    Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - i.e. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected.

    We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties.

    The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules.

    Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles, so different users are subject to different clipboard controls as required in different classifications.

    So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.

    Read the article

  • WNA Configuration in OAM 11g

    - by P Patra
    Pre-Requisite: A Kerberos authentication scheme has to exist. This is usually a pre-configured OAM authentication scheme. It should have Authentication Level "2", Challenge Method "WNA", Challenge Direct URL "/oam/server" and Authentication Module "Kerberos". The default authentication scheme name is "KerberosScheme"; this name can be changed. The DNS name has to be resolvable on the OAM server, and the DNS names with referrals to AD have to be resolvable on the OAM server as well. Ensure nslookup works for the referrals.

    Pre-Install: The AD team produces the keytab file on the AD server by running the ktpass command. Provide the OAM hostname to the AD team, and receive from them the following: the keytab file produced when running the ktpass command, the ktpass username, and the ktpass password. Copy the keytab file to a convenient location in the OAM install tree and rename the file if desired - for instance, where the oam-policy.xml file resides:

        /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt

    Configure WNA Authentication on the OAM Server:

    Create the config file krb.conf and set the environment variable to the path of this file:

        KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf

    The variable KRB_CONFIG has to be set in the profile of the user that the OAM Java container (i.e. WebLogic Server) runs as - i.e. the "applmgr" user - so that this setting is available to the OAM server.

    In the krb.conf file specify:

        [libdefaults]
         default_realm = NOA.ABC.COM
         dns_lookup_realm = true
         dns_lookup_kdc = true
         ticket_lifetime = 24h
         forwardable = yes

        [realms]
         NOA.ABC.COM = {
          kdc = hub21.noa.abc.com:88
          admin_server = hub21.noa.abc.com:749
          default_domain = NOA.ABC.COM
         }

        [domain_realm]
         .abc.com = ABC.COM
         abc.com = ABC.COM
         .noa.abc.com = NOA.ABC.COM
         noa.abc.com = NOA.ABC.COM

    where hub21.noa.abc.com is the load-balanced DNS VIP name for the AD server and NOA.ABC.COM is the name of the domain.

    Create an authentication policy to WNA-protect the resource (i.e. EBSR12) and choose "KerberosScheme" as the authentication scheme. Log in to the OAM Console => Policy Configuration tab => Browse tab => Shared Components => Application Domains => IAM Suite => Authentication Policies => Create:

        Name: ABC WNA Auth Policy
        Authentication Scheme: KerberosScheme
        Failure URL: http://hcm.noa.abc.com/cgi-bin/welcome

    Edit the system configuration for Kerberos: System Configuration tab => Access Manager Settings => expand Authentication Modules => expand Kerberos Authentication Module => double-click on Kerberos, then:

        Edit the "Key Tab File" textbox - put in /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt
        Edit the "Principal" textbox - put in HTTP/<oam_host_fqdn>@NOA.ABC.COM (use the OAM server's fully qualified hostname)
        Edit the "KRB Config File" textbox - put in /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf
        Click "Apply"

    In the script setting the environment for the WLS server where OAM is deployed, set the variable:

        KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf

    Restart the OAM server and the OAM server container (WebLogic Server).
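    For the keytab-generation step above, a hedged sketch of the ktpass command the AD team typically runs on the AD server (the account name, password and output path are illustrative placeholders, not values from the original post):

        ktpass -princ HTTP/oam-host.noa.abc.com@NOA.ABC.COM ^
               -mapuser NOA\svc-oam-wna ^
               -pass <service-account-password> ^
               -ptype KRB5_NT_PRINCIPAL ^
               -crypto All ^
               -out C:\keytab.kt

    The resulting keytab.kt is what gets copied into the fmwconfig directory, and the -princ value must match the "Principal" textbox entry exactly.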

    Read the article

  • Speaking at SQL Saturday 61 in Washington DC

    - by AllenMWhite
    The organizers of SQL Saturday #61 in DC (actually Reston, VA) created an Advanced DBA/Dev track for their event, which I think is cool. Both of the presentations I'll be doing there on Saturday are in that track. (In fact, they're the first two sessions of the day.) The first, Automate Policy-Based Management using PowerShell will walk through the basics of Policy-Based Management, and then show you how to build PowerShell scripts to create and evaluate your policies. The second, Gather SQL Server...(read more)
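    As a taste of the first session's subject, here is a hedged sketch of evaluating a Policy-Based Management policy from PowerShell using the Invoke-PolicyEvaluation cmdlet (the policy file path and server name are placeholders):

        # Load the SQL Server PowerShell module (the sqlps mini-shell that
        # ships with SQL Server 2008/2008 R2 loads these cmdlets automatically)
        Import-Module SQLPS -DisableNameChecking

        # Evaluate a saved policy against an instance; Check mode reports
        # violations without reconfiguring anything
        Invoke-PolicyEvaluation -Policy 'C:\Policies\DatabaseAutoShrinkFalse.xml' `
            -TargetServerName 'MYSERVER\INST1' `
            -AdHocPolicyEvaluationMode Check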

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA (Google Map)

    We will be providing snacks and beverages.

    Register! - Registration is required for building security.

    Presentation Line-Up:

    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle
    Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.

    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5
    F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world's largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.

    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle
    Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant, and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication, and other aspects related to implementing a robust partitioned architecture.

    Additional Info:
    - Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals and content suggestions.

    Read the article

  • Process Rules!

    - by Ajay Khanna
    One of the key components of a process is the business rule. Business rules take many forms inside your process definition and are, in a way, a manifestation of your company's business policy. Business rules inside the process are used for policy enforcement, governance, decision management, operational efficiency, etc. Following are some basic types of rules that can be part of your process (a small illustrative sketch follows this list).

    1. Process conditions: These are defined as the process gateways that determine the path a process will take, depending on the process parameters. For example: if discount > 10%, go to the approval path; if discount < 10%, auto-approve the order.

    2. Data rules: These business rules are defined as facts in a decision table or knowledge base. The process captures all required parameters and submits them to a RETE-based rules engine. The rules engine processes the data and returns the result. For example, rules determining your insurance eligibility.

    3. Event rules: Here the system monitors the various events and event patterns that emerge inside the process or external to it. You can define actions or alerts to be triggered when a certain pattern of events emerges over a specified time period. Such rules need Complex Event Processing and are used in applications like credit card fraud detection or utility demand response.

    4. User interface rules: To add dynamic behavior to the UI, keep users from making mistakes, and enforce policy, another available mechanism is UI rules. They are evaluated as the end user fills out the web forms, and may include enabling and disabling parts of the UI per business policy. An example: if the age of a user is less than 13 years, disable the credit card field and enable the "parental approval required" checkbox.

    Your process may include many such rule types. Oracle OpenWorld provides a unique opportunity to listen to Oracle Business Process Management experts and customers. We will discuss business rules during various sessions at Oracle OpenWorld. Two of the sessions specifically focused on business rules are listed below:

    Accelerating an Implementation of Complex Worldwide Business Approval Rules - Wednesday, Oct 3, 10:15 AM, Moscone South - 305
    Oracle Business Rules Use Cases Design and Testing - Wednesday, Oct 3, 3:30 PM, Marriott Marquis - Golden Gate C3

    The Oracle Business Process Management track covers a variety of topics, with speakers covering technology, methodology and best practices. You can see the list of Business Process Management sessions here. Come back to this blog for more coverage from Oracle OpenWorld!
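    To make rule type 1 concrete, here is a minimal sketch of a process condition expressed as plain code. It is my own illustration of the discount-routing example above, not Oracle Business Rules or BPM API code, and the class and path names are assumptions:

        // Minimal sketch: the discount-routing process condition from the post.
        public class DiscountRoutingRule {
            // Gateway logic: discounts above 10% require human approval.
            public static String route(double discountPercent) {
                return discountPercent > 10.0 ? "APPROVAL_PATH" : "AUTO_APPROVE";
            }

            public static void main(String[] args) {
                System.out.println(route(12.5)); // APPROVAL_PATH
                System.out.println(route(8.0));  // AUTO_APPROVE
            }
        }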

    Read the article

  • How to set MANPATH without overriding defaults?

    - by balki
    I have added extra directories to $PATH by exporting PATH=/my/dirs:$PATH, but I am not sure if I should do the same for MANPATH, because the default MANPATH is empty yet the man command works. I found a command called manpath, and its manual says "If $MANPATH is set, manpath will simply display its contents and issue a warning." Does this mean setting MANPATH is not the right way to add directories for the man command to search for manual pages?
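    For what it's worth, a minimal sketch of the convention many systems support (this assumes the man-db implementation common on Linux; check man(1) on your system): an empty element in $MANPATH, here created by the trailing colon, is replaced by the computed default search path, so the defaults are preserved:

        # Sketch, assuming man-db: the trailing colon keeps the default path.
        export MANPATH=/my/man/dirs:
        manpath   # should print /my/man/dirs followed by the defaults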

    Read the article

  • Strange Flash AS3 xml Socket behavior

    - by Rnd_d
    I have a problem which I can't understand. To understand it, I wrote a socket client in AS3 and a server in python/twisted; you can see the code of both applications below. Let's launch two clients at the same time, arrange them so that you can see both windows, and press the connection button in both windows. Then press and hold any key.

    What I'm expecting: the client with the pressed key sends a message "some data" to the server, then the server sends this message to all the clients (including the original sender). Then each client moves the 'connectButton' button to the right and prints a message to the log with the time in the format "min:secs:milliseconds".

    What is going wrong: the motion is smooth in the client that sends the message, but in all other clients the motion is jerky. This happens because messages to those clients arrive later than to the original sender. And if we have three clients (let's name them A, B, C) and we send a message from A, the time logged by B and C will be the same. Why do the other clients receive these messages later than the original sender? By the way, on Ubuntu 10.04/Chrome all the motion is smooth, with the two clients launched in separate Chrome windows. Windows screenshot. (Can't post the Linux screenshot; I need more than 10 reputation to post more hyperlinks.)

    Listing of the log, four clients simultaneously:

    [16:29:33.280858] 62.140.224.1 >> some data
    [16:29:33.280912] 87.249.9.98 << some data
    [16:29:33.280970] 87.249.9.98 << some data
    [16:29:33.281025] 87.249.9.98 << some data
    [16:29:33.281079] 62.140.224.1 << some data
    [16:29:33.323267] 62.140.224.1 >> some data
    [16:29:33.323326] 87.249.9.98 << some data
    [16:29:33.323386] 87.249.9.98 << some data
    [16:29:33.323440] 87.249.9.98 << some data
    [16:29:33.323493] 62.140.224.1 << some data
    [16:29:34.123435] 62.140.224.1 >> some data
    [16:29:34.123525] 87.249.9.98 << some data
    [16:29:34.123593] 87.249.9.98 << some data
    [16:29:34.123648] 87.249.9.98 << some data
    [16:29:34.123702] 62.140.224.1 << some data

    AS3 client code:

    package {
        import adobe.utils.CustomActions;
        import flash.display.Sprite;
        import flash.events.DataEvent;
        import flash.events.Event;
        import flash.events.IOErrorEvent;
        import flash.events.KeyboardEvent;
        import flash.events.MouseEvent;
        import flash.events.SecurityErrorEvent;
        import flash.net.XMLSocket;
        import flash.system.Security;
        import flash.text.TextField;

        public class Main extends Sprite {
            private var socket:XMLSocket;
            private var textField:TextField = new TextField;
            private var connectButton:TextField = new TextField;

            public function Main():void {
                if (stage) init();
                else addEventListener(Event.ADDED_TO_STAGE, init);
            }

            private function init(event:Event = null):void {
                socket = new XMLSocket();
                socket.addEventListener(Event.CONNECT, connectHandler);
                socket.addEventListener(DataEvent.DATA, dataHandler);
                stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
                addChild(textField);
                textField.y = 50;
                textField.width = 780;
                textField.height = 500;
                textField.border = true;
                connectButton.selectable = false;
                connectButton.border = true;
                connectButton.addEventListener(MouseEvent.MOUSE_DOWN, connectMouseDownHandler);
                connectButton.width = 105;
                connectButton.height = 20;
                connectButton.text = "click here to connect";
                addChild(connectButton);
            }

            private function connectHandler(event:Event):void {
                textField.appendText("Connect\n");
                textField.appendText("Press and hold any key\n");
            }

            private function dataHandler(event:DataEvent):void {
                var now:Date = new Date();
                textField.appendText(event.data + " time = " + now.getMinutes() + ":" + now.getSeconds() + ":" + now.getMilliseconds() + "\n");
                connectButton.x += 2;
            }

            private function keyDownHandler(event:KeyboardEvent):void {
                socket.send("some data");
            }

            private function connectMouseDownHandler(event:MouseEvent):void {
                var connectAddress:String = "ep1c.org";
                var connectPort:Number = 13250;
                Security.loadPolicyFile("xmlsocket://" + connectAddress + ":" + String(connectPort));
                socket.connect(connectAddress, connectPort);
            }
        }
    }

    Python server code:

    from twisted.internet import reactor
    from twisted.internet.protocol import ServerFactory
    from twisted.protocols.basic import LineOnlyReceiver
    import datetime

    class EchoProtocol(LineOnlyReceiver):
        #####
        name = ""
        id = 0
        delimiter = chr(0)
        #####

        def getName(self):
            return self.transport.getPeer().host

        def connectionMade(self):
            self.id = self.factory.getNextId()
            print "New connection from %s - id:%s" % (self.getName(), self.id)
            self.factory.clientProtocols[self.id] = self

        def connectionLost(self, reason):
            print "Lost connection from " + self.getName()
            del self.factory.clientProtocols[self.id]
            self.factory.sendMessageToAllClients(self.getName() + " has disconnected.")

        def lineReceived(self, line):
            print "[%s] %s >> %s" % (datetime.datetime.now().time(), self, line)
            if line == "<policy-file-request/>":
                data = """<?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
    <!-- Policy file for xmlsocket://ep1c.org -->
    <cross-domain-policy>
       <allow-access-from domain="*" to-ports="%s" />
    </cross-domain-policy>""" % PORT
                self.send(data)
            else:
                self.factory.sendMessageToAllClients(line)

        def send(self, line):
            print "[%s] %s << %s" % (datetime.datetime.now().time(), self, line)
            if line:
                self.transport.write(str(line) + chr(0))
            else:
                print "Nothing to send"

        def __str__(self):
            return self.getName()

    class ChatProtocolFactory(ServerFactory):
        protocol = EchoProtocol

        def __init__(self):
            self.clientProtocols = {}
            self.nextId = 0

        def getNextId(self):
            id = self.nextId
            self.nextId += 1
            return id

        def sendMessageToAllClients(self, msg):
            for client in self.clientProtocols:
                self.clientProtocols[client].send(msg)

        def sendMessageToClient(self, id, msg):
            self.clientProtocols[id].send(msg)

    PORT = 13250
    print "Starting Server"
    factory = ChatProtocolFactory()
    reactor.listenTCP(PORT, factory)
    reactor.run()
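    One hedged guess at the behavior (my note, not from the original post): small writes delayed differently per connection is the pattern Nagle's algorithm typically produces, and Twisted exposes a switch for it on TCP transports. A minimal sketch of where that could be tried in the server above:

        # Hedged sketch (my suggestion): disable Nagle's algorithm for each
        # accepted connection. setTcpNoDelay() is part of Twisted's
        # ITCPTransport interface; this replaces connectionMade above.
        def connectionMade(self):
            self.transport.setTcpNoDelay(True)  # flush small writes immediately
            self.id = self.factory.getNextId()
            print "New connection from %s - id:%s" % (self.getName(), self.id)
            self.factory.clientProtocols[self.id] = self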

    Read the article

  • I get an "serious errors while checking the disk drives for /boot" error while booting

    - by Vaios Argiropoulos
    I have just set up my new system by creating four partitions on a whole hard disk (/boot, /home, swap and / (root)). I saw this article and now I am getting those errors, and the system gives me some options (I to ignore, S to skip mounting, or M for manual recovery). I am forced to choose skip mounting because I don't know what manual recovery is. Despite the error, I think the system boots OK because I can use it, but the error is very annoying.
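    For context, a hedged note (not from the original question): "manual recovery" normally drops you to a root shell where you can run fsck against the partition that failed its check. A minimal sketch, where the device name is a placeholder you would replace with your actual /boot partition:

        # Sketch: repair the filesystem that fails the boot-time check.
        # /dev/sda1 is an assumption; find yours with blkid or lsblk.
        fsck -y /dev/sda1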

    Read the article

  • Check SQL Server Virtual Log Files Using PowerShell

    In a previous tip, Monitor Your SQL Server Virtual Log Files with Policy Based Management, we saw how we can use Policy-Based Management to monitor the number of virtual log files (VLFs) in our SQL Server databases. Even so, most of the solutions I see online involve creating temporary tables and/or using cursors to get the total number of VLFs in a transaction log file. Is there a much easier solution?
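    As a flavor of the PowerShell route (my own sketch, not the tip's actual script; the server and database names are placeholders): DBCC LOGINFO returns one row per VLF, so counting the rows it returns gives the VLF count without any temporary tables:

        # Minimal sketch, assuming the sqlps module's Invoke-Sqlcmd is available.
        $vlfs = Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDatabase" `
                              -Query "DBCC LOGINFO"
        Write-Output ("MyDatabase has {0} virtual log files" -f @($vlfs).Count)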

    Read the article

  • Identity Management support information (Japanese-language post; most of the original text was lost to character encoding)

    - by ???02
    Only fragments of this post survive the encoding damage. The recoverable content: it discussed Oracle (Sun) Identity Management products and their support timelines, pointing to http://www.oracle.com/jp/sun/index.html, the Oracle Lifetime Support Policy document at http://www.oracle.com/jp/support/lifetime-support/lifetime-support-policy-079302-ja.html (see "Lifetime Support Policy: Oracle Fusion Middleware Products"), and the lifetime support FAQ at http://www.oracle.com/jp/support/faq/faq-lifetime-079289-ja.html. It also noted that Extended Support for Sun Java System Identity Manager 8.1 runs until October 2017.

    Read the article

  • Looking for definitive answer to accessing a network drive/NAS/SMB drive via Windows 7 HOME and Windows 7 Professional. Is it possible and how?

    - by Rob
    I want to be able to access my Lacie 2Big network drive in Windows 7 Explorer. I have a machine with Windows 7 Home and one with Windows 7 Professional. Neither Windows 7 machine, Home or Pro, can access the drive. The Windows 7 Home machine displays the drive in its Explorer, with the capacity, but on clicking the icon I get another window, blank, with a busy pointer that never stops. The drive is working perfectly. How do I know this? Because I can access it with no problems from my Apple Mac, Windows XP Home and Ubuntu machines on the same network as the Windows 7 machines. Except for the Windows XP Home machine, which required Lacie's ethernet agent program, the Mac and the Ubuntu machines needed no setup; the drive appeared like any other drive.

    So my 2 questions:

    1. Is it possible to access a network share drive, e.g. a NAS like the Lacie 2big, in Windows 7 Home Premium and Windows 7 Professional? If so, how?
    2. I read on Microsoft's own forums and elsewhere that network share drives, e.g. via Samba/SMB, are NOT possible on Windows 7 Home. Is this true? http://social.technet.microsoft.com/Forums/en-US/w7itprovirt/thread/e08c3500-a722-4b44-b644-64f94f63c8e5/

    This question is a more comprehensive re-write of my earlier question: Windows 7 / TCP/IP network share guide - looking to resolve the failure to mount a Lacie network drive that works on XP, Linux and Mac. There I haven't received an answer that solves the problem, and I have tried to find a solution myself. Lacie themselves haven't offered a definitive answer either, but I suspect it's not just their drives but SMB/network shares/NAS in general. It is utterly pathetic that Windows 7 Home cannot access something as simple as a network drive, especially given that Windows XP Home can.

    My research so far: apparently it is possible on Windows 7 Professional via the Local Security Policy, which exists only on Windows 7 Professional, not Windows 7 Home:
    http://www.sevenforums.com/tutorials/7357-local-security-policy-editor-open.html
    http://answers.microsoft.com/en-us/windows/forum/windows_7-security/accessing-local-security-policy-in-windows-7-home/0c8300d0-1d23-4de0-9b37-935c01a7d17a
    http://social.technet.microsoft.com/Forums/en-US/w7itprosecurity/thread/14fc5037-3386-4973-b5d8-2167272ff5ad/
    http://www.tomshardware.com/forum/75-63-windows-samba-issue

    Another solution offered is editing the registry, which doesn't look promising to me: fiddly, not guaranteed, and hard to turn into a complete solution given that everyone's registry can vary (a sketch of what is typically suggested follows below). Registry key edit solutions:
    https://www.lacie.com/uk/mystuff/ticket/ticket.htm?tid=101278940
    http://networksecurity.farzadbanifatemi.com/security-policy/how-to-access-local-security-policy-windows-7-home-premium

    Related:
    Does Windows 7 Home Premium support backing up to a network share
    Network Copy to Windows 7 File Share Fails and Kills Network Connection
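    For reference, a hedged sketch of what those registry-edit suggestions typically boil down to (an assumption on my part, not verified against the linked threads, and lowering LAN Manager authentication has security implications): allowing NTLMv1 so a NAS with older firmware can authenticate, which is the same setting the Local Security Policy route changes on Professional.

        Windows Registry Editor Version 5.00

        ; Sketch: 1 = "Send LM & NTLM - use NTLMv2 session security if negotiated"
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
        "LmCompatibilityLevel"=dword:00000001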

    Read the article

  • seaudit report detail

    - by user1014130
    I've just started using SELinux in the last 6 months and am getting to grips with it. However, using sealert on a new CentOS 6 server, I'm not getting the level of detail I got with CentOS 5. To illustrate, I run: sealert -a /var/log/audit/audit.log

    On CentOS 5 I get:

    Summary: SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to /var/log/httpd/error_log (httpd_log_t).

    Detailed Description: SELinux denied access requested by postdrop. It is not expected that this access is required by postdrop and this access may signal an intrusion attempt. It is also possible that the specific version or configuration of the application is causing it to require additional access.

    Allowing Access: Sometimes labeling problems can cause SELinux denials. You could try to restore the default system file context for /var/log/httpd/error_log: restorecon -v '/var/log/httpd/error_log'. If this does not work, there is currently no automatic way to allow this access. Instead, you can generate a local policy module to allow this access - see FAQ (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385). Or you can disable SELinux protection altogether. Disabling SELinux protection is not recommended. Please file a bug report (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this package.

    Additional Information:
    Source Context         root:system_r:postfix_postdrop_t
    Target Context         system_u:object_r:httpd_log_t
    Target Objects         /var/log/httpd/error_log [ file ]
    Source                 postdrop
    Source Path            /usr/sbin/postdrop
    Port
    Host
    Source RPM Packages    postfix-2.3.3-2.1.el5_2
    Target RPM Packages
    Policy RPM             selinux-policy-2.4.6-279.el5_5.1
    Selinux Enabled        True
    Policy Type            targeted
    MLS Enabled            True
    Enforcing Mode         Enforcing
    Plugin Name            catchall_file
    Host Name              server109-228-26-144.live-servers.net
    Platform               Linux server109-228-26-144.live-servers.net 2.6.18-194.8.1.el5 #1 SMP Thu Jul 1 19:04:48 EDT 2010 x86_64 x86_64
    Alert Count            1
    First Seen             Wed Jun 13 11:43:55 2012
    Last Seen              Wed Jun 13 11:43:55 2012

    But on CentOS 6 I get only the same Summary, Detailed Description and Allowing Access text, with no Additional Information section at all.

    I'm running exactly the same command. Does anyone have any idea why I'm not getting the "Additional Information" that I do with CentOS 5? Thanks in advance, Dylan

    Read the article

  • Sending email notifications to users

    - by Web Girl
    What is the preferable way to send email notifications to users? I can do it both ways, but which is better?

    1. Some C# code calls a stored procedure in the database. The stored procedure, based on some logic, pulls all the email data and sends the email using Database Mail.
    2. The C# code calls a stored procedure, gets all the necessary data back, and sends the email itself using an SMTP server, etc.

    I just wonder which way is preferable in terms of performance and so on. The C# code is a library that would be part of the web application. So the question is where it's better to put the load: on the application server or the database server? The system will not be crazy busy; it's not like Amazon or something. But it would still be nice to create something that makes sense.
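    For concreteness, a minimal sketch of option 2's sending step (my illustration; the addresses and server name are placeholders, and System.Net.Mail is the assumed mail API):

        // Minimal sketch: app-side send over SMTP after the stored procedure
        // returns the recipient data. All names are placeholders.
        using System.Net.Mail;

        class NotificationSender
        {
            static void Main()
            {
                var message = new MailMessage(
                    "noreply@example.com",               // from
                    "user@example.com",                  // to (from the proc's result set)
                    "Notification",                      // subject
                    "Your request has been processed."); // body

                using (var client = new SmtpClient("smtp.example.com"))
                {
                    client.Send(message);
                }
            }
        }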

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins: restore the databases and make them work. There were 4 applications that back-ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game.

    We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers - a bit of a cheat, actually, because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat "IF REQUIRED" left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server.

    What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system, and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as is usual for work like this, the usual disclaimers apply: this is not something that I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure, because future updates for SQL Server may render it non-functional.

    I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the master database I named Master_Mine. Since sp_help_revlogin uses the system schema objects sys.syslogins and sys.server_principals, this was not going to work, because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. Even though the test account I created should presumably still be in the Master_Mine database, I should be able to get to it and script out its creation with its password hash, so that I would not need to know the password; any applications that stored that password would not have to be altered in the DR scenario. They would just work as expected. Once I realized that would not work, I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.

    These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead when creating the logins cursor used in sp_help_revlogin.

    For the password hash, sp_help_revlogin uses the function LOGINPROPERTY(), which takes a user name and the option 'passwordhash' to return the hash for the user. Unfortunately, it requires the login to exist in the Master database, so this would not work. The other slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LOGINPROPERTY. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin.

    The final challenge: the base tables sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without needing access to the original server (assuming it was down due to whatever disaster put it in that state).

    You will note that I am not providing full code samples of the modifications here. I will say that it was a slight bit of work, and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and to provide another option, if required, to ease the burden of getting SQL Servers up and available in an emergency where the alternatives may be more challenging or otherwise unavailable.
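    For readers who want to experiment, a minimal sketch of the two pivotal steps under this article's own caveats (unsupported, version-dependent, and my illustration rather than the author's withheld code; all names and paths are placeholders):

        -- 1. Restore the source server's Master backup as an ordinary user database.
        RESTORE DATABASE Master_Mine
            FROM DISK = N'D:\Backups\SourceServer_master.bak'
            WITH MOVE 'master'  TO N'D:\Data\Master_Mine.mdf',
                 MOVE 'mastlog' TO N'D:\Data\Master_Mine.ldf';

        -- 2. From a DAC connection (ADMIN:ServerName), read the hidden base table
        --    that holds the login names and password hashes.
        SELECT name, pwdhash
        FROM   Master_Mine.sys.sysxlgns;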

    Read the article

  • Is Query Performance different for different versions of SQL Server?

    - by Ronak Mathia
    I have three UPDATE queries in my stored procedure, one for each of three different tables. Each table contains almost 200,000 records, and all records have to be updated. I am using indexing to speed up performance. It works quite well with SQL Server 2008: the stored procedure takes only 12 to 15 minutes to execute (updating almost 1,000 rows per second across all three tables). But when I run the same scenario with SQL Server 2008 R2, the stored procedure takes much longer: about 55 to 60 minutes (updating almost 100 rows per second across all three tables). I couldn't find any reason or solution for this. I have also tested the same scenario with SQL Server 2012, but the result is the same as above. Please give suggestions.
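    One hedged first step (my suggestion, not part of the question): after moving the database to the newer version, refresh statistics and compare the actual execution plans of the UPDATE statements on both servers, since a changed plan is a common cause of this kind of regression. The database name is a placeholder:

        -- Sketch: rule out stale statistics after the version change.
        USE MyDatabase;
        EXEC sp_updatestats;

        -- Then re-run the slow UPDATE with the actual plan captured in SSMS
        -- (Query > Include Actual Execution Plan) on both servers and compare.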

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >