Search Results

Search found 3994 results on 160 pages for 'dual wan'.

Page 149 of 160

  • hostapd running on Ubuntu Server 13.04 only allows a single station to connect when using WPA

    - by user450688
    Problem: Only a single station can connect to hostapd at a time. Any single station can connect (Windows 8, OS X, iOS, Nexus), but when two or more hosts are connected at the same time, the first client loses its connectivity. There are no connectivity issues when WPA is not used.

    Setup: Linux (Ubuntu Server 13.04) wireless router, with separate networks for wired WAN, wired LAN, and wireless LAN.

    iptables-save output:

        *nat
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE
        -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE
        COMMIT
        *mangle
        :PREROUTING ACCEPT [13:916]
        :INPUT ACCEPT [9:708]
        :FORWARD ACCEPT [4:208]
        :OUTPUT ACCEPT [9:3492]
        :POSTROUTING ACCEPT [13:3700]
        COMMIT
        *filter
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [9:3492]
        -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
        -A INPUT -i eth0 -j ACCEPT
        -A INPUT -i wlan0 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -i eth0 -j ACCEPT
        -A FORWARD -i wlan0 -j ACCEPT
        -A FORWARD -i lo -j ACCEPT
        COMMIT

    /etc/hostapd/hostapd.conf:

        #Wireless Interface
        interface=wlan0
        driver=nl80211
        ssid=<removed>
        hw_mode=g
        channel=6
        max_num_sta=15
        auth_algs=3
        ieee80211n=1
        wmm_enabled=1
        wme_enabled=1
        #Configure Hardware Capabilities of Interface
        ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12]
        #Accept all MAC addresses
        macaddr_acl=0
        #Shared Key Authentication
        wpa=1
        wpa_passphrase=<removed>
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=CCMP
        rsn_pairwise=CCMP
        ###iPad Connectivity Repair
        ieee8021x=0
        eap_server=0

    Wireless card (lshw output):

        product: RT2790 Wireless 802.11n 1T/2R PCIe
        vendor: Ralink corp.
        physical id: 0
        bus info: pci@0000:03:00.0
        logical name: mon.wlan0
        version: 00
        serial: <removed>
        width: 32 bits
        clock: 33MHz
        capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical
        configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn

    iw list output:

        Band 1: Capabilities: 0x272 HT20/HT40 Static SM Power Save RX Greenfield RX HT20 SGI RX HT40 SGI RX STBC 2-streams Max AMSDU length: 3839 bytes No DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 2 usec (0x04) HT RX MCS rate indexes supported: 0-15, 32 TX unequal modulation not supported HT TX Max spatial streams: 1 HT TX MCS rate indexes supported may differ
        Frequencies: * 2412 MHz [1] (27.0 dBm) * 2417 MHz [2] (27.0 dBm) * 2422 MHz [3] (27.0 dBm) * 2427 MHz [4] (27.0 dBm) * 2432 MHz [5] (27.0 dBm) * 2437 MHz [6] (27.0 dBm) * 2442 MHz [7] (27.0 dBm) * 2447 MHz [8] (27.0 dBm) * 2452 MHz [9] (27.0 dBm) * 2457 MHz [10] (27.0 dBm) * 2462 MHz [11] (27.0 dBm) * 2467 MHz [12] (disabled) * 2472 MHz [13] (disabled) * 2484 MHz [14] (disabled)
        Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps
        max # scan SSIDs: 4
        max scan IEs length: 2257 bytes
        Coverage class: 0 (up to 0m)
        Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4)
        Available Antennas: TX 0 RX 0
        Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point
        software interface modes (can always be added): * AP/VLAN * monitor
        valid interface combinations: * #{ AP } <= 8, total <= 8, #channels <= 1
        Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * set_tx_bitrate_mask * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * Unknown command (84) * Unknown command (87) * Unknown command (85) * Unknown command (89) * Unknown command (92) * testmode * connect * disconnect
        Supported TX frame types:
          * IBSS: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * managed: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * AP: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * AP/VLAN: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * mesh point: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * P2P-client: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * P2P-GO: 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
          * Unknown mode (10): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
        Supported RX frame types:
          * IBSS: 0x40 0xb0 0xc0 0xd0
          * managed: 0x40 0xd0
          * AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
          * AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
          * mesh point: 0xb0 0xc0 0xd0
          * P2P-client: 0x40 0xd0
          * P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0
          * Unknown mode (10): 0x40 0xd0
        Device supports RSN-IBSS.
        HT Capability overrides: * MCS: ff ff ff ff ff ff ff ff ff ff * maximum A-MSDU length * supported channel width * short GI for 40 MHz * max A-MPDU length exponent * min MPDU start spacing
        Device supports TX status socket option.
        Device supports HT-IBSS.
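
    One thing that stands out in the configuration: wpa=1 advertises original WPA, whose group (multicast) cipher defaults to TKIP even though the pairwise cipher is CCMP, and the group-key handout to a second station is exactly where flaky drivers tend to break. A minimal WPA2-only configuration to test with might look like the sketch below (an assumption to verify, not a known fix; keep your own interface/ssid/passphrase):

        interface=wlan0
        driver=nl80211
        ssid=<removed>
        hw_mode=g
        channel=6
        auth_algs=1
        wpa=2
        wpa_passphrase=<removed>
        wpa_key_mgmt=WPA-PSK
        rsn_pairwise=CCMP

    If two stations hold their connections with this stripped-down config, reintroduce the 802.11n options (ieee80211n=1, then the ht_capab flags) one at a time to find the offending capability; the [GF] greenfield and [HT40+] flags are frequent suspects on rt2800pci.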


  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA 5505 running firmware 8.2(5), which hosts an internal LAN on 192.168.30.0/24. I used the VPN Wizard to set up L2TP access, and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, while connected to the VPN I can no longer ping out to the internet or browse web pages. I would like to be able to access the VPN and browse the internet at the same time. I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect); if so, how do I do this? Alternatively, if split tunneling is a pain to set up, then giving the connected VPN client internet access from the ASA WAN IP would be OK. Thanks, Chris

        names
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.30.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 208.74.158.58 255.255.255.252
        !
        ftp mode passive
        access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
        access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
        access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
        access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 192.168.30.0 255.255.255.0
        route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        http server enable
        http 192.168.30.0 255.255.255.0 inside
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        webvpn
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.30.3
         vpn-tunnel-protocol l2tp-ipsec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
        username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
        tunnel-group DefaultRAGroup general-attributes
         address-pool LANVPNPOOL
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *****
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum client auto
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect ip-options
        !
        service-policy global_policy global
        prompt hostname context
        : end
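
    If split tunneling keeps misbehaving, the fallback described above -- giving VPN clients internet access through the ASA's WAN IP -- is called hairpinning, and on 8.2 it needs two things: permission for traffic to leave through the interface it arrived on, and a NAT rule for the VPN pool on the outside interface. A sketch, assuming the pool stays at 192.168.30.220-249 (within 192.168.30.192/26):

        same-security-traffic permit intra-interface
        nat (outside) 1 192.168.30.192 255.255.255.192

    Also worth checking on the client side: the built-in Windows L2TP/IPsec client does not honor the ASA's split-tunnel list. With "Use default gateway on remote network" ticked (the default, under the connection's TCP/IP advanced settings), Windows sends all traffic down the tunnel regardless of what the group policy says.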


  • IPv6: Should I have private addresses?

    - by AlReece45
    Right now, we have a rack of servers. Every server currently has at least 2 IP addresses: one for the public interface, another for the private. The servers that host SSL websites have more IP addresses. We also have virtual servers, configured similarly.

    Private Network: The private range is currently just used for backups and monitoring. It's a gigabit port, and interface usage does not usually get very high. There are other technologies we're considering that would use this port: iSCSI (implementations usually recommend dedicating an interface to it, which would be yet another IP network), VPN to get access to the private range (something I'd rather avoid), dedicated database servers, LDAP, centralized configuration (like Puppet), and centralized logging. We don't have any private addresses in our DNS records (only public addresses). For our servers to use the correct IP address for the right interface (and not hard-code the IP address), we'd probably have to set up a private DNS server (so now we'd add 2 different DNS entries to 2 different systems).

    Public Network: Our public range hosts a variety of services including web, email, and FTP. There is a hardware firewall between our network and the "public" network. We have a (relatively secure) method to instruct the firewall to open and close administrative access (web interfaces, SSH, etc.) for our current IP address. With either solution discussed, the host-based firewalls will be configured as well. The public network currently runs on a dedicated 20 Mbps link. There are a couple of legacy servers with Fast Ethernet ports, but they are scheduled for decommissioning. All of the other production boxes have at least 2 Gigabit Ethernet ports; the more traffic-heavy servers have 4-6 available (none is using more than the 2 Gigabit ports right now).

    IPv6: I want to get an IPv6 prefix from our ISP, so that every "server" has at least one IPv6 interface. We'll still need to keep the IPv4 addresses up and available for legacy clients (web servers and email at the very least). We have two IP networks right now; adding the public IPv6 addresses would make it three.

    Just use IPv6? I'm thinking about just dumping the private IPv4 range and using the IPv6 range as the primary means of all communications. If an interface starts reaching its capacity, I'd use the newly freed interfaces to create a trunk. This has the advantage that either the public or the private traffic can exceed 1 Gbps. The traffic on each interface is already analyzed regularly to predict future bandwidth use; in the rare instances where bandwidth unexpectedly peaks, QoS can ensure traffic (like our limited SSH access) is prioritized correctly so the problem can be corrected (if possible; our WAN is the bottleneck right now). It also has the advantage of not needing an entry for every private address. We may still have private DNS (or just LDAP), but it'll be much more limited in scope, with fewer entries to duplicate.

    Summary: I'm trying to make this network as "simple" as possible. At the same time, I want to make sure it's reliable, upgradeable, scalable, and (eventually) redundant. Having one IPv6 network and a legacy IPv4 network seems like the best solution to me. Regarding using assigned IPv6 addresses for both networks, sharing the available bandwidth on one interface (trunked further if needed):

    Are there any technical disadvantages (limitations, buffers, scalability)? Are there any other security considerations (aside from the firewalls mentioned above)? Are there regulations or other security requirements (like PCI-DSS) that this doesn't meet? Is there typical software for setting up a Linux network that doesn't have IPv6 support yet (logging, LDAP, Puppet)? Is there some other thing I didn't consider?


  • SQL Server 2000 and SSL Encryption

    - by Angry_IT_Guru
    We are a datacenter that hosts a SQL Server 2000 environment, which provides database services for a product we sell that is loaded as a rich-client application at each of our many clients and their workstations. Currently, the application uses straight ODBC connections from the client site to our datacenter. We need to begin encrypting the credentials -- since everything is clear-text today and the authentication is weakly encrypted -- and I'm trying to determine the best way to implement SSL on the server while minimizing the impact on the clients. A few things, however:

    1) We have our own Windows domain, and all our servers are joined to our private domain. Our clients know nothing of our domain.

    2) Typically, our clients connect to our datacenter servers either by: a) using a TCP/IP address, or b) using a DNS name that we publish via the internet, via zone transfers from our DNS servers to our customers, or via static HOSTS entries the client adds.

    3) From what I understand about enabling encryption, I can go to the Server Network Utility and select the encryption option for the protocol that I wish to encrypt, such as TCP/IP.

    4) When the encryption option is selected, I have a choice of installing a third-party certificate or a self-signed one. I have tested the self-signed one, but it has potential issues; I'll explain in a bit. If I go with a third-party cert, such as VeriSign or Network Solutions, what kind of certificate do I request? These aren't IIS certificates? When I create a self-signed one via Microsoft's certificate server, I have to select "Authentication certificate". What does this translate to in the third-party world?

    5) If I create a self-signed certificate, I understand that the "issued to" name has to match the FQDN of the server that is running SQL. In my case, I have to use my private domain name. If I use this, what does this do for my clients when trying to connect to my SQL Server? Surely they cannot resolve my private DNS names on their network. I've also verified that when the self-signed certificate is installed, it has to be in the local personal store of the user account that is running SQL Server; SQL Server will only start if the FQDN matches the "issued to" of the certificate and SQL is running under the account that has the certificate installed. If I use a self-signed certificate, does this mean I have to have every one of my clients install it to verify it?

    6) If I used a third-party certificate, which sounds like the best option, do all my clients have to have internet access when accessing my private servers over their private WAN connection, in order to verify the certificate? And what do I do about the FQDN? It sounds like they'd have to use my private domain name -- which is not published -- and could no longer use the name I set up for them.

    7) I plan on upgrading to SQL 2005 soon. Is setup of SSL any easier/better with SQL 2005 than with SQL 2000?

    Any help or guidance would be appreciated.


  • Weblogic JDBC datasource,java.sql.SQLException: Cannot obtain XAConnection weblogic.common.resourcep

    - by gauravkarnatak
    I am using a WebLogic JDBC datasource and my DB is Oracle 10g; the configuration is below. It used to work fine, but suddenly it started giving problems. Please see the exception:

        Weblogic JDBC datasource, java.sql.SQLException: Cannot obtain XAConnection
        weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool

    The datasource configuration:

        <?xml version="1.0" encoding="UTF-8"?>
        <jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/90" xmlns:sec="http://www.bea.com/ns/weblogic/90/security" xmlns:wls="http://www.bea.com/ns/weblogic/90/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/920 http://www.bea.com/ns/weblogic/920.xsd">
        XL-Reference-DS
        jdbc:oracle:oci:@abc.COM
        oracle.jdbc.driver.OracleDriver
        user DEV_260908
        password password
        dll ocijdbc10
        protocol oci
        oracle.jdbc.V8Compatible true
        baseDriverClass oracle.jdbc.driver.OracleDriver
        1
        100
        1
        true
        SQL SELECT 1 FROM DUAL
        DataJndi
        OnePhaseCommit

    This exception occurs on a dev environment where only one user is connected. I know this is related to the pool max size, but I also suspect it could be due to Oracle -- maybe Oracle isn't able to create connections. Below are my questions:

    1. Is there any debug/logging parameter to enable datasource logging, so that I can check the number of connections acquired, released, and unused in the logs?
    2. How do I check the Oracle connection limit for a particular user?
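
    For the second question, Oracle's data dictionary exposes this directly; a hedged sketch (needs SELECT access on v$session, v$parameter, and dba_profiles):

        -- sessions currently held by each user
        SELECT username, COUNT(*) FROM v$session GROUP BY username;

        -- instance-wide ceilings
        SELECT name, value FROM v$parameter WHERE name IN ('sessions', 'processes');

        -- per-user ceiling, if a profile sets one
        SELECT profile, limit FROM dba_profiles WHERE resource_name = 'SESSIONS_PER_USER';

    For the first question, the data source's Monitoring tab in the WebLogic console shows current and active connection counts without any restart; the JDBC debug flags vary by WebLogic release, so check the documentation for your version.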


  • Using a Mac for cross platform development?

    - by mdec
    Who uses Macs for cross-platform development? By cross-platform I essentially mean you can compile to target Windows or Unix (not necessarily both at the same time). I understand that this also has a lot to do with writing portable code, but I am more interested in people's experience with Mac OS X as a platform for developing software. I understand that there is a range of IDEs to choose from; I would probably use Eclipse (I like the GCC toolchain), though Xcode seems to be quite popular. Could it be used as described above? At a pinch I could always virtualize with VirtualBox, VMware Player, or Parallels to use Visual Studio (or dual-boot, for that matter). Having said that, I am open to any other suggested compilers (preferably with an IDE that uses GCC). Also, with the range of Macs available, which one would you recommend? I would prefer a laptop (as I already have a desktop) but am unsure of reasonable specifications. If you are currently using a Mac for development, I would love to hear what you develop on your Mac and what you like and don't like about it. I would primarily be developing in C/C++/Java. I am also looking to experiment with Boost and Qt, so I'm interested in hearing about any (potential) compatibility issues. If you have any other tips, I'd love to hear what you have to say.


  • WCF push to client through firewall?

    - by Sire
    See also: How does a WCF server inform a WCF client about changes? (a better solution than simple polling, e.g. Comet or long polling)

    I need to use push technology with WCF through client firewalls. This must be a common problem, and I know for a fact it works in theory (see the links below), but I have failed to get it working, and I haven't been able to find a code sample that demonstrates it. Requirements:

    - WCF clients connect to the server through TCP port 80 (netTcpBinding).
    - The server pushes back information at irregular intervals (1 min to several hours).
    - Users should not have to configure their firewalls; server pushes must pass through firewalls that have all inbound ports closed.
    - TCP duplex on the same connection is needed for this; a dual binding does not work, since a port would have to be opened on the client firewall.
    - Clients send heartbeats to the server at regular intervals (perhaps every 15 mins) so the server knows the client is still alive.
    - The server is IIS7 with WAS.

    The solution seems to be duplex netTcpBinding, based on this information: "WCF through firewalls and NATs" and "Keeping connections open in IIS". But I have yet to find a code sample that works. I've tried combining the "Duplex" and "TcpActivation" samples from Microsoft's WCF samples without any luck. Please, can someone point me to example code that works, or build a small sample app? Thanks a lot!
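
    For reference, the shape that makes this work is a duplex contract: the client opens the one TCP connection, and the server pushes on the callback channel of that same connection, which is why no inbound client port is needed. A minimal sketch (names are hypothetical, not from the linked samples):

        // service side
        [ServiceContract(CallbackContract = typeof(IPushCallback))]
        public interface IPushService
        {
            [OperationContract]
            void Subscribe();   // server stashes the callback channel here

            [OperationContract]
            void Heartbeat();   // client calls every ~15 min to keep NAT/firewall state alive
        }

        public interface IPushCallback
        {
            [OperationContract(IsOneWay = true)]
            void OnServerEvent(string payload);  // travels back over the client-initiated connection
        }

        // inside a Subscribe() implementation:
        // IPushCallback cb = OperationContext.Current.GetCallbackChannel<IPushCallback>();
        // ... store cb, and call cb.OnServerEvent(...) whenever there is news

    The endpoint would use netTcpBinding with port 80 in the address; since the callback rides the existing socket, no dual binding or client-side port configuration is involved.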


  • Getting Oracle's MD5 to match PHP's MD5

    - by Zenshai
    Hi all, I'm trying to compare an MD5 checksum generated by PHP to one generated by Oracle 10g. However, it seems I'm comparing apples to oranges. Here's what I did to test the comparison:

        //md5 tests
        //php md5
        print md5('testingthemd5function');
        print '<br/><br/>';

        //oracle md5
        $md5query = "select md5hash('testingthemd5function') from dual";
        $stid = oci_parse($conn, $md5query);
        if (!$stid) {
            $e = oci_error($conn);
            print htmlentities($e['message']);
            exit;
        }
        $r = oci_execute($stid, OCI_DEFAULT);
        if (!$r) {
            $e = oci_error($stid);
            echo htmlentities($e['message']);
            exit;
        }
        $row = oci_fetch_row($stid);
        print $row[0];

    The md5hash function (seen in the query above) uses the dbms_obfuscation_toolkit.md5 package and is defined like this:

        CREATE OR REPLACE FUNCTION PORTAL.md5hash (v_input_string in varchar2)
        return varchar2
        is
            v_checksum varchar2(20);
        begin
            v_checksum := dbms_obfuscation_toolkit.md5 (input_string => v_input_string);
            return v_checksum;
        end;

    What comes out on my PHP page is:

        29dbb90ea99a397b946518c84f45e016
        )Û¹©š9{”eÈOEà

    Can anyone help me get the two to match?
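
    The two don't match because PHP's md5() returns the digest hex-encoded, while dbms_obfuscation_toolkit.md5 returns the 16 raw digest bytes -- which is exactly what that second line of garbage looks like. Converting the raw output to lowercase hex should line the two up; a sketch of the same function with the conversion added (note the buffer also needs to grow to 32 characters):

        CREATE OR REPLACE FUNCTION PORTAL.md5hash (v_input_string in varchar2)
        return varchar2
        is
            v_checksum varchar2(32);  -- 32 hex characters, not 20
        begin
            v_checksum := lower(rawtohex(utl_raw.cast_to_raw(
                              dbms_obfuscation_toolkit.md5(input_string => v_input_string))));
            return v_checksum;
        end;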


  • concatenate rows of Clob with plsql

    - by david K
    Hi, let's consider a table that has an id and CLOB content, like:

        create table v_EXAMPLE_L (
            nip number,
            xmlcontent clob
        );

    We insert our data:

        Insert into V_EXAMPLE_L (NIP,XMLCONTENT) values (17852,'delta548484646846484');
        Insert into V_EXAMPLE_L (NIP,XMLCONTENT) values (17852,'omega545648468484');
        Insert into V_EXAMPLE_L (NIP,XMLCONTENT) values (17852,'gamma54564846qsdqsdqsdqsd8484');

    I'm trying to write a function that concatenates the CLOB rows returned by a select. I mean without having to pass multiple parameters such as the table name; I should only give it a query returning the column that contains the CLOBs, and it should handle the rest:

        CREATE OR REPLACE function assemble_clob(q varchar2) return clob
        is
            v_clob clob;
            tmp_lob clob;
            hold VARCHAR2(4000);
            --cursor c2 is select xmlcontent from V_EXAMPLE_L where id=17852
            cur sys_refcursor;
        begin
            OPEN cur FOR q;
            LOOP
                FETCH cur INTO tmp_lob;
                EXIT WHEN cur%NOTFOUND;
                --v_clob := v_clob || XMLTYPE.getClobVal(tmp_lob.xmlcontent);
                v_clob := v_clob || tmp_lob;
            END LOOP;
            return (v_clob);
            --return (dbms_xmlquery.getXml( dbms_xmlquery.set_context("Select 1 from dual")) )
        end assemble_clob;

    The function is broken. If anybody could give me a hand, thanks a lot -- I'm a noob in SQL.
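
    Two things stand out in that function: v_clob is never initialized, and the cursor is never closed. A hedged repair using a temporary LOB and DBMS_LOB.APPEND, which is the canonical way to accumulate CLOBs:

        CREATE OR REPLACE FUNCTION assemble_clob(q varchar2) RETURN clob
        is
            v_clob  clob;
            tmp_lob clob;
            cur     sys_refcursor;
        begin
            DBMS_LOB.CREATETEMPORARY(v_clob, TRUE);  -- give the result LOB a real locator
            OPEN cur FOR q;
            LOOP
                FETCH cur INTO tmp_lob;
                EXIT WHEN cur%NOTFOUND;
                DBMS_LOB.APPEND(v_clob, tmp_lob);    -- append in place, no temp-lob churn
            END LOOP;
            CLOSE cur;                               -- the original leaked the cursor
            RETURN v_clob;
        end assemble_clob;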


  • How do I declare an IStream in idl so visual studio maps it to s.w.interop.comtypes?

    - by Grahame Grieve
    Hi, I have a COM object that needs to take a stream from a C# client and process it. It would appear that I should use IStream. So I write my IDL as below, use MIDL to compile it to a TLB, compile up my solution, register it, and then add a reference to my library to a C# project. Visual Studio creates an IStream definition in my own library. How can I stop it from doing that and get it to use the COMTypes IStream? It seems there would be one of 3 answers:

    1. Add some import to the IDL so it doesn't redeclare IStream (importing MSCOREE does that, but doesn't solve the C# problem).
    2. Somehow alias the IStream in Visual Studio -- but I don't see how to do this.
    3. All my thinking is completely wrong, and I shouldn't be using IStream at all.

    Help... thanks.

        [
            uuid(3AC11584-7F6A-493A-9C90-588560DF8769),
            version(1.0),
        ]
        library TestLibrary
        {
            importlib("stdole2.tlb");
            [
                uuid(09FF25EC-6A21-423B-A5FD-BCB691F93C0C),
                version(1.0),
                helpstring("Just for testing"),
                dual,
                nonextensible,
                oleautomation
            ]
            interface ITest: IDispatch
            {
                [id(0x00000006), helpstring("Testing stream")]
                HRESULT _stdcall LoadFromStream([in] IStream * stream, [out, retval] IMyTest ** ResultValue);
            };
            [
                uuid(CC2864E4-55BA-4057-8687-29153BE3E046),
                noncreatable,
                version(1.0)
            ]
            coclass HCTest
            {
                [default] interface ITest;
            };
        };


  • SimpleJdbcCall ignoring JdbcTemplate fetch size

    - by user289429
    We are calling a PL/SQL stored procedure through Spring's SimpleJdbcCall, and the fetch size set on the JdbcTemplate is being ignored by SimpleJdbcCall: the row mapper's result set fetch size is 10, even though we have set the JdbcTemplate fetch size to 200. Any idea why this happens and how to fix it? I have printed the fetch size of the result set in the row mapper in both snippets below -- once it is 200, the other time it is 10, even though I use the same JdbcTemplate on both occasions.

    Direct execution through the JdbcTemplate returns a fetch size of 200 in the row mapper:

        jdbcTemplate = new JdbcTemplate(ds);
        jdbcTemplate.setResultsMapCaseInsensitive(true);
        jdbcTemplate.setFetchSize(200);
        List temp = jdbcTemplate.query("select 1 from dual", new ParameterizedRowMapper() {
            public Object mapRow(ResultSet resultSet, int i) throws SQLException {
                System.out.println("Direct template : " + resultSet.getFetchSize());
                return new String(resultSet.getString(1));
            }
        });

    Execution through SimpleJdbcCall always returns a fetch size of 10 in the row mapper:

        jdbcCall = new SimpleJdbcCall(jdbcTemplate).withSchemaName(schemaName)
                .withCatalogName(catalogName).withProcedureName(functionName);
        jdbcCall.returningResultSet((String) outItValues.next(),
                new ParameterizedRowMapper<Map<String, Object>>() {
            public Map<String, Object> mapRow(ResultSet rs, int row) throws SQLException {
                System.out.println("Through simplejdbccall " + rs.getFetchSize());
                return extractRS(rs, row);
            }
        });
        outputList = (List<Map<String, Object>>) jdbcCall.executeObject(List.class, inParam);
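
    A likely explanation: a ref cursor returned from a stored procedure is a separate statement opened inside the database, so it never inherits the fetch size Spring set on the CallableStatement, and the Oracle driver falls back to its connection-level default of 10. One hedged workaround is to raise that default for the whole connection (property name as documented for the Oracle JDBC driver):

        // sketch: set the driver-level default row prefetch when creating the DataSource
        java.util.Properties props = new java.util.Properties();
        props.setProperty("defaultRowPrefetch", "200");

        oracle.jdbc.pool.OracleDataSource ds = new oracle.jdbc.pool.OracleDataSource();
        ds.setURL("jdbc:oracle:thin:@//dbhost:1521/SERVICE");  // hypothetical URL
        ds.setUser("scott");
        ds.setPassword("tiger");
        ds.setConnectionProperties(props);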


  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running an XNA app using the Reach profile in VMware Fusion; the host OS is Mac OS X, and the VM is Windows XP SP3 (my dual-boot OS). Running on a MacBook Pro with an NVIDIA 320M graphics card.

    - When I am booted into XP natively, my code works. The code draws cubes that are set up using vertex buffers.
    - When a friend runs this same code on Windows 7, it also works for him just fine.
    - When I run my code in the VM, it doesn't work. I have billboarding sprites running in a shader program, and that part displays fine; the geometry just doesn't appear. I get no crashing or errors. I tried Debug and Release.

    This is a very basic operation, so I'm thinking VMware isn't the problem, but my code is.

    My init code:

        var vertexArray = verts.ToArray();
        var indexArray = indices.ToArray();

        indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly);
        indexBuffer.SetData(indexArray);

        vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly);
        vertexBuffer.SetData(vertexArray);

    My Draw code:

        // problem isn't here, tried no cull
        GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true };

        // Update View and Projection
        TileEffect.View = ((Game1)Game).Camera.View;
        TileEffect.Projection = ((Game1)Game).Camera.Projection;
        TileEffect.CurrentTechnique.Passes[0].Apply();

        GraphicsDevice.SetVertexBuffer(vertexBuffer);
        GraphicsDevice.Indices = indexBuffer;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3);

    For LoadContent:

        TileEffect = new BasicEffect(GraphicsDevice)
        {
            World = Matrix.Identity,
            View = ((Game1)Game).Camera.View,
            Projection = ((Game1)Game).Camera.Projection,
            VertexColorEnabled = true
        };


  • Frame Accurate Browser Launchable Video Player ... ?

    - by cliftonc
    I have a requirement where I need to enable full-screen playback of an H.264 MPEG-4 (thanks for the correction!) video from a local network, launchable from a browser link on a Windows workstation, and frame accurate. By frame accurate I mean that I need to be able to scrub through the video in the same way you would with a VTR: stop at a frame, then move backwards and forwards frame by frame (it is for a very specific compliance requirement where we have to be able to check every frame for anything that is potentially against broadcasting guidelines). The application itself is used to capture notes while viewing the material, so the end model is a dual-monitor workstation with a web form on one screen, the video playing full screen on the second (no issue launching the video and manually moving it to the second screen), and the user controlling the video via keyboard shortcuts or a jog shuttle. I have looked at QuickTime, but the Java bindings seem to be dead or nearly so; Flash isn't frame accurate; VLC, given its streaming heritage, seems to only be able to step forward by a frame and not backwards; and all I have left are commercial offerings that, in my experience, are difficult and expensive to change. Any ideas of where I should look, or alternative options? Any advice appreciated!


  • MySQL get list of unique items in a SET

    - by The Disintegrator
    I have a products table with a column of type SET (called especialidad), with these possible values:

        [0] => NADA
        [1] => Freestyle / BMX
        [2] => Street / Dirt
        [3] => XC / Rural Bike
        [4] => All Mountain
        [5] => Freeride / Downhill / Dual / 4x
        [6] => Ruta / Triathlon / Pista
        [7] => Comfort / City / Paseo
        [8] => Kids
        [9] => Playera / Chopper / Custom
        [10] => MTB recreacion
        [11] => Spinning / Fitness

    Any given product can have one or many of these, e.g. "Freestyle / BMX,Street / Dirt". Given a subset of the rows, I need a list of all the especialidad values present, but the list needs to be exploded and unique. For example, given:

        Article1: "Freestyle / BMX,Street / Dirt"
        Article2: "Street / Dirt,Kids"
        Article3: "Kids"
        Article4: "Street / Dirt,All Mountain"
        Article5: "Street / Dirt"

    I need a list like this:

        Freestyle / BMX
        Street / Dirt
        Kids
        All Mountain

    I tried GROUP_CONCAT with DISTINCT, but I get a list of the distinct combinations, not of the individual values.
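
    Since MySQL stores a SET value as a comma-separated list, one way to explode it is to probe each possible member with FIND_IN_SET against the subset of rows. A sketch, assuming a helper table especialidades(name) holding the twelve values above (the helper table and the WHERE clause are illustrative):

        SELECT DISTINCT e.name
        FROM especialidades AS e
        JOIN products AS p
          ON FIND_IN_SET(e.name, p.especialidad) > 0
        WHERE p.id IN (1, 2, 3, 4, 5);  -- however the subset is selected

    FIND_IN_SET matches a single value against a comma-separated list, so the DISTINCT result comes out already exploded: one row per especialidad that occurs at least once in the subset.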


  • Ideal Multi-Developer Lamp Stack?

    - by devians
    I would like to build an 'ideal' LAMP development stack:

    - Dual servers (virtualized, ESX): Apache/PHP on one, databases (MySQL, PgSQL, etc.) on the other.
    - User (developer) manageable mini-environments, or instances.
    - Each developer instance shares the top-level config (available modules, default config, etc.).
    - A developer should have control over their Apache and PHP version for each project.
    - A developer might be able to change minor settings, e.g. magic_quotes on for legacy code.
    - Each project would determine its database provider in its code.

    The idea is that it is one administrable server that I control, providing globally configured things like APC, Memcached, XDebug, etc. Then, by moving into subsets for each project, I can allow my users to quickly control their environments for their various projects. Essentially I'm proposing the typical setup of a developer running their own stack on their own machine, but centralized. In this way I'd hope to avoid problems like cross-OS code issues, database inconsistencies, and slightly different installs producing bugs. I'm happy to manage this with custom builds from source, but if at all possible it would be great to have a large portion of it managed by some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?


  • Programming Technique: How to create a simple card game

    - by Shyam
    Hi, as I am learning the Ruby language, I am getting closer to actual programming, so I was thinking of creating a simple card game. My question isn't Ruby-oriented, but I do want to learn how to solve this problem with a genuine OOP approach. In my card game I want to have four players, using a standard deck with 52 cards: no jokers/wildcards. In the game I won't use the Ace as a dual card; it is always the highest card. So, the programming problems I wonder about are the following:

    1. How can I sort/randomize the deck of cards? There are four suits, each having 13 values. In the end there can only be unique cards, so picking random values could generate duplicates.
    2. How can I implement a simple AI? As there are tons of card games, someone will have figured this part out already, so references would be great.

    I am a true Ruby newbie, and my goal here is to learn to solve problems, so pseudo code would be great, just to understand how to solve the problem programmatically. I apologize for my grammar and writing style if it's unclear, for it is not my native language. Also, pointers to sites where such challenges are explained would be a great resource! Thank you for your comments, answers and feedback!
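
    For the first problem, the trick is not to pick random cards at all: build the 52 unique cards once and shuffle the whole deck (Ruby's Array#shuffle implements a Fisher-Yates shuffle), so duplicates are impossible by construction. A small sketch:

        SUITS = %w(Clubs Diamonds Hearts Spades)
        RANKS = %w(2 3 4 5 6 7 8 9 10 Jack Queen King Ace)  # Ace highest

        # Every suit/rank combination exactly once: 52 unique cards.
        deck = SUITS.product(RANKS).map { |suit, rank| { :suit => suit, :rank => rank } }

        deck = deck.shuffle                 # Fisher-Yates under the hood
        hands = deck.each_slice(13).to_a    # deal 13 cards to each of 4 players

    For the AI, a rule-based player (an object that looks at the cards it holds plus what has been played, and picks the first move matching an ordered list of rules) is the usual starting point, and it maps naturally onto an OOP design: a Player class with a choose_card method that smarter subclasses override.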


  • Always get exception when trying to Fill data to DataTable

    - by Sambath
    The code below is just a test to connect to an Oracle database and fill data into a DataTable. After executing the statement da.Fill(dt); I always get the exception "Exception of type 'System.OutOfMemoryException' was thrown." Has anyone met this kind of error? My project runs on VS 2005, and my Oracle database version is 11g. My computer is running Windows Vista. If I copy this code to run on Windows XP, it works fine. Thank you.

        using System.Data;
        using Oracle.DataAccess.Client;
        ...
        string cnString = "data source=net_service_name; user id=username; password=xxx;";
        OracleDataAdapter da = new OracleDataAdapter("select 1 from dual", cnString);
        try
        {
            DataTable dt = new DataTable();
            da.Fill(dt); // Got error here
            Console.Write(dt.Rows.Count.ToString());
        }
        catch (Exception e)
        {
            Console.Write(e.Message); // Exception of type 'System.OutOfMemoryException' was thrown.
        }

    Update: I have no idea what happened to my computer. I just reinstalled Oracle 11g, and now my code works normally.


  • F#: Tell me what I'm missing about using Async.Parallel

    - by JBristow
    OK, so I'm doing Project Euler Problem #14, and I'm fiddling around with optimizations in order to feel F# out. In the following code:

        let evenrule n = n / 2L
        let oddrule n = 3L * n + 1L
        let applyRule n = if n % 2L = 0L then evenrule n else oddrule n

        let runRules n =
            let rec loop a final =
                if a = 1L then final
                else loop (applyRule a) (final + 1L)
            n, loop (int64 n) 1L

        let testlist = seq { for i in 3 .. 2 .. 1000000 do yield i }

        let getAns sq = sq |> Seq.head

        let seqfil (a, acc) (b, curr) =
            if acc = curr then (a, acc)
            else if acc < curr then (b, curr)
            else (a, acc)

        let pmap f l =
            seq { for a in l do yield async { return f a } }
            |> Seq.map Async.RunSynchronously

        let pmap2 f l =
            seq { for a in l do yield async { return f a } }
            |> Async.Parallel
            |> Async.RunSynchronously

        let procseq f l = l |> f runRules |> Seq.reduce seqfil |> fst

        let timer = System.Diagnostics.Stopwatch()

        timer.Start()
        let ans1 = testlist |> procseq Seq.map
        printfn "%A\t%A" ans1 timer.Elapsed  // 837799 00:00:08.6251990
        timer.Reset()

        timer.Start()
        let ans2 = testlist |> procseq pmap
        printfn "%A\t%A" ans2 timer.Elapsed  // 837799 00:00:12.3010250
        timer.Reset()

        timer.Start()
        let ans3 = testlist |> procseq pmap2
        printfn "%A\t%A" ans3 timer.Elapsed  // 837799 00:00:58.2413990
        timer.Reset()

    Why does the Async.Parallel code run REALLY slowly in comparison to the straight map? I know I shouldn't see that much of an effect, since I'm only on a dual-core Mac. Please note that I do NOT want help solving Problem #14; I just want to know what's up with my parallel code.
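
    A likely explanation: each Collatz chain here is a microsecond or so of work, so wrapping every element in its own async means the cost of allocating and scheduling roughly 500,000 asyncs dwarfs the computation, and Async.Parallel additionally materializes all of them up front. (Note also that pmap isn't parallel at all: Seq.map Async.RunSynchronously runs each async to completion one at a time, so ans2 mostly measures per-item async overhead.) A hedged alternative that amortizes the overhead by letting the library partition an array into chunks:

        // sketch: materialize once, parallelize in bulk
        let ans4 =
            testlist
            |> Seq.toArray
            |> Array.Parallel.map runRules
            |> Array.reduce seqfil
            |> fst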


  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features, such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far! The problem is that I have quite a bit of training data: I show it over 100 examples of what a car looks like, then 100 examples of what a person looks like, then over 100 of what a dog looks like, etc. Currently it takes about one week to train the network. This is kind of killing my progress, as I need to adjust and retrain. I am using Neuroph as the low-level neural network API. I am running a dual quad-core machine (16 hardware threads with hyperthreading), so this should be fast, yet my processor utilization sits at only 5%. Are there any tricks to improve Neuroph's performance? Or Java performance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but I do not know a great deal about performance programming.
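
    5% of a 16-thread machine is roughly one busy thread, so the training loop is almost certainly single-threaded. Since the 100+ feature detectors are trained independently, the cheapest win is usually to train them concurrently, one per core; a sketch (FeatureDetector is a hypothetical wrapper holding a Neuroph network and its training set -- adapt to however the networks are stored):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        for (final FeatureDetector d : detectors) {
            pool.submit(new Runnable() {
                public void run() {
                    d.getNetwork().learn(d.getTrainingSet()); // each network trains on its own core
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(7, TimeUnit.DAYS);

    Java-side basics also matter at this scale: run the server VM with a generous heap (-server -Xmx...), and profile before rewriting anything, so the effort goes where the hot spot actually is.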


  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests, we experience significant performance problems handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting or releasing a connection. The blocking prevented concurrent MDB instances (threads) from running, so performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread:

        Name: JMS Resource Adapter-worker-23
        State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a owned by: JMS Resource Adapter-worker-19
        Total blocked: 18,426 Total waited: 0

        Stack trace:
        org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916)
        org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91)
        - locked org.apache.commons.dbcp.PoolableConnection@1bcba8
        org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147)
        com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290)
        ....

    A couple of questions:

    1. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well). Are there any DBCP configurations that may fix the blocking?
    2. Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) is it to replace it with another library?

    Just in case, this is how we define the data source in OpenEJB (openejb.xml):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 5
            MaxActive 30
            ValidationQuery SELECT 1 FROM DUAL
            TestOnBorrow true
        </Resource>
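
    Commons Pool 1.x serializes borrows and returns through a single monitor on GenericObjectPool, so heavy MDB concurrency will always show some blocking there; before swapping libraries, two hedged tweaks are worth measuring. TestOnBorrow true costs a validation round trip to Oracle on every checkout, so turning it off makes each borrow much cheaper, and pre-filling the pool with MaxIdle raised to MaxActive avoids churn from connections being closed and reopened under load:

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 30
            MaxActive 30
            MaxIdle 30
            TestOnBorrow false
            ValidationQuery SELECT 1 FROM DUAL
        </Resource>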


  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++ and how they are resolved (or found, or matched -- I don't know what the terminology is; we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>
        using namespace std;

        class A
        {
        public:
            void f()
            {
                // do nothing
            }
        };

        class B: public A
        {
        public:
            void f()
            {
                // do nothing
            }
        };

        int main()
        {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real 0m25.834s user 0m25.742s sys 0m0.000s
        [felix@the-machine C]$ time ./virtual
        real 0m24.630s user 0m24.472s sys 0m0.003s
        [felix@the-machine C]$ time ./normal
        real 0m25.860s user 0m25.735s sys 0m0.007s
        [felix@the-machine C]$ time ./virtual
        real 0m24.514s user 0m24.475s sys 0m0.000s
        [felix@the-machine C]$ time ./normal
        real 0m26.022s user 0m25.795s sys 0m0.013s
        [felix@the-machine C]$ time ./virtual
        real 0m24.503s user 0m24.468s sys 0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests, Arch Linux with GCC 4.5.0, compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings.
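
    One caveat worth adding before drawing conclusions: both binaries were compiled without optimization, so the test compares two flavors of raw call overhead plus loop bookkeeping, and small differences in code layout and alignment can easily produce a steady second either way. With optimization the picture changes completely, since the compiler can inline the empty non-virtual f() (and even devirtualize the virtual one here, because the dynamic type is provably B), at which point the loop may be eliminated outright:

        $ g++ -O2 test.cpp -o normal   # empty f() inlined away; the loop can vanish entirely
        $ time ./normal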


  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file containing several thousand requests (between 10,000 and 20,000 -- I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):

    - string.Split - splits the line values into an array of values
    - string.Contains - checking if the user agent contains a specific agent string (to determine the browser ID)
    - string.ToLower - various purposes
    - StreamReader.ReadLine - to read the log file line by line
    - string.StartsWith - determine if a line is a column-definition line or a line with values

    There were some others that I was able to replace. For example, the dictionary getter was taking lots of resources too, which I had not expected, since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core, and the total time to load the file I mentioned is about 1 second. This is really bad. Imagine a site that has tens of thousands of visits a day: it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this is just a .NET limitation and I can't do much about it.
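
    Each hot spot on that list has a cheaper equivalent, because most of the cost is allocation: ToLower copies the whole string just to compare it, Contains + ToLower can become one ordinal case-insensitive IndexOf, and Split allocates an array plus one string per field even when only a few fields are queried. A hedged sketch of the pattern (the file path and field layout are hypothetical):

        using System;
        using System.IO;

        foreach (string line in File.ReadLines(@"C:\logs\access.log"))  // or a StreamReader loop on .NET < 4
        {
            if (line.StartsWith("#", StringComparison.Ordinal))
                continue;  // column-definition line

            // browser ID without ToLower(): no allocation, case-insensitive scan
            bool isMsie = line.IndexOf("MSIE", StringComparison.OrdinalIgnoreCase) >= 0;

            // pull only the fields you need instead of Split()ing all of them
            int first = line.IndexOf(' ');
            int second = line.IndexOf(' ', first + 1);
            string secondField = line.Substring(first + 1, second - first - 1);
        }

    Parsing each line once into a typed record and querying the records -- rather than re-scanning raw strings per query -- usually pays off even more than the micro-optimizations.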


  • Windows Screensaver Multi Monitor Problem

    - by Bryan
    I have a simple screensaver that I've written, which has been deployed to all of our company's client PCs. As most of our PCs have dual monitors, I took care to ensure that the screensaver runs on both displays. This works fine; however, on some systems where the primary screen has been swapped (to the left monitor), the screensaver only works on the left (primary) screen. The offending code is below. Can anyone see anything I've done wrong, or a better way of handling this? For info, the context of "this" is the screensaver form itself.

        // Make form full screen and on top of all other forms
        int minY = 0;
        int maxY = 0;
        int maxX = 0;
        int minX = 0;

        foreach (Screen screen in Screen.AllScreens)
        {
            // Find the bounds of all screens attached to the system
            if (screen.Bounds.Left < minX) minX = screen.Bounds.Left;
            if (screen.Bounds.Width > maxX) maxX = screen.Bounds.Width;
            if (screen.Bounds.Bottom < minY) minY = screen.Bounds.Bottom;
            if (screen.Bounds.Height > maxY) maxY = screen.Bounds.Height;
        }

        // Set the location and size of the form, so that it
        // fills the bounds of all screens attached to the system
        Location = new Point(minX, minY);
        Height = maxY - minY;
        Width = maxX - minX;
        Cursor.Hide();
        TopMost = true;
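
    The bounds math itself looks like the bug: screen.Bounds.Width and Height are sizes, not coordinates, so the union rectangle comes out wrong exactly when a monitor's origin is negative -- which is what happens when the primary screen is the right-hand one. Windows Forms already computes the union of all displays, so a hedged simplification of the whole block:

        // SystemInformation.VirtualScreen is the bounding rectangle of all
        // monitors, including negative origins when a screen sits left of
        // or above the primary one
        Rectangle bounds = SystemInformation.VirtualScreen;
        Location = bounds.Location;
        Size = bounds.Size;
        Cursor.Hide();
        TopMost = true;

    If you keep the manual loop instead, track the minimums with Bounds.Left/Bounds.Top and the maximums with Bounds.Right/Bounds.Bottom rather than Width and Height.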


  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "pngout" utility on a few hundred PNG files. I simply did this:

        find $BASEDIR -iname "*png" -exec pngout {} \;

    Then I looked at my CPU monitor and noticed only one of the cores was in use, which is quite sad. In this day and age of dual, quad, octo and hexa(?) core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utilities are single-threaded -- I already had this case with MP3 encoders.) Would simply running all the pngout processes in the background do? How would my find command look then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, that would mean swapping between three hundred processes, which doesn't seem great anyway! Or should I copy my three hundred or so files into "nb dirs", where "nb dirs" is the number of cores, then run "nb finds" concurrently (which would be close enough)? But how would I do this?
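
    The usual trick is to hand the file list to xargs, which caps the number of concurrent processes for you -- no directory shuffling, and finished slots are refilled so all cores stay busy until the queue drains:

        # -P 4 = up to four pngout processes at once (match your core count);
        # -print0/-0 keeps filenames with spaces safe
        find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -n 1 -P 4 pngout

    GNU parallel, if installed, does the same with nicer job handling: find "$BASEDIR" -iname '*.png' | parallel pngout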


  • Does the COM server have to call SysFreeString() for an [out] parameter?

    - by sharptooth
    We have the following interface:

        [object, uuid("uuidhere"), dual]
        interface IInterface : IDispatch
        {
            [id(1), propget]
            HRESULT CoolProperty([out, retval] BSTR* result);
        }

    Now there's a minor problem. On one hand the parameter is [out], so any value can be passed as input; the parameter becomes valid only upon successful return. On the other hand, there's this MSDN article, linked to from many pages, that basically says (in the last paragraph) that if any function is passed a BSTR*, it must free the string before assigning a new one. That's horrifying. If that article is right, it means that all callers must always pass valid BSTRs (maybe null BSTRs), otherwise the BSTR passed can be leaked; and if the caller passes a random value and the callee tries to call SysFreeString(), it runs into undefined behavior -- so the convention is critical. Then what's the point of the [out] attribute? What would be the difference between [in, out] and [out] in this situation? Is that article right? Do I need to free the passed BSTR [out] parameter before assigning a new one?

