Search Results

Search found 4241 results on 170 pages for 'dual nic'.


  • Multiple Homed Windows 2008 Server / Windows 7 Client

    - by Daniel Scott
    I have a small Windows 2008 network, with some Windows 7 clients. The clients are both laptops with docking stations, and I would like them to communicate with the Windows 2008 server (for file sharing) through the wired network whilst they're docked. Internet connectivity for all machines (clients and server) is via a wireless LAN, so the wireless adapter in the Windows 7 clients stays active while they're docked. When the laptops are un-docked, it would be nice to still be able to contact the Windows 2008 server for print sharing (and slower file sharing) - hence the server also being on the wireless LAN.

    The Windows 2008 server is running Active Directory, DHCP and DNS. It controls DHCP leases on the wired network and holds the DNS records for "myserver.mycompany.local", which is what the file-sharing clients connect to. Ideally I'd like the DNS records to return the wired IP first so that this is the address that the laptops will attempt initially - but there doesn't seem to be a way to do that? At present the server's IP on the wireless LAN comes out of an nslookup above the wired LAN IP.

    The multi-homing works perfectly - but in the wrong order! Switch on the wireless LAN and ping myserver and it goes to the wireless IP. Disable the wireless on the client and do the same ping again, and after a couple of seconds it starts pinging the wired address. Does anyone have any suggestions on how to make this work in a predictable order? - or even if it can work.

    Alternative 1? If it can't work, then would this work: remove the wireless adapter from the server, put a wireless router/bridge on the wired network (set up to route to/from the wireless LAN's subnet), then configure the clients with two routes to the (now) single IP of the server, with metrics favouring direct communication over the wired LAN first?

    Alternative 2? Should I instead single-home the laptops so all of their connectivity is via the wired LAN while they're docked (and route via the Windows 2008 server - or a dedicated wireless bridge/router)? My concern here is that I'd like undocking to be seamless - and if the clients are in the middle of downloading something from the internet, I wouldn't want whatever they're doing interrupted as they switch IP addresses onto the wireless network. Perhaps this isn't the case and I'm concerned over nothing? Any thoughts? :)

    UPDATE: I seem to have cracked it (at least DNS entries come out in the order I hope for - and pinging the server with various combinations of wired, wireless and both interfaces enabled uses the IP I want). I set the binding order of the NICs on the server (which is acting as domain controller, DHCP and DNS server) so that the wired NIC is before the wireless adapter. (Start -- type "Network Interfaces" -- select "View Network Connections" -- press Alt to show the classic dropdown menus -- Advanced -- Advanced Settings.)

    Now an nslookup (from the client) of the server's hostname returns the wired IP first, followed by the wireless IP. The wired IP now seems to be used whenever it's contactable. Incidentally, the metrics on the wired and wireless routes (on the client) also favour the wired LAN (based on Windows' automatically assigned metrics) - but this was always the case, even when I was having trouble getting the wired IP to be "favoured". I'm not entirely sure if this is coincidence - or if a DNS server running on Windows, handing back IP addresses for itself, does actually take the binding order of its own network interfaces into account?
    It would be interesting to hear from someone who can confirm or deny that (or confirm that the binding order on the server plays a role for some other reason?)
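
    A quick way to test the route-metric idea mentioned above (rather than relying on Windows' automatic metrics) is to pin the interface metrics explicitly on the client. This is only a hedged sketch - the adapter names are assumptions and must match whatever the connections are called locally:

        rem hypothetical sketch: make the wired NIC preferred over wireless
        netsh interface ipv4 set interface "Local Area Connection" metric=10
        netsh interface ipv4 set interface "Wireless Network Connection" metric=50

    With the wired metric lower, the wired route should win whenever both interfaces are up, independent of the DNS answer order.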

    Read the article

  • CentOS 6.2 Bridge Setup for KVM

    - by Gaia
    I'm trying to set up bridged networking with KVM on CentOS 6.2 to no avail. There are plenty of docs and tutorials about it, but they all seem to conflict or don't provide info specific enough to my situation. I just don't get it. I access the host via public IP "xxx.xxx.128.58". All other available IPs (/29) should be bridged and made available to the only KVM guest (running a public facing LAMP stack) that will be setup on this machine. The amazingly unhelpful NOC people assigned the extra IPs to eth1. Is this correct? Should br0 bridge to eth0 or eth1? How do I set this up? Here is the relevant info:

        eth0      Link encap:Ethernet  HWaddr 00:25:90:68:FE:BC
                  inet6 addr: fe80::225:90ff:fe68:febc/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:763 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:550811 (537.9 KiB)  TX bytes:648 (648.0 b)
                  Memory:fb980000-fba00000

        eth1      Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                  inet addr:xxx.xxx.128.58  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                  inet6 addr: fe80::225:90ff:fe68:febd/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1806 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1505 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:133166 (130.0 KiB)  TX bytes:106070 (103.5 KiB)
                  Memory:fb900000-fb980000

        eth1:0    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                  inet addr:xxx.xxx.128.59  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fb900000-fb980000

        eth1:1    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                  inet addr:xxx.xxx.128.60  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fb900000-fb980000

        eth1:2    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                  inet addr:xxx.xxx.128.61  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fb900000-fb980000

        eth1:3    Link encap:Ethernet  HWaddr 00:25:90:68:FE:BD
                  inet addr:xxx.xxx.128.62  Bcast:xxx.xxx.128.63  Mask:255.255.255.248
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Memory:fb900000-fb980000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        virbr0    Link encap:Ethernet  HWaddr 52:54:00:62:55:68
                  inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        > cat /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=XXXX.domain.com

        > brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.00259068febc       no              eth0
        virbr0          8000.525400625568       yes             virbr0-nic

        > ls -fl | grep ifcfg
        -rw-r--r--  1 root root 198 Jun  7 10:58 ifcfg-eth0
        -rw-r--r--. 1 root root 254 Oct  7  2011 ifcfg-lo
        -rw-r--r--  1 root root  77 Jun  6 18:51 ifcfg-eth1-range0
        -rw-r--r--  1 root root 168 Jun  6 18:50 ifcfg-eth1

        > cat ifcfg-eth0
        DEVICE="eth0"
        BOOTPROTO="static"
        BRIDGE="br0"
        HWADDR="00:25:90:68:FE:BC"
        IPV6INIT="yes"
        MTU="1500"
        NM_CONTROLLED="yes"
        ONBOOT="yes"
        TYPE="Ethernet"
        IPADDR="yyy.yyy.216.131"
        NETMASK="255.255.255.128"

        > cat ifcfg-eth1
        DEVICE="eth1"
        HWADDR="00:25:90:68:FE:BD"
        NM_CONTROLLED="yes"
        ONBOOT="yes"
        BOOTPROTO="static"
        IPADDR="xxx.xxx.128.58"
        NETMASK="255.255.255.248"
        GATEWAY="xxx.xxx.128.57"

        > cat ifcfg-eth1-range0
        IPADDR_START="xxx.xxx.128.59"
        IPADDR_END="xxx.xxx.128.62"
        CLONENUM_START="0"

        Kernel IP routing table
        Destination     Gateway          Genmask          Flags Metric Ref  Use Iface
        xxx.xxx.128.56  *                255.255.255.248  U     0      0      0 eth1
        192.168.122.0   *                255.255.255.0    U     0      0      0 virbr0
        link-local      *                255.255.0.0      U     1003   0      0 eth1
        default         xxx.xxx.128.57   0.0.0.0          UG    0      0      0 eth1
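
    Since the public /29 sits on eth1, the guest-facing bridge would normally be attached to eth1 (the existing br0 currently only contains eth0, which is on a different range). A minimal, hedged sketch of what that could look like - the file contents below are assumptions, and the eth1:x alias range would be removed because the guest holds the extra addresses itself:

        # /etc/sysconfig/network-scripts/ifcfg-br1  (hypothetical sketch)
        DEVICE=br1
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=xxx.xxx.128.58
        NETMASK=255.255.255.248
        GATEWAY=xxx.xxx.128.57
        DELAY=0

        # /etc/sysconfig/network-scripts/ifcfg-eth1  (eth1 becomes a plain port of br1)
        DEVICE=eth1
        HWADDR=00:25:90:68:FE:BD
        ONBOOT=yes
        BRIDGE=br1
        NM_CONTROLLED=no

    After restarting the network, the KVM guest's interface would be attached to br1 (instead of the default NAT bridge virbr0) and the remaining /29 addresses configured inside the guest.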

    Read the article

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (but the same was on 11.10 before upgrading) WD MyBook, 2TB, no RAID (or RAID0, not completely sure, anyway no mirroring, both 1TB disks are in use, mounted as a single device). Formatted to XFS, normally used for big movie files. Connected to Firewire 800. At some point the LED started going up and down as when constantly reading/writing. The device gives access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected becomes available. xfs_check with no results. xfs_repair did something, but looks like didn't fix any error. Then after a massive read (checking 1.5GB torrent file for integrity) becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS? UPD: not sure how relevant this is, but here are dmesg output [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled [14380.633356] SGI XFS Quota Management subsystem [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries [14453.904818] scsi6 : SBP-2 IEEE-1394 [14453.905014] scsi7 : SBP-2 IEEE-1394 [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries) [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4 [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0 [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries) [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB) [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00 [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4 [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13 [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4 [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk [14454.272149] ses 7:0:0:1: Attached Enclosure device [14606.090024] XFS (sdc3): Mounting Filesystem [14612.048343] XFS (sdc3): Starting recovery (logdev: internal) [14620.697636] XFS (sdc3): Ending recovery (logdev: internal) [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff]) [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b) [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002) [14752.680170] ehci_hcd 
0000:00:1a.7: PME# disabled [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22 [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64

    Read the article

  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet but rather just for the sake of learning iptables and *nix security more thoroughly. Everything is well commented - so if anyone sees something I missed please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons. Layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine but it's all or nothing. :) Many thanks, Peter H.

        *nat
        #Flush previous rules, chains and counters for the 'nat' table
        -F
        -X
        -Z
        #Redirect traffic to alternate internal ports
        -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
        -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
        -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
        -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
        COMMIT

        *filter
        #Flush previous settings, chains and counters for the 'filter' table
        -F
        -X
        -Z
        #Set default behavior for all connections and protocols
        -P INPUT DROP
        -P OUTPUT DROP
        -A FORWARD -j DROP
        #Only accept loopback traffic originating from the local NIC
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
        #Accept all outgoing non-fragmented traffic having a valid state
        -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
        -A INPUT -f -j DROP
        #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
        -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
        #Declaration of custom chains
        -N INSPECT_TCP_FLAGS
        -N INSPECT_STATE
        -N INSPECT
        #Drop incoming tcp connections with invalid tcp-flags
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
        #Accept incoming traffic having either an established or related state
        -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
        #Drop new incoming tcp connections if they aren't SYN packets
        -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
        #Drop incoming traffic with invalid states
        -A INSPECT_STATE -m state --state INVALID -j DROP
        #INSPECT chain definition
        -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
        -A INSPECT -j INSPECT_STATE
        #Route incoming traffic through the INSPECT chain
        -A INPUT -j INSPECT
        #Accept redirected HTTP traffic via HA reverse proxy
        -A INPUT -p tcp --dport 8080 -j ACCEPT
        #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destined for other services)
        -A INPUT -p tcp --dport 8443 -j ACCEPT
        #Accept redirected DNS traffic for NSD authoritative nameserver
        -A INPUT -p udp --dport 8053 -j ACCEPT
        #Accept redirected SSH traffic for OpenSSH server
        #Temp solution:
        -A INPUT -p tcp --dport 8022 -j ACCEPT
        #Ideal solution:
        #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
        #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
        #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
        COMMIT

        *mangle
        #Flush previous rules, chains and counters in the 'mangle' table
        -F
        -X
        -Z
        COMMIT
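
    On the two commented-out lines: --state belongs to the state match, so it needs -m state in front of it, and the rule that records the connection in the recent list should not also be the accept rule. A hedged sketch of the usual -m recent pattern (the list name SSH and the placement directly in INPUT are assumptions):

        #Track new connections to the redirected SSH port and drop bursts
        -A INPUT -p tcp --dport 8022 -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --set
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --update --seconds 600 --hitcount 11 -j DROP
        -A INPUT -p tcp --dport 8022 -m state --state NEW -j ACCEPT

    Established connections are accepted outright, so only the rate of new connections is limited to roughly 10 per 10 minutes per source address.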

    Read the article

  • Where is my VMware-ws FreeNAS CIFS(ZFS) bottle-neck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm so far generally happy with things, it was just that I was expecting a little better IO performance. I have no clue if my expectations are unreal. The NAS is there as a general purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement. Question: Where is my bottle-neck, and is there anything I can do config wise, to improve my performance? I'm thinking VM-disk settings could be something but I really have no idea where to go since I'm neither experienced with FreeNAS nor VMware-WS. Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop laptop, wired to the network, I get about the same specs. The test I've done are with a 16 GB ISO, and with about 200 MB of RARs and I've tried avoiding the RAM-cache by reading different files than the ones I'm writing ( 10 GB). It feels like having less CPU cores is a lot more efficient, since the resource manager in Windows reports less CPU-usage. With 4 cores in VMware, CPU usage was 50-80%, with 1 core it was 25-60%. EDIT: HD ActiveTime was quite high on SSD so I moved the page file, disabled hibernate and enabled Win DiskCache both on SSD and RAID. This resulted in no real performance difference for one file, but if i transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The ActiveTime avg also went down a lot (to ~20%) but has now higher bursts. DiskIO is on ~ 30-35 Mbytes/s avgs, with ~100Mb bursts. Network is on 200-250Mbits/s with ~45 active TCP connections. Hardware Asus F2A85-M Pro A10-5700 16GB DDR3 1600 OCZ Vertex 2 128GB SSD 2x Generic 1tb 7200 RPM drives as RAID0 (in win7) Intel Gigabit Desktop CT Software Host OS: Win7 (SSD) VMware Worksation 9 (SSD) FreeNAS 8.3 VM (20GB VDisk on SSD) CPU: I've tried 1, 2 and 4 cores. Virtualisation engine, Preferred mode: Automatic 10,24Gb ram 50Gb SCSI VDisk on the RAID0, VDisk is formatted as ZFS and exposed through CIFS through FreeNAS. NIC Bridge, Replicate physical network state Below are two typical process print-outs while I'm transfering one file to the CIFS share. last pid: 2707; load averages: 0.60, 0.43, 0.24 up 0+00:07:05 00:34:26 32 processes: 2 running, 30 sleeping Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 102 0 50164K 10364K RUN 0:25 25.98% smbd 1897 root 6 44 0 168M 74808K uwait 0:02 0.00% python last pid: 2746; load averages: 0.93, 0.60, 0.33 up 0+00:08:53 00:36:14 33 processes: 2 running, 31 sleeping Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 76 0 50164K 10364K RUN 0:52 16.99% smbd 1897 root 6 44 0 168M 74816K uwait 0:02 0.00% python I'm sorry if my question isn't phrased right, I'm really bad at these kind of things, and it is the first time I post here at SU. I also appreciate any other suggestions to something, I could have missed.

    Read the article

  • suddenly can't connect to router

    - by Khoi
    I was just downloading some stuff in ubuntu and snap, the connection cut and I can't even connect to my router. And the router, it still works fine, my laptop can connect wirelessly to it as usual. But my main computer (which connects to it directly through cable) can't even ping it. Here is my ipconfig:

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : vento
        Primary Dns Suffix  . . . . . . . :
        Node Type . . . . . . . . . . . . : Unknown
        IP Routing Enabled. . . . . . . . : No
        WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Local Area Connection:
        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : Realtek RTL8169/8110 Family Gigabit Ethernet NIC
        Physical Address. . . . . . . . . : 00-19-DB-4E-6C-56

        Ethernet adapter {15B1F740-2F35-4FE4-9FEE-4052AFBAD096}:
        Media State . . . . . . . . . . . : Media disconnected
        Description . . . . . . . . . . . : Anchorfree HSS Adapter - Packet Scheduler Miniport
        Physical Address. . . . . . . . . : 00-FF-15-B1-F7-40

    Read the article

  • Weblogic JDBC datasource,java.sql.SQLException: Cannot obtain XAConnection weblogic.common.resourcep

    - by gauravkarnatak
    I am using weblogic JDBC datasource and my DB is oracle 10g,below is the configuration. It used to work fine but suddenly it started giving problem,please see below exception. Weblogic JDBC datasource,java.sql.SQLException: Cannot obtain XAConnection weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool <?xml version="1.0" encoding="UTF-8"?> <jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/90" xmlns:sec="http://www.bea.com/ns/weblogic/90/security" xmlns:wls="http://www.bea.com/ns/weblogic/90/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/920 http://www.bea.com/ns/weblogic/920.xsd" XL-Reference-DS jdbc:oracle:oci:@abc.COM oracle.jdbc.driver.OracleDriver user DEV_260908 password password dll ocijdbc10 protocol oci oracle.jdbc.V8Compatible true baseDriverClass oracle.jdbc.driver.OracleDriver 1 100 1 true SQL SELECT 1 FROM DUAL DataJndi OnePhaseCommit This exception is coming on dev environment where connected user is only one. I know, this is related to pool max size but I also suspect this could be due to oracle,might be oracle isn't able to create connections. below are my queries Is there any debug/logging parameter to enable datasource logging,so that I can check no of connections acquired,released and unused in logs ? How to check oracle connection limit for a particular user ?
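
    For the second question, a hedged pair of data-dictionary queries (run as a DBA user; the username is taken from the configuration above and is otherwise an assumption):

        -- sessions the pool user is currently holding
        SELECT username, COUNT(*) AS sessions
        FROM   v$session
        WHERE  username = 'DEV_260908'
        GROUP  BY username;

        -- per-user session cap, if one is set on the account's profile
        SELECT p.limit
        FROM   dba_profiles p
               JOIN dba_users u ON u.profile = p.profile
        WHERE  u.username      = 'DEV_260908'
        AND    p.resource_name = 'SESSIONS_PER_USER';

    The instance-wide caps can be read in SQL*Plus with "show parameter sessions" and "show parameter processes"; if the pool's maximum capacity (the 100 in the stripped config above appears to be it) plus everything else connecting to the instance exceeds those, the pool can fail to grow even though only one logical user is involved.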

    Read the article

  • Using a Mac for cross platform development?

    - by mdec
    Who uses Macs for cross-platform development? By cross platform I essentially mean you can compile to target Windows or Unix (not necessarily both at the same time). I understand that this also has a lot to do with writing portable code, but I am more interested in people's experience with Mac OS X to develop software. I understand that there are a range of IDEs to choose from, I would probably use Eclipse (I like the GCC toolchain) however Xcode seems to be quite popular. Could it be used as described above? At a pinch I could always virtualise with VirtualBox or VMware Player or parallels to use Visual Studio (or dual boot for that matter). Having said that I am open to any other suggested compilers (with preferably an IDE that uses GCC.) Also with the range of Macs available, which one would you recommend? I would prefer a laptop (as I already have a desktop) but am unsure of reasonable specifications. If you are currently using a Mac to do development, I would love to hear what you develop on your Mac and what you like and don't like about it. I would primarily be developing in C/C++/Java. I am also looking to experiment with Boost and Qt, so I'm interested in hearing about any (potential) compatibility issues. If you have any other tips I'd love you hear what you have to say.

    Read the article

  • WCF push to client through firewall?

    - by Sire
    See also How does a WCF server inform a WCF client about changes? (Better solution then simple polling, e.g. Coment or long polling) I need to use push-technology with WCF through client firewalls. This must be a common problem, and I know for a fact it works in theory (see links below), but I have failed to get it working, and I haven't been able to find a code sample that demonstrates it. Requirements: WCF Clients connects to server through tcp port 80 (netTcpBinding). Server pushes back information at irregular intervals (1 min to several hours). Users should not have to configure their firewalls, server pushes must pass through firewalls that have all inbound ports closed. TCP duplex on the same connection is needed for this, a dual binding does not work since a port has to be opened on the client firewall. Clients sends heartbeats to server at regular intervals (perhaps every 15 mins) so server knows client is still alive. Server is IIS7 with WAS. The solution seems to be duplex netTcpBinding. Based on this information: WCF through firewalls and NATs Keeping connections open in IIS But I have yet to find a code sample that works.. I've tried combining the "Duplex" and "TcpActivation" samples from Microsoft's WCF Samples without any luck. Please can someone point me to example code that works, or build a small sample app. Thanks a lot!

    Read the article

  • Getting Oracle's MD5 to match PHP's MD5

    - by Zenshai
    Hi all, I'm trying to compare an MD5 checksum generated by PHP to one generated by Oracle 10g. However it seems I'm comparing apples to oranges. Here's what I did to test the comparison:

        //md5 tests
        //php md5
        print md5('testingthemd5function');
        print '<br/><br/>';

        //oracle md5
        $md5query = "select md5hash('testingthemd5function') from dual";
        $stid = oci_parse($conn, $md5query);
        if (!$stid) {
            $e = oci_error($conn);
            print htmlentities($e['message']);
            exit;
        }
        $r = oci_execute($stid, OCI_DEFAULT);
        if (!$r) {
            $e = oci_error($stid);
            echo htmlentities($e['message']);
            exit;
        }
        $row = oci_fetch_row($stid);
        print $row[0];

    The md5 function (seen in the query above) in Oracle uses the 'dbms_obfuscation_toolkit.md5' package(?) and is defined like this:

        CREATE OR REPLACE FUNCTION PORTAL.md5hash (v_input_string in varchar2)
        return varchar2
        is
            v_checksum varchar2(20);
        begin
            v_checksum := dbms_obfuscation_toolkit.md5 (input_string => v_input_string);
            return v_checksum;
        end;

    What comes out on my PHP page is:

        29dbb90ea99a397b946518c84f45e016
        )Û¹©š9{”eÈOEà

    Can anyone help me in getting the two to match?
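
    The mismatch is usually that dbms_obfuscation_toolkit.md5 returns the 16 raw digest bytes stuffed into a VARCHAR2, while PHP's md5() returns the lowercase hex encoding of those bytes. A hedged sketch of the usual fix is to hex-encode on the Oracle side before comparing:

        CREATE OR REPLACE FUNCTION md5hash (v_input_string IN VARCHAR2)
        RETURN VARCHAR2
        IS
        BEGIN
            -- RAWTOHEX turns the 16 digest bytes into the 32-character hex string
            -- PHP produces; LOWER matches PHP's lowercase output
            RETURN LOWER(RAWTOHEX(UTL_RAW.CAST_TO_RAW(
                       dbms_obfuscation_toolkit.md5(input_string => v_input_string))));
        END;

    With that change, select md5hash('testingthemd5function') from dual should return the same 29dbb90ea99a397b946518c84f45e016 string as PHP.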

    Read the article

  • concatenate rows of Clob with plsql

    - by david K
    Hi, let's consider a table that has an id and a clob content, like:

        create table v_EXAMPLE_L (
            nip        number,
            xmlcontent clob
        );

    We insert our data:

        Insert into V_EXAMPLE_L (NIP,XMLCONTENT) values (17852,'delta548484646846484');
        Insert into V_EXAMPLE_L (NIP,XMLCONTENT) values (17852,'omega545648468484');
        Insert into V_EXAMPLE_L (NIP,XMLCONTENT) values (17852,'gamma54564846qsdqsdqsdqsd8484');

    I'm trying to write a function that concatenates the clob rows returned by a select. I mean without having to pass multiple parameters about the name of the table or such - I should only give it the query that returns the clob column, and it should handle the rest:

        CREATE OR REPLACE function assemble_clob(q varchar2) return clob is
            v_clob   clob;
            tmp_lob  clob;
            hold     VARCHAR2(4000);
            --cursor c2 is select xmlcontent from V_EXAMPLE_L where id=17852
            cur      sys_refcursor;
        begin
            OPEN cur FOR q;
            LOOP
                FETCH cur INTO tmp_lob;
                EXIT WHEN cur%NOTFOUND;
                --v_clob := v_clob || XMLTYPE.getClobVal(tmp_lob.xmlcontent);
                v_clob := v_clob || tmp_lob;
            END LOOP;
            return (v_clob);
            --return (dbms_xmlquery.getXml( dbms_xmlquery.set_context("Select 1 from dual")) )
        end assemble_clob;

    The function is broken ... (if anybody could give me some help, thanks a lot - I'm a noob in SQL). Thanks.
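
    A hedged sketch of one way to make this more robust is to initialise the result as a temporary LOB and append into it with DBMS_LOB, rather than relying on || concatenation starting from a NULL clob; the query is assumed to return exactly one clob column:

        CREATE OR REPLACE FUNCTION assemble_clob(q VARCHAR2) RETURN CLOB IS
            v_clob  CLOB;
            tmp_lob CLOB;
            cur     SYS_REFCURSOR;
        BEGIN
            -- allocate a temporary clob to accumulate the pieces into
            DBMS_LOB.CREATETEMPORARY(v_clob, TRUE);
            OPEN cur FOR q;
            LOOP
                FETCH cur INTO tmp_lob;
                EXIT WHEN cur%NOTFOUND;
                DBMS_LOB.APPEND(v_clob, tmp_lob);
            END LOOP;
            CLOSE cur;
            RETURN v_clob;
        END assemble_clob;

    It would then be called with something like select assemble_clob('select xmlcontent from v_example_l where nip = 17852') from dual.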

    Read the article

  • How do I declare an IStream in idl so visual studio maps it to s.w.interop.comtypes?

    - by Grahame Grieve
    hi I have a COM object that takes needs to take a stream from a C# client and processes it. It would appear that I should use IStream. So I write my idl like below. Then I use MIDL to compile to a tlb, and compile up my solution, register it, and then add a reference to my library to a C# project. Visual Studio creates an IStream definition in my own library. How can I stop it from doing that, and get it to use the COMTypes IStream? It seems there would be one of 3 answers: add some import to the idl so it doesn't redeclare IStream (importing MSCOREE does that, but doesn't solve the C# problem) somehow alias the IStream in visual studio - but I don't see how to do this. All my thinking i s completely wrong and I shouldn't be using IStream at all help...thanks [ uuid(3AC11584-7F6A-493A-9C90-588560DF8769), version(1.0), ] library TestLibrary { importlib("stdole2.tlb"); [ uuid(09FF25EC-6A21-423B-A5FD-BCB691F93C0C), version(1.0), helpstring("Just for testing"), dual, nonextensible, oleautomation ] interface ITest: IDispatch { [id(0x00000006),helpstring("Testing stream")] HRESULT _stdcall LoadFromStream([in] IStream * stream, [out, retval] IMyTest ** ResultValue); }; [ uuid(CC2864E4-55BA-4057-8687-29153BE3E046), noncreatable, version(1.0) ] coclass HCTest { [default] interface ITest; }; };

    Read the article

  • SimpleJdbcCall ignoring JdbcTemplate fetch size

    - by user289429
    We are calling the pl/sql stored procedure through Spring SimpleJdbcCall, the fetchsize set on the JdbcTemplate is being ignored by SimpleJdbcCall. The rowmapper resultset fetch size is set to 10 even though we have set the jdbctemplate fetchsize to 200. Any idea why this happens and how to fix it? Have printed the fetchsize of resultset in the rowmapper in the below code snippet - Once it is 200 and other time it is 10 even though I use the same JdbcTemplate on both occassion. This direct execution through jdbctemplate returns fetchsize of 200 in the row mapper jdbcTemplate = new JdbcTemplate(ds); jdbcTemplate.setResultsMapCaseInsensitive(true); jdbcTemplate.setFetchSize(200); List temp = jdbcTemplate.query("select 1 from dual", new ParameterizedRowMapper() { public Object mapRow(ResultSet resultSet, int i) throws SQLException { System.out.println("Direct template : " + resultSet.getFetchSize()); return new String(resultSet.getString(1)); } }); This execution through SimpleJdbcCall is always returning fetchsize of 10 in the rowmapper jdbcCall = new SimpleJdbcCall(jdbcTemplate).withSchemaName(schemaName) .withCatalogName(catalogName).withProcedureName(functionName); jdbcCall.returningResultSet((String) outItValues.next(), new ParameterizedRowMapper<Map<String, Object>>() { public Map<String, Object> mapRow(ResultSet rs, int row) throws SQLException { System.out.println("Through simplejdbccall " + rs.getFetchSize()); return extractRS(rs, row); } }); outputList = (List<Map<String, Object>>) jdbcCall.executeObject(List.class, inParam);

    Read the article

  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running XNA app, using Reach profile, in VMWare Fusion host OS Mac OSX, VM is Windows XP SP 3 (my dual-boot OS). Running on MacBook Pro w/NVidia 320M graphics card When I am booted in to XP natively, my code works. The code is drawing cubes that are set up using vertex buffers When another friend runs this same code on Windows 7, it also works for him just fine When I am running my code in the VM, it doesn't work. I have billboarding sprites running in a shader program and this part displays fine. I get no crashing or errors, the geometry just doesn't appear. I tried Debug and Release. This is very basic operation so I'm thinking VMWare isn't the problem, but it's my code.... My init code: var vertexArray = verts.ToArray(); var indexArray = indices.ToArray(); indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16), indexArray.Length, BufferUsage.WriteOnly); indexBuffer.SetData(indexArray); vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor), vertexArray.Length, BufferUsage.WriteOnly); vertexBuffer.SetData(vertexArray); My Draw code: // problem isn't here, tried no cull GraphicsDevice.RasterizerState = RasterizerState.CullClockwise; GraphicsDevice.BlendState = BlendState.AlphaBlend; GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true }; // Update View and Projection TileEffect.View = ((Game1)Game).Camera.View; TileEffect.Projection = ((Game1)Game).Camera.Projection; TileEffect.CurrentTechnique.Passes[0].Apply(); GraphicsDevice.SetVertexBuffer(vertexBuffer); GraphicsDevice.Indices = indexBuffer; GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, indices.Count, 0, indices.Count / 3); For LoadContent: TileEffect = new BasicEffect(GraphicsDevice) { World = Matrix.Identity, View = ((Game1)Game).Camera.View, Projection = ((Game1)Game).Camera.Projection, VertexColorEnabled = true };

    Read the article

  • Frame Accurate Browser Launchable Video Player ... ?

    - by cliftonc
    I have a requirement where I need to enable playback (full screen) of a h.264 MPEG4 (thanks for the correction!) video from a local network, launchable from a browser link on a Windows workstation, and be frame accurate. By frame accurate I mean that I need to be able to scrub through the video in the same way you would with a vtr, stop at a frame, and then move backwards and forwards frame by frame (it is for a very specific compliance requirement where have to be able to check every frame if there is something that is potentially against broadcasting guidelines). The application itself is used to capture notes while viewing the material, so the end model is for a dual monitor workstation, with a web form in one, the video playing full screen in the second (no issue launching the video and manually having to move it to the second screen), and then the user controls the video via keyboard shortcuts or a jog shuttle. I have looked at QT, but the java bindings seem to be dead or nearly so, flash isn't frame accurate, VLC given its streaming heritage seems to be only able to move forward by a frame and not backwards, and all I have left are commercial offerings that in my experience are difficult and expensive to change. Any ideas of where I should look or alternative options? Any advice appreciated!

    Read the article

  • MySQL get list of unique items in a SET

    - by The Disintegrator
    I have a products table with a column of type SET (called especialidad), with these possible values:

        [0]  => NADA
        [1]  => Freestyle / BMX
        [2]  => Street / Dirt
        [3]  => XC / Rural Bike
        [4]  => All Mountain
        [5]  => Freeride / Downhill / Dual / 4x
        [6]  => Ruta / Triathlon / Pista
        [7]  => Comfort / City / Paseo
        [8]  => Kids
        [9]  => Playera / Chopper / Custom
        [10] => MTB recreacion
        [11] => Spinning / Fitness

    Any given product can have one or many of these, i/e "Freestyle / BMX,Street / Dirt". Given a subset of the rows, I need to get a list of all the present "especialidad" values. But I need a list to be exploded and unique:

        Article1: "Freestyle / BMX,Street / Dirt"
        Article2: "Street / Dirt,Kids"
        Article2: "Kids"
        Article4: "Street / Dirt,All Mountain"
        Article5: "Street / Dirt"

    I need a list like this:

        Freestyle / BMX
        Street / Dirt
        Kids
        All Mountain

    I tried with group_concat(UNIQUE) but I get a list of the permutations...
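
    One hedged approach (the table and column names follow the question, and the member list has to be written out by hand) is to test each SET member with FIND_IN_SET against the filtered rows, so only members that actually occur survive:

        SELECT m.member
        FROM (
            SELECT 'Freestyle / BMX' AS member
            UNION ALL SELECT 'Street / Dirt'
            UNION ALL SELECT 'Kids'
            UNION ALL SELECT 'All Mountain'
            -- ... repeat for the remaining SET members
        ) AS m
        WHERE EXISTS (
            SELECT 1
            FROM   products p
            WHERE  FIND_IN_SET(m.member, p.especialidad)
            -- add here the same WHERE clause that defines the subset of rows
        );

    FIND_IN_SET works against a SET column because, in a string context, the column evaluates to its comma-separated member list.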

    Read the article

  • Ideal Multi-Developer Lamp Stack?

    - by devians
    I would like to build an 'ideal' lamp development stack. Dual Server (Virtualised, ESX) Apache / PHP on one, Databases (MySQL, PgSQL, etc) on the other. User (Developer) Manageable mini environments, or instance. Each developer instance shares the top level config (available modules and default config etc) A developer should have control over their apache and php version for each project. A developer might be able to change minor settings, ie magicquotes on for legacy code. Each project would determine its database provider in its code The idea is that it is one administrate-able server that I can control, and provide globally configured things like APC, Memcached, XDebug etc. Then by moving into subsets for each project, i can allow my users to quickly control their environments for various projects. Essentially I'm proposing the typical system of a developer running their own stack on their own machine, but centralised. In this way I'd hope to avoid problems like Cross OS code problems, database inconsistencies, slightly different installs producing bugs etc. I'm happy to manage this in custom builds from source, but if at all possible it would be great to have a large portion of it managed with some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?

    Read the article

  • Programming Technique: How to create a simple card game

    - by Shyam
    Hi, As I am learning the Ruby language, I am getting closer to actual programming. So I was thinking of creating a simple card game. My question isn't Ruby orientated, but I do know want to learn how to solve this problem with a genuine OOP approach. In my card game I want to have four players. Using a standard deck with 52 cards, no jokers/wildcards. In the game I won't use the Ace as a dual card, it is always the highest card. So, the programming problems I wonder about are the following: How can I sort/randomize the deck of cards? There are four types, each having 13 values. Eventually there can be only unique values, so picking random values could generate duplicates. How can I implement a simple AI? As there are tons of card games, someone would have figured this part out already, so references would be great. I am a truly Ruby nuby, and my goal here is to learn to solve problems, so pseudo code would be great, just to understand how to solve the problem programmatically. I apologize for my grammar and writing style if it's unclear, for it is not my native language. Also pointers to sites where such challenges are explained, would be a great resource! Thank you for your comments, answers and feedback!
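
    For the shuffling part, building the deck as the product of suits and ranks guarantees 52 unique cards, so randomizing the order can never produce duplicates. A small hedged Ruby sketch (names are illustrative only):

        SUITS = %w[hearts diamonds clubs spades]
        RANKS = %w[2 3 4 5 6 7 8 9 10 J Q K A]   # Ace kept as the highest card

        deck = SUITS.product(RANKS)    # 52 unique [suit, rank] pairs
        deck.shuffle!                  # randomize in place - no duplicates possible

        hands = deck.each_slice(13).to_a   # deal 13 cards to each of four players

    For the AI, a common starting point is a simple rule-based player (for example: play the lowest card that wins the current trick, otherwise the lowest card), which can be refined once the rules of your particular game are fixed.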

    Read the article

  • Always get exception when trying to Fill data to DataTable

    - by Sambath
    The code below is just a test to connect to an Oracle database and fill data to a DataTable. After executing the statement da.Fill(dt);, I always get the exception "Exception of type 'System.OutOfMemoryException' was thrown.". Has anyone met this kind of error? My project is running on VS 2005, and my Oracle database version is 11g. My computer is using Windows Vista. If I copy this code to run on Windows XP, it works fine. Thank you. using System.Data; using Oracle.DataAccess.Client; ... string cnString = "data source=net_service_name; user id=username; password=xxx;"; OracleDataAdapter da = new OracleDataAdapter("select 1 from dual", cnString); try { DataTable dt = new DataTable(); da.Fill(dt); // Got error here Console.Write(dt.Rows.Count.ToString()); } catch (Exception e) { Console.Write(e.Message); // Exception of type 'System.OutOfMemoryException' was thrown. } Update I have no idea what happens to my computer. I just reinstall Oracle 11g, and then my code works normally.

    Read the article

  • F#: Tell me what I'm missing about using Async.Parallel

    - by JBristow
    ok, so I'm doing ProjectEuler Problem #14, and I'm fiddling around with optimizations in order to feel f# out. in the following code: let evenrule n = n / 2L let oddrule n = 3L * n + 1L let applyRule n = if n % 2L = 0L then evenrule n else oddrule n let runRules n = let rec loop a final = if a = 1L then final else loop (applyRule a) (final + 1L) n, loop (int64 n) 1L let testlist = seq {for i in 3 .. 2 .. 1000000 do yield i } let getAns sq = sq |> Seq.head let seqfil (a,acc) (b,curr) = if acc = curr then (a,acc) else if acc < curr then (b,curr) else (a,acc) let pmap f l = seq { for a in l do yield async {return f a} } |> Seq.map Async.RunSynchronously let pmap2 f l = seq { for a in l do yield async {return f a} } |> Async.Parallel |> Async.RunSynchronously let procseq f l = l |> f runRules |> Seq.reduce seqfil |> fst let timer = System.Diagnostics.Stopwatch() timer.Start() let ans1 = testlist |> procseq Seq.map // 837799 00:00:08.6251990 printfn "%A\t%A" ans1 timer.Elapsed timer.Reset() timer.Start() let ans2 = testlist |> procseq pmap printfn "%A\t%A" ans2 timer.Elapsed // 837799 00:00:12.3010250 timer.Reset() timer.Start() let ans3 = testlist |> procseq pmap2 printfn "%A\t%A" ans3 timer.Elapsed // 837799 00:00:58.2413990 timer.Reset() Why does the Async.Parallel code run REALLY slow in comparison to the straight up map? I know I shouldn't see that much of an effect, since I'm only on a dual core mac. Please note that I do NOT want help solving problem #14, I just want to know what's up with my parallel code.
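
    One hedged observation: each async here wraps a single, very cheap runRules call, so the queueing and scheduling overhead of Async.Parallel can easily outweigh the work itself (and pmap, which does Seq.map Async.RunSynchronously, still runs the items one after another). A sketch of an alternative mapper that lets the library partition the array into larger chunks per core, assuming .NET 4 / F# 2.0 where Array.Parallel is available:

        // drop-in alternative to pmap/pmap2, used the same way with procseq
        let pmap3 f l =
            l |> Seq.toArray |> Array.Parallel.map f |> Array.toSeq

        // let ans4 = testlist |> procseq pmap3

    If that still doesn't help, batching the range into a handful of explicit chunks (one async per chunk rather than one per number) is the usual next step.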

    Read the article

  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features, such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far! The problem is I have quite a bit of training data. I show it over 100 examples of what a car looks like. Then 100 examples of what a person looks like. Then over 100 of what a dog looks like, etc. This is quite a bit of training data! Currently I am running at about one week to train the network. This is kind of killing my progress, as I need to adjust and retrain. I am using Neuroph, as the low-level neural network API. I am running a dual-quadcore machine(16 cores with hyperthreading), so this should be fast. My processor percent is at only 5%. Are there any tricks on Neuroph performance? Or Java peroformance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but do not know a great deal about performance programming.
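
    Since the 100+ feature networks are independent, one hedged way to use the idle cores is to train them concurrently with a thread pool; FeatureNet and trainOneNetwork below are hypothetical stand-ins for the application's own type and its existing Neuroph training call:

        // minimal sketch, assuming each network can be trained in isolation
        void trainAll(List<FeatureNet> featureNetworks) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            List<Future<?>> jobs = new ArrayList<Future<?>>();
            for (final FeatureNet net : featureNetworks) {
                jobs.add(pool.submit(new Runnable() {
                    public void run() { trainOneNetwork(net); }   // existing training code
                }));
            }
            for (Future<?> job : jobs) {
                job.get();   // block until every network has finished training
            }
            pool.shutdown();
        }

    At 5% CPU the bottleneck may also be elsewhere (for example disk I/O while loading the examples), so profiling a single training run before and after parallelising is worth the time.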

    Read the article

  • Blocking on DBCP connection pool (open and close connnection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (used to run on JBoss, Weblogic, etc.). While running load tests we experience significant performance problems with handling JMS messages (queues). Problem was localized to blocking on database connection pool getting or releasing connection to the pool. Blocking prevented concurrent MDB instances (threads) from running hence performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of thread blocked: Name: JMS Resource Adapter-worker-23 State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a owned by: JMS Resource Adapter-worker-19 Total blocked: 18,426 Total waited: 0 Stack trace: org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916) org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91) - locked org.apache.commons.dbcp.PoolableConnection@1bcba8 org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147) com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290) .... Couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions though (and we can observe blocking there as well). Are there any DBCP configurations that may fix blocking? Is DBCP connection pool implementation replaceable in OpenEJB? How easy (difficult) to replace it with another library? Just in case this is how we define data source in OpenEJB (openejb.xml): <Resource id="MyDataSource" type="DataSource"> JdbcDriver oracle.jdbc.driver.OracleDriver JdbcUrl ${oracle.jdbc} UserName ${oracle.user} Password ${oracle.password} JtaManaged true InitialSize 5 MaxActive 30 ValidationQuery SELECT 1 FROM DUAL TestOnBorrow true </Resource>
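
    The blocking on GenericObjectPool.returnObject points at contention on the pool's single lock, which gets worse when validation or eviction work runs while that lock is held. A hedged tuning sketch for the same Resource - the property names follow the DBCP-style names already used in this file, but whether a given OpenEJB build honours each of them is an assumption that should be verified:

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            InitialSize 10
            MaxActive 60
            MaxIdle 60
            MaxWait 10000
            TestOnBorrow false
            TestWhileIdle true
            TimeBetweenEvictionRunsMillis 30000
            ValidationQuery SELECT 1 FROM DUAL
        </Resource>

    Moving validation off the borrow/return path (TestOnBorrow false, TestWhileIdle true) keeps the critical section shorter; if contention persists, swapping in a pool implementation designed for higher concurrency (the question's point 3) is the usual next step.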

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we about virtual functions in C++, and how they are resolved (or found, or matched, I don't know what the terminology is -- we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program: #include <iostream> #include <limits.h> using namespace std; class A { public: void f() { // do nothing } }; class B: public A { public: void f() { // do nothing } }; int main() { unsigned int i; A *a = new B; for (i=0; i < UINT_MAX; i++) a->f(); return 0; } Where I made A::f() once normal, once virtual. Here are my results: [felix@the-machine C]$ time ./normal real 0m25.834s user 0m25.742s sys 0m0.000s [felix@the-machine C]$ time ./virtual real 0m24.630s user 0m24.472s sys 0m0.003s [felix@the-machine C]$ time ./normal real 0m25.860s user 0m25.735s sys 0m0.007s [felix@the-machine C]$ time ./virtual real 0m24.514s user 0m24.475s sys 0m0.000s [felix@the-machine C]$ time ./normal real 0m26.022s user 0m25.795s sys 0m0.013s [felix@the-machine C]$ time ./virtual real 0m24.503s user 0m24.468s sys 0m0.000s There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core pentium @ 2.80Ghz, no extra applications running between two tests. Archlinux with gcc 4.5.0. Compiling normally, like: $ g++ test.cpp -o normal Also, -Wall doesn't spit out any warnings, either.

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small prog to parse server log files. I tested it with a log file with several thousand requests (between 10000 - 20000, don't know exactly). What I have to do is to load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most cpu time are these (worst culprits first):

        string.split - splits the line values into an array of values
        string.contains - checking if the user agent contains a specific agent string (determine browser ID)
        string.tolower - various purposes
        streamreader.readline - to read the log file line by line
        string.startswith - determine if line is a column definition line or a line with values

    There were some others that I was able to replace. For example the dictionary getter was taking lots of resources too, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some cpu time.

    Now I am running on a fast dual core and the total time it takes to load the file I mentioned is about 1 sec. Now this is really bad. Imagine a site that has tens of thousands of visits a day. It's going to take minutes to load the log file. So what are my alternatives? If any, cause I think this is just a .net limitation and I can't do much about it.
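
    A hedged sketch of the kind of changes that usually help here (the field layout and the agent check are assumptions, not the original program): avoid ToLower entirely by using case-insensitive comparisons, limit the split to the fields actually queried, and keep streaming line by line:

        // minimal sketch, not the original program's structure
        using (var reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // column-definition lines (assumed to start with '#') are skipped
                if (line.StartsWith("#", StringComparison.Ordinal))
                    continue;

                // cap the split so only the fields you need get allocated
                string[] fields = line.Split(new[] { ' ' }, 12);

                // no ToLower: compare case-insensitively instead
                bool isFirefox = fields[9].IndexOf("Firefox",
                        StringComparison.OrdinalIgnoreCase) >= 0;
            }
        }

    Parsing 10-20k lines in about a second is also slow enough that measuring a release build outside the debugger is worthwhile before concluding it's a .NET limit.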

    Read the article

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this: find $BASEDIR -iname "*png" -exec pngout {} \; And then I looked at my CPU monitor and noticed only one of the core was used, which is quite sad. In this day and age of dual, quad, octo and hexa (?) cores desktop, how do I simply parallelize this task with Bash? (it's not the first time I've had such a need, for quite a lot of these utils are mono-threaded... I already had the case with mp3 encoders). Would simply running all the pngout in the background do? How would my find command look like then? (I'm not too sure how to mix find and the '&' character) I if have three hundreds pictures, this would mean swapping between three hundreds processes, which doesn't seem great anyway!? Or should I copy my three hundreds files or so in "nb dirs", where "nb dirs" would be the number of cores, then run concurrently "nb finds"? (which would be close enough) But how would I do this?
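
    A hedged sketch of the usual pattern, assuming GNU findutils: xargs -P fans the work out across a fixed number of worker processes, -n 1 gives each pngout invocation a single file, and -print0/-0 keep unusual file names safe:

        find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -n 1 -P 4 pngout

    With -P 4 there are never more than four concurrent pngout processes, so there is no need to background hundreds of jobs or split the files into per-core directories.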

    Read the article
