Search Results

Search found 641 results on 26 pages for 'osb ha'. (Page 21 of 26.)

  • Avoiding DNS timeouts when a DNS server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks have handled this issue. I can see 3 possible types of solutions:
      1. Use Linux-HA/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
      2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
      3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" DNS servers.
    Anycasting seems to work only at the IP routing level, and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
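    For reference, the resolver knobs mentioned above live in /etc/resolv.conf. A minimal sketch (the nameserver addresses are placeholders) that shortens the worst-case stall, though it cannot eliminate it entirely:

        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        options timeout:1 attempts:2 rotate

    With timeout:1 a query to a dead server stalls for one second instead of the default five before the resolver moves on, and rotate spreads the load (and the pain) across the server list.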

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The VC is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs were down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host and create a regular vSwitch, and then connect the VC to that before the VC would connect to the network. Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems to be a very bad thing. I thought networking would keep working even though the VC is down, because the hosts have the vDS configuration cached. Is there a better way to configure it to prevent this from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick
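    For what it's worth, the manual recovery described above can be scripted from the host console; a hedged sketch using the classic esxcfg tools (the vSwitch, uplink, and port group names are placeholders):

        # create a standard vSwitch, give it an uplink, and add a port group
        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -L vmnic1 vSwitch1
        esxcfg-vswitch -A "VC-Recovery" vSwitch1

    After that, the VC VM's NIC can be moved onto the "VC-Recovery" port group so vCenter can come back up and reconcile the vDS.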

  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it):

        user2  -  core      unlimited
        *      -  core      0
        root   -  core      0
        *      -  rss       512000
        root   -  rss       512000
        *      -  nproc     100
        root   -  nproc     100
        *      -  maxlogins 1
        root   -  maxlogins 1

    I run a program as user2 (./programname) but /proc/3498/limits says cores are disabled:

        Limit                     Soft Limit           Hard Limit           Units
        Max cpu time              unlimited            unlimited            seconds
        Max file size             unlimited            unlimited            bytes
        Max data size             unlimited            unlimited            bytes
        Max stack size            8388608              unlimited            bytes
        Max core file size        0                    0                    bytes
        Max resident set          524288000            524288000            bytes
        Max processes             100                  100                  processes
        Max open files            1024                 1024                 files
        Max locked memory         65536                65536                bytes
        Max address space         unlimited            unlimited            bytes
        Max file locks            unlimited            unlimited            locks
        Max pending signals       14001                14001                signals
        Max msgqueue size         819200               819200               bytes
        Max nice priority         0                    0
        Max realtime priority     0                    0
        Max realtime timeout      unlimited            unlimited            us

    Both ulimit -Sa and ulimit -Ha output that cores are disabled:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 14001
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) 512000
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) unlimited
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 100
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Why are cores disabled?
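    One quick differential test (a hedged sketch; programname is the placeholder from the question): try to raise the limit in the invoking shell before starting the program:

        ulimit -H -c             # the hard core limit this login actually received
        ulimit -S -c unlimited   # try to raise the soft limit up to the hard one
        ./programname

    If the first command prints 0 and the second fails with "Operation not permitted", then the wildcard "* - core 0" line is winning over the user2 line for this login path, and the thing to chase is pam_limits (is pam_limits.so present in the relevant /etc/pam.d service file?) rather than the program itself.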

  • Edit text files over SSH using a local text editor

    - by Mikko Ohtamaa
    I am working in various Linux and UNIX environments. I'd like to elegantly solve the problem of editing remote configuration files over SSH. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this. Using editors over X11 forwarding has also proven to be problematic. Archaic text editors like Vim or Emacs could do this, but they do not serve my needs well; I prefer other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered by the remote side. So what I hope to achieve:
      1. Have some kind of easily deployable shell script etc. which I can copy to the remote server (let's call it mooedit).
      2. I run the mooedit command on the remote server to which I have connected over SSH.
      3. mooedit sends some kind of signal (over SSH) to my local desktop.
      4. On my local desktop this signal is captured and it determines "a-ha! mooedit wants to edit a file on server X in folder Y".
      5. The file is SFTP-transferred to the local desktop (/tmp).
      6. The file is opened in a nice GUI text editor on the local desktop.
      7. When Save is pressed, the local desktop notices the change in the file and SFTP-transfers the resulting file back to the server.
    The question is: what signaling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for a remote SSH file?
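    One existing implementation of exactly this flow is the rmate/rsub protocol (rmate is TextMate's remote helper; rsub is a Sublime Text port of it): the "signal" is simply a TCP port tunneled back to the desktop with SSH remote forwarding. A sketch, assuming the editor side listens on the protocol's usual port 52698:

        # on the desktop, when connecting:
        ssh -R 52698:localhost:52698 user@server
        # on the server, to open a remote file in the local editor:
        rmate /etc/nginx/nginx.conf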

  • Fully FOSS EMail solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst-case scenario. Here are the requirements:
      - Approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-))
      - Mailing lists for different groups, with a web-based archive - Mailman? Sympa?
      - Centralised identity store - OpenLDAP? Fedora 389DS?
      - Secure IMAP only, no POP3 required - Courier? Dovecot? Cyrus?
      - Anti-spam - SpamAssassin? What else?
      - Calendaring - ??
      - Webmail - good to have, not mandatory - needs to be very secure... so SquirrelMail is out ;-)?
    Other questions:
      - What mailbox storage format to use? Where to store it - database or file system?
      - Simple and effective HA options?
      - Is there a web proxy equivalent to Squid in the mail server world? Software load balancers? CARP?
      - Monitoring and alerting? Backup?
    The govt wants to stimulate the local economy by buying hardware locally from whitebox vendors. Also, local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would think of these sorts of initiatives. As for the OS: Debian or FreeBSD would be first preference; commercial OSes need not apply. CentOS is a second-tier option...
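    As one data point for the Postfix-plus-Dovecot combination that often comes out of such evaluations, a minimal sketch of the glue (all names and paths are illustrative assumptions, not a recommendation):

        # /etc/postfix/main.cf (excerpt): deliver via Dovecot LMTP, expand users from LDAP
        virtual_transport = lmtp:unix:private/dovecot-lmtp
        virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf

        # /etc/dovecot/conf.d/10-mail.conf (excerpt): mdbox mailbox storage on the file system
        mail_location = mdbox:~/mdbox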

  • What are the most important aspects to consider when choosing a SAN for a small office virtualization setup?

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) onto two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network and store all virtual machine images on the SAN. Each physical server would run 3 VMs, and in the case of a physical server failure we would manually switch over the other 3. These are all internal servers; while important, they can tolerate some amount of downtime (say <1h), to keep the cost and complexity associated with HA down. I now need to choose the SAN to be used for the setup, on a low budget. We currently have about 2TB of data, but of course I want to be able to grow, do backups of VM snapshots on other drives and remove them to a different location, etc. So what I would like to know is:
      1. Which are the must-have features for this setup, without which using a SAN is not worth it?
      2. We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, or bad experiences with this model? (This is one of the few models which could accommodate our existing disks from the physical servers.)
      3. If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database, and a -services.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?) Thanks
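    On that last question: in a JBoss 4.x cluster, clients normally resolve clustered objects through HA-JNDI (default port 1100), which aggregates the JNDI trees of all nodes, rather than the per-node JNDI on 1099. A hedged jndi.properties sketch (the hostnames are placeholders):

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
        java.naming.provider.url=nodeA:1100,nodeB:1100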

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy and apache. I've been reading great things about nginx and haproxy. People tend to choose one or the other, but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch), but I'd like to keep nginx for redirecting non-https to https, among other things, right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful apache, which loves to eat a lot of RAM! Here is my planned setup:
      - Load balancer: nginx listens on ports 80/443 and proxies to haproxy on 8080 on the same server, which load balances between the multiple nodes.
      - Nodes: nginx on the node listens for requests coming from haproxy on 8080; if the content is static, it serves it. But if it's a backend script (in my case PHP), it proxies to apache2 on the same node server, listening on a different port number.
    Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow down requests. Most of the requests will be PHP requests, as the backends are services (which means going from nginx - haproxy - nginx - apache). Thoughts? Cheers
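    For concreteness, a minimal sketch of the front tier (the ports and the haproxy address come from the question; certificate paths are placeholders):

        # nginx on the load balancer: redirect to https, hand everything to haproxy
        server {
            listen 80;
            return 301 https://$host$request_uri;
        }
        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/cert.pem;
            ssl_certificate_key /etc/nginx/cert.key;
            location / {
                proxy_pass http://127.0.0.1:8080;   # haproxy on the same box
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }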

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements:
      1. High availability
      2. Load balancing
    First configuration:
      1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12.
      2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as VIP, 10.17.243.11 as master and 10.17.243.12 as backup).
      3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running; as soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
      4. The problem here is that all communication goes to the master server.
    Second configuration:
      1. I found an active-active configuration for keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1; real 10.17.243.12 and virtual 10.17.243.20 for server #2).
      2. Everything works fine. We have two VIPs which are accessible (HA), but all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they are not acceptable in our case.
    Question: is there any way to have one virtual IP which is assigned to both servers? By that I mean both servers handling some part of the workload (like what we do in web server load balancing), using either keepalived or some other tools? Thanks in advance.
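    For reference, the active-active layout described in the second configuration looks roughly like this in keepalived.conf (a hedged sketch of server #1; server #2 mirrors it with the states and priorities swapped):

        vrrp_instance VI_1 {            # this box is master for .10
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            virtual_ipaddress { 10.17.243.10 }
        }
        vrrp_instance VI_2 {            # ...and backup for .20
            state BACKUP
            interface eth0
            virtual_router_id 52
            priority 100
            virtual_ipaddress { 10.17.243.20 }
        }

    Separately, keepalived can also drive LVS/IPVS through virtual_server blocks, which is the usual way to get a single VIP whose connections are spread across both real servers.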

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a Proxmox HA configuration with two Dell R710 machines (dual 6-core processors in each) with enterprise-level RAID arrays, using DRBD with a quorum disk on a third machine. I would dedicate two 1GbE NICs on each server to the DRBD communications. We would have approximately 12 to 14 virtual machines running on this pair of servers. The Proxmox manual recommends creating two DRBD resources: one for the virtual machines that normally run on ServerA and one for the virtual machines that normally run on ServerB. This is because of the Primary/Primary state in which this configuration runs: if both servers have VMs talking to the same DRBD resource and a split-brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way, and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.
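    A per-VM resource would look roughly like this in /etc/drbd.d/ (a hedged sketch; the minor numbers, LV names, hostnames and addresses are all placeholders):

        resource vm-101 {
            device    /dev/drbd101;
            disk      /dev/vg0/vm-101-disk-1;
            meta-disk internal;
            on serverA { address 10.0.0.1:7701; }
            on serverB { address 10.0.0.2:7701; }
        }

    Each resource needs its own TCP port and DRBD minor, so 12 to 14 VMs means 12 to 14 port/minor pairs to track; split brain is then resolved per resource, which is exactly the flexibility being asked about.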

  • Setup a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's call it) a low-cost Ra*san which would host the images for our social site (many millions); we have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To get HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read access only and rarely delete photos, I think to go with consumer MLC SSDs. I read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?
      - I'm not sure between RAID 6 or RAID 10 (more IOPS, cost of SSDs).
      - Is ext4 OK for the filesystem?
      - Would you use 1 or 2 RAID controllers, with an extender backplane?
    If anyone has realized something similar, I would be happy to get real-world numbers.
    UPDATE: I have bought 12 (plus some spares) OCZ Talos 480GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 (1GB NV cache, manufactured by LSI, with FastPath) controller. I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
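    Since the target is stated as 4K random-read IOPS, a hedged fio invocation to measure exactly that (the file path and sizes are placeholders):

        fio --name=randread --filename=/data/fio.test --size=16G \
            --rw=randread --bs=4k --direct=1 --ioengine=libaio \
            --iodepth=32 --numjobs=4 --group_reporting --runtime=60

    The iodepth/numjobs values matter a lot on SSD arrays, so sweeping them is usually more informative than a single run.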

  • Heartbeat/DRBD failover didn't work as expected. How do I make the failover more robust?

    - by Quinn Murphy
    I had a scenario where a DRBD-Heartbeat setup had a failed node but did not fail over. What happened was that the primary node had locked up, but didn't go down directly (it was inaccessible via ssh or via the nfs mount, but it could be pinged). The desired behavior would have been to detect this and fail over to the secondary node, but it appears that since the primary didn't go fully down (there is a dedicated network connection from server to server), heartbeat's detection mechanism didn't pick up on it and therefore didn't fail over. Has anyone seen this? Is there something that I need to configure to have more robust cluster failover? DRBD seems to otherwise work fine (I had to resync when I rebooted the old primary), but without good failover its use is limited.
      - heartbeat 3.0.4
      - drbd84
      - RHEL 6.1
      - We are not using Pacemaker
    nfs03 is the primary server in this setup, and nfs01 is the secondary.

    ha.cf:

        # Heartbeat logging
        logfacility daemon
        udpport 694
        ucast eth0 192.168.10.47
        ucast eth0 192.168.10.42
        # Cluster members
        node nfs01.openair.com
        node nfs03.openair.com
        # Heartbeat communication timing.
        # Sets the triggers and pulse time for swapping over.
        keepalive 1
        warntime 10
        deadtime 30
        initdead 120
        # fail back automatically
        auto_failback on

    and here is the haresources file:

        nfs03.openair.com IPaddr::192.168.10.50/255.255.255.0/eth0 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 nfs nfslock
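    One knob aimed at exactly this hung-but-still-pinging case (a hedged sketch): heartbeat's ha.cf supports a watchdog device, so a node whose heartbeat daemon stops being scheduled gets hard-reset by the kernel instead of lingering half-alive:

        # /etc/ha.d/ha.cf (addition)
        watchdog /dev/watchdog

        # load a watchdog driver at boot; softdog is the software fallback
        modprobe softdog

    A watchdog only covers the hung-node case; detecting a dead service (the host up but NFS not answering) needs a resource manager such as Pacemaker doing application-level monitoring.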

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build direct attached storage by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server, or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc., then find any boxes, server or not (the key being having enough room for 8-12 drives, fans, etc.) and turn them into a DAS: build two of these DASes and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure, since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs one).
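    Strictly speaking, a true DAS box needs a target-mode HBA (SAS or Fibre Channel target), which commodity software rarely supports; the common software fallback is to export the disks over the network as iSCSI instead. A hedged sketch with tgt (scsi-target-utils) on Debian; the IQN, device and address are placeholders:

        # /etc/tgt/targets.conf on the storage box
        <target iqn.2012-01.com.example:das1.disk1>
            backing-store /dev/md0            # the RAID array in the storage box
            initiator-address 192.168.1.10    # only the server may connect
        </target>

    The server then attaches the LUN with open-iscsi and treats it like a local disk, which also plays well with the later DRBD plan.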

  • Automated Syslog Error Solution Finder

    - by Dru
    Any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two; that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party, which I am several years from having.
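    Nothing mainstream suggests solutions, but the severity-plus-email half can be done in rsyslog itself; a hedged sketch using the legacy ommail syntax (the addresses are placeholders):

        $ModLoad ommail
        $ActionMailSMTPServer localhost
        $ActionMailFrom rsyslog@example.com
        $ActionMailTo admins@example.com
        $template mailBody,"%hostname% %syslogseverity-text%: %msg%"
        # mail anything at severity err (3) or worse
        if $syslogseverity <= 3 then :ommail:;mailBody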

  • DRBD as a block device for XEN VM (Centos 5.3)

    - by SaberTooth
    Hi all, I have set up a DRBD resource between 2 server nodes, and everything works correctly when doing sync tests between the two. (I want to create an HA cluster using DRBD, Xen and heartbeat.) However, when I try to create a Xen VM with CentOS as the guest operating system, I get through to the partitioning screen on the install, but when I select a partitioning type the next screen gives me the following error: "An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem." This is the first time attempting to create a setup like this, and searching Google does not help much... My config files for DRBD and Xen:

    DRBD (just the section that is pertinent):

        on xennode0 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            flexible-meta-disk internal;
        }
        on xennode1 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            meta-disk internal;
        }

    Xen:

        kernel = "/boot/xeninstall/vmlinuz"
        ramdisk = "/boot/xeninstall/initrd.img"
        extra = "text"
        name = "VM"
        maxmem = 3000
        memory = 3000
        vcpus = 4
        on_poweroff = "destroy"
        on_reboot = "restart"
        on_crash = "restart"
        vfb = [ ]
        disk = [ "phy:/dev/drbd0,sda1,w", "tap:aio:/srv/xen/xenswap.img,sda2,w" ]
        vif = [ "mac=00:16:3e:11:67:ae,bridge=xenbr0" ]
        root = "/dev/sda1 ro"

    Thanks in advance!
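    One thing that stands out (a guess, not a verified fix): the DRBD device is mapped into the guest as the partition sda1, so the installer sees a partition but no whole disk to partition. The usual pattern is to expose it as a disk and let the guest partition it, along these lines:

        # map the DRBD device as a whole disk, not a partition
        disk = [ "phy:/dev/drbd0,sda,w", "tap:aio:/srv/xen/xenswap.img,sdb,w" ]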

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go in Server Fault and what should go in Super User; if some kindly admin decides this is in the wrong place, please move it for me - many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure. I do, however, have the servers switch roles periodically: I have a track_script running on the backup that varies its return between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support, and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google-fu failed me).
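    One way to tell the two cases apart without SNMP (a hedged sketch; paths and addresses are placeholders): have the rotation script drop a flag file just before it flips the priority, and let keepalived's notify hooks check for it. If the flag is absent, the transition was not scheduled and deserves an alert:

        # keepalived.conf (excerpt)
        vrrp_instance VI_1 {
            ...
            notify_master "/usr/local/bin/on_transition.sh master"
        }

        # /usr/local/bin/on_transition.sh (sketch)
        #!/bin/sh
        if [ -f /var/run/planned_switch ]; then
            rm -f /var/run/planned_switch      # scheduled rotation: stay quiet
        else
            echo "unplanned VRRP transition to $1" | mail -s "keepalived alert" ops@example.com
        fi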

  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL ImageMagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd try to upgrade to test out the newer filters. So, first port of call: get the source from PECL for 3.2.0RC1. It installed with no problems. But, ah-ha: although it says it only requires IM 6.2.4, the filters I'm after don't work unless you have 6.6.6+. So I went to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that's puzzled me. It all appears to have installed fine, and the tests pass 40 out of 40.
      - If I run identify -version on the command line, it outputs 6.8.9.
      - But if I echo Imagick::getVersion() in PHP, it shows 6.5.4, even after restarting php-fpm.
      - rpm -qa | grep ImageMagick shows that I still have 6.5.4 installed.
      - locate ImageMagick also only seems to show the 6.5.4 one.
    I feel that the missing link here is ImageMagick-devel; do I need to install that too? How do I go about doing that? Or do I just need to reinstall pecl-imagemagick 3.2.0RC1 now that I have the latest IM installed?
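    A hedged sequence for the usual fix (the PECL extension keeps reporting whatever library it was compiled against, so it has to be rebuilt against the source-installed IM; the prefix below assumes the default /usr/local, and removing the RPM may pull dependencies with it, so check first):

        yum remove ImageMagick ImageMagick-devel   # drop the old 6.5.4 headers/libs
        pecl uninstall imagick
        pecl install imagick-3.2.0RC1              # when asked for the prefix, give /usr/local
        service php-fpm restart
        php -r 'print_r(Imagick::getVersion());'   # should now report 6.8.9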

  • Did you miss the OFM Summer Camps III? Get access to the b2b & adapters and SOA Governance training material

    - by JuergenKress
    We posted the SOA Governance and b2b & adapters training material at our SOA Community Workspace (SOA Community membership required). We have no plans to post the ACM and Advanced SOA training material. Special thanks to all the trainers who delivered superb workshops, and thanks to all the partners who invested time and utilization plus travel expenses to attend the camp. Special thanks to all the international partners who traveled a long way to sunny Lisbon - including our Mexican friends! The Summer Camp feedback was excellent: everybody said they would attend a future OFM Summer Camp, and the overall rating is 4.79 out of 5 (best). For most of the trainings we had a waiting list with additional partners who wanted to attend. Make sure you use your middleware skills to deliver successful projects; it would be great if you could support your colleagues and the community by sharing the lessons learned and best practices. Thanks for the great feedback on Twitter - please continue to send your pictures to our Twitter feed @soacommunity #OFMsummercamps or post them at our Facebook page. Here is a selection of some tweets:
      - Walter Montantes: México presence en #OFMSummerCamp Lisboa 2013 cc @soacommunity @AdquemTI pic.twitter.com/9NEFwsWCAq
      - SOA Community: thanks for attending the #OFMSummercamp - save trip home ;-) Want to attend a future training register http://www.oracle.com/goto/emea/soa #soacommunity
      - C2B2 Consulting: Last day at the #OFMSummercamp Oracle SOA Suite Training in Portugal @soacommunity pic.twitter.com/6LZavVlvHc
      - Patrick Sinke: a FollowFriday for @Oracle_B2B because 19 followers is not enough #FF #OFMSummercamp
      - Patrick Sinke: Yogesh Sontakke is talking about #SOA #Governance. #OFMSummercamp
      - Nuno Cancelo: Oracle SOA Governance - Quick Overview #OFMSummerCamp
      - Nuno Cancelo: Last coffee break. #OFMSummercamp pic.twitter.com/xZi9M5vAWz
      - Scott Haaland: Last day of #OFMSummercamp. It's been a great productive week.. great students eager to learn. @Oracle_B2B @soacommunity
      - Patrick Sinke: singletons are used to retain specific fetching order of files and records in multithread/multi-instance environment. #OFMSummercamp #SOA
      - Patrick Sinke: SOA's File Adapter is extremely versatile: it writes, reads and converts almost any type of file. #OFMSummercamp pic.twitter.com/XjtJF9Y5SH
      - Patrick Sinke: Now deep-diving into Java EE Connector Architecture (JCA). Got to do some catching up at home on this subject. #OFMSummercamp #SOA
      - Patrick Sinke: Today we start with security and OPSS at #OFMSummercamp Advanced #SOA training. Then some #OSB. #OFM #Oracle #whitehorses
      - Remco Cats: Starting the last day on #OFMSummercamp building ADF Mobile applications with BPM
      - Nuno Cancelo: While attending #OFMSummerCamp I notice even more the importance of designing software. Any tips on how to become a software architect?
      - Patrick Sinke: Extensive information on fault handling and policies now in Advanced #SOA track. #OFMSummercamp #oraclesoa #middleware #whitehorses
      - C2B2 Consulting: Geoffroy de Lamalle speaking at the #OFMSummercamp @soacommunity pic.twitter.com/m4oOyzYB2q
      - Patrick Sinke: Oracle Document editor is a huge tool (6GB), but contains every version and subset of EDI, HL7, etc definitions. Impressive. #OFMSummercamp
      - Patrick Sinke: Oracle #B2B 11g presentation on #OFMSummercamp by Scott. Unfortunately only 2 hours in SOA advanced class. Very interesting.
      - SOA Community: Bon dia #OFMSummercamp - if you are here in sunny Lisbon ;-) you can check in at http://foursquare.com/ #soacommunity pic.twitter.com/PnmudJgJTZ
      - Nuno Cancelo: Beautiful day! #OFMSummercamp pic.twitter.com/nwByRM5YE1
      - Nuno Cancelo: Relaxing after lunch :-) #OFMSummercamp pic.twitter.com/hOJzebCM5p
      - SOA Community: Posted pictures from OFM Summer Camp III at our facebook page - share yours! https://www.facebook.com/soacommunity #OFMSummerCamp #soacommunity
      - Nuno Cancelo: Coffee break: day 3 #OFMSummercamp pic.twitter.com/97n1sAGhx4
      - Patrick Sinke: #OFMSummercamp day 3; SOA Infrastructure. pic.twitter.com/ziivyw3L6q
      - SOA Community: Bon dia day 3 at #OFMSummercamp in Lisboa. Nial presenting ACM roadmap pic.twitter.com/iN3gTCHSbA
      - SOA Community: Hands-on time at the b2b & adapters training part of the #OFMSummercamp #soacommunity pic.twitter.com/9BzI7igrX8
      - SOA Community: Laptop replacement at #OFMSummercamp - big thanks to Oracle Portugal for the fast help! 10 seconds to cut the cable pic.twitter.com/nwd2Px73pa
      - SOA Community: Hard work long training until 18.00 now enjoy the beach #ofmsummercamp #soacommunity pic.twitter.com/StogfxJNFH
      - Walter Montantes: Primer día #OFMSummercamp pic.twitter.com/cTNDpzg5pL
      - Miguel Delgadillo: @walex86 "Advanced SOA training by Geoffroy at #OFMSummercamp - full room hard working class pic.twitter.com/2SDz9FVhkh" si le sabes?
      - SOA Community: Welcome to the #OFMSummercamp in sunny Lisbon ;-) Send us your pictures of the training and city @soacommunity pic.twitter.com/i2ErZaaFbb
      - SOA Community: Advanced SOA training by Geoffroy at #OFMSummercamp - full room hard working class pic.twitter.com/uKjv0tV2bO
      - Nuno Cancelo: #OFMSummercamp afternoon break :) pic.twitter.com/pUaBvt2NIj
    Impressions of the event are posted at our Facebook page. If you missed Lisbon, make sure you attend one of our additional middleware trainings in Europe: we currently run 3 different training roadshows for Business Process Management, ADF, and WebLogic across Europe, so make sure you sign up for them: ADF & ADF Mobile, Business Process Management Suite, or WebLogic Suite. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: b2b,training,education,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series.

    Right-Time Integration. Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can't react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word "appropriate." Not everything needs to be real-time - again, we're talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you're wasting effort. Unfortunately, there isn't an enterprise-wide dial that you can crank up for your estate. You'll need to improve things piecemeal, with people and processes as limiting factors, while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry.

    First is batch. I know, the word "batch" just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you'll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics, where we can apply science. Performing analytics on each sale as it occurs doesn't make any sense, so we batch up a statistically significant amount and submit all at once.

    The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use the message-oriented middleware available in both WebLogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help the management and monitoring of fire-and-forget messaging in the enterprise.

    The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.

  • trying to use mod_proxy with httpd and tomcat

    - by techsjs2012
    I've been trying to use mod_proxy with httpd and Tomcat. I have Scientific Linux running on VirtualBox, with httpd and Tomcat 6 on it, and I made two nodes of tomcat6. I have followed this guide about 10 times and still can't get the 2nd node of Tomcat working: http://www.richardnichols.net/2010/08/5-minute-guide-clustering-apache-tomcat/ Here are the lines from my httpd.conf file:

        <Proxy balancer://testcluster stickysession=JSESSIONID>
            BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1
            BalancerMember ajp://127.0.0.1:8109 min=10 max=100 route=node2 loadfactor=1
        </Proxy>
        ProxyPass /examples balancer://testcluster/examples

        <Location /balancer-manager>
            SetHandler balancer-manager
            AuthType Basic
            AuthName "Balancer Manager"
            AuthUserFile "/etc/httpd/conf/.htpasswd"
            Require valid-user
        </Location>

    Now here is my server.xml from node1:

        <?xml version='1.0' encoding='utf-8'?>
        <!--
          Licensed to the Apache Software Foundation (ASF) under one or more
          contributor license agreements. See the NOTICE file distributed with
          this work for additional information regarding copyright ownership.
          The ASF licenses this file to You under the Apache License, Version 2.0
          (the "License"); you may not use this file except in compliance with
          the License. You may obtain a copy of the License at
          http://www.apache.org/licenses/LICENSE-2.0
          Unless required by applicable law or agreed to in writing, software
          distributed under the License is distributed on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
          See the License for the specific language governing permissions and
          limitations under the License.
        -->
        <!-- Note: A "Server" is not itself a "Container", so you may not define
             subcomponents such as "Valves" at this level.
             Documentation at /docs/config/server.html -->
        <Server port="8005" shutdown="SHUTDOWN">
          <!-- APR library loader. Documentation at /docs/apr.html -->
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <!-- Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
          <Listener className="org.apache.catalina.core.JasperListener" />
          <!-- Prevent memory leaks due to use of particular java/javax APIs -->
          <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
          <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

          <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
          <GlobalNamingResources>
            <!-- Editable user database that can also be used by UserDatabaseRealm
                 to authenticate users -->
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
          </GlobalNamingResources>

          <!-- A "Service" is a collection of one or more "Connectors" that share
               a single "Container". Note: A "Service" is not itself a "Container",
               so you may not define subcomponents such as "Valves" at this level.
               Documentation at /docs/config/service.html -->
          <Service name="Catalina">
            <!-- The connectors can use a shared executor, you can define one or more
                 named thread pools -->
            <!--
            <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                      maxThreads="150" minSpareThreads="4"/>
            -->
            <!-- A "Connector" represents an endpoint by which requests are received
                 and responses are returned. Documentation at:
                 Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                 Java AJP Connector: /docs/config/ajp.html
                 APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080
            <Connector port="8080" protocol="HTTP/1.1"
                       connectionTimeout="20000"
                       redirectPort="8443" />
            -->
            <!-- A "Connector" using the shared thread pool -->
            <!--
            <Connector executor="tomcatThreadPool"
                       port="8080" protocol="HTTP/1.1"
                       connectionTimeout="20000"
                       redirectPort="8443" />
            -->
            <!-- Define a SSL HTTP/1.1 Connector on port 8443
                 This connector uses the JSSE configuration; when using APR, the
                 connector should use the OpenSSL style configuration described
                 in the APR documentation -->
            <!--
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       clientAuth="false" sslProtocol="TLS" />
            -->
            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

            <!-- An Engine represents the entry point (within Catalina) that
                 processes every request. The Engine implementation for Tomcat
                 stand alone analyzes the HTTP headers included with the request,
                 and passes them on to the appropriate Host (virtual host).
                 Documentation at /docs/config/engine.html -->
            <!-- You should set jvmRoute to support load-balancing via AJP, ie:
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
            -->
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
              <!-- For clustering, please take a look at documentation at:
                   /docs/cluster-howto.html (simple how to)
                   /docs/config/cluster.html (reference documentation) -->
              <!--
              <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
              -->
              <!-- The request dumper valve dumps useful debugging information
                   about the request and response data received and sent by Tomcat.
                   Documentation at: /docs/config/valve.html -->
              <!--
              <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
              -->
              <!-- This Realm uses the UserDatabase configured in the global JNDI
                   resources under the key "UserDatabase". Any edits that are
                   performed against this UserDatabase are immediately available
                   for use by the Realm. -->
              <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                     resourceName="UserDatabase"/>
              <!-- Define the default virtual host
                   Note: XML Schema validation will not work with Xerces 2.2. -->
              <Host name="localhost" appBase="webapps"
                    unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
                <!-- SingleSignOn valve, share authentication between web applications
                     Documentation at: /docs/config/valve.html -->
                <!--
                <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
                -->
                <!-- Access log processes all example.
                     Documentation at: /docs/config/valve.html -->
                <!--
                <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                       prefix="localhost_access_log." suffix=".txt"
                       pattern="common" resolveHosts="false"/>
                -->
              </Host>
            </Engine>
          </Service>
        </Server>

    The server.xml from node2 is identical to node1's stock file except for three lines: the shutdown port, the AJP connector port, and the jvmRoute:

        <Server port="8105" shutdown="SHUTDOWN">
        ...
        <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />
        ...
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">

    I don't know what it is, but I've been trying for days.
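    Two quick things worth verifying before digging deeper (hedged; the module paths are the stock httpd 2.2 layout on RHEL-family systems): the balancer misbehaves unless all three proxy modules are loaded, and node2's Tomcat must actually be listening on its AJP port:

        # in httpd.conf (or a conf.d snippet)
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

        # on the shell: is node2's AJP connector really up?
        netstat -tlnp | grep 8109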

  • Is Berkeley DB XML a viable database backend?

    - by w00t
    Apparently, BDB-XML has been around since at least 2003 but I only recently stumbled upon it on Oracle's website: Berkeley DB XML. Here's the blurb: Oracle Berkeley DB XML is an open source, embeddable XML database with XQuery-based access to documents stored in containers and indexed based on their content. Oracle Berkeley DB XML is built on top of Oracle Berkeley DB and inherits its rich features and attributes. Like Oracle Berkeley DB, it runs in process with the application with no need for human administration. Oracle Berkeley DB XML adds a document parser, XML indexer and XQuery engine on top of Oracle Berkeley DB to enable the fastest, most efficient retrieval of data. To me it seems that the underlying ideas are technically sound and probably more mature than the newer document-based DBs like CouchDB or MongoDB. It has support for C, C++, Ruby and Perl, as far as I can determine. It even has HA-capabilities like automatic replication using a master/slave model with automatic election. However, I can't seem to find any projects that use it. Is there something fundamentally wrong with it? Is the license too onerous? Is it too complicated? Why is it not being used?
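    For anyone wanting to kick the tires before committing, the package ships a command-line shell; a hedged sketch from memory (the container and document names are placeholders):

        dbxml> createContainer "books.dbxml"
        dbxml> putDocument book1 '<book><title>XQuery Basics</title></book>' s
        dbxml> query 'collection("books.dbxml")/book[title = "XQuery Basics"]'
        dbxml> print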

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not: each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!). So what are the primitive operators? Which ones must be implemented directly in the runtime environment, from which all others can be built? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I can start with a Turing machine and go from there. That's a bit extreme.) Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
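    To make that concrete, a hedged Forth sketch in the style of eForth (assuming SP@ pushes the data-stack pointer, which points at the cell holding the current top item, the stack grows downward, and SP! sets the pointer): the classic stack words reduce to fetch/store on stack memory:

        : DUP   SP@ @ ;          \ fetch the cell the stack pointer points at
        : OVER  SP@ CELL+ @ ;    \ fetch the cell one below the top
        : DROP  SP@ CELL+ SP! ;  \ move the stack pointer up one cell

    eForth famously gets by with roughly 30 primitives (@, !, SP@, SP!, RP@, RP!, >R, R>, 0<, AND, OR, XOR, UM+, plus branch, literal and basic I/O words), with everything else defined in Forth itself.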

  • (C#) Get index of current foreach iteration

    - by Graphain
    Hi, is there some rare language construct I haven't encountered (like the few I've learned recently, some on Stack Overflow) in C# to get a value representing the current iteration of a foreach loop? For instance, I currently do something like this depending on the circumstances:

        int i = 0;
        foreach (Object o in collection)
        {
            ...
            i++;
        }

    Follow-up replies:
      - @bryansh: I am setting the class of an element in a view page based on the position in the list. I guess I could add a method that gets the CSS class for the objects I am iterating through, but that almost feels like a violation of the interface of that class.
      - @Brad Wilson: I really like that - I've often thought about something like that when using the ternary operator but never really given it enough thought. As a bit of food for thought, it would be nice if you could somehow add (generically to all IEnumerable objects) a handle on the enumerator so that an extension method could return an iteration index. Of course this would be blatant hacks and witchcraft... Cool though...
      - @crucible: Awesome, I totally forgot to check the LINQ methods. Hmm, appears to be a terrible library implementation though. I don't see why people are downvoting you. You'd expect the method to use some sort of hash table of indices, not an O(N) iteration... (@Jonathan Holland: yes, you are right, expecting SQL was wrong.)
      - @Joseph Daigle: The difficulty is that I assume the foreach casting/retrieval is optimised more than my own code would be.
      - @Jonathan Holland: Ah, cheers for explaining how it works, and ha at firing someone for using it.
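    For the record, the LINQ overload the replies allude to is Enumerable.Select((item, index) => ...), which pairs each element with its position; a small self-contained sketch:

        // C#: iterate with an index without a manual counter
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                var collection = new List<string> { "a", "b", "c" };
                // Select's two-argument lambda receives (element, index)
                foreach (var x in collection.Select((item, index) => new { item, index }))
                {
                    Console.WriteLine("{0}: {1}", x.index, x.item);
                }
            }
        }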
