Search Results

Search found 4431 results on 178 pages for 'peoplesoft tree powerconnect'.


  • please demystify 10Gb Ethernet interfaces and cables

    - by maruti
    This really is a Dell question, but I'm tempted to ask the experts at Server Fault. I've chosen a Dell PowerConnect 8024 10GbE switch. Per the spec sheet it has 10GBASE-T ports: "24x 10GBASE-T (10Gb/1Gb/100Mb) with 4x Combo Ports of SFP+ (10Gb/1Gb) or 10GBASE-T". The HBA on my storage server has 10G CX4 copper ports. Dell does not sell any cables, which adds to my confusion. From the picture, the Dell 8024 seems to have RJ-45-type ports on the front panel. My question: is it an RJ-45 + CX4 cable, or a CX4 + CX4 cable?

    Read the article

  • Rough estimate for the speed advantage of SAN-via-Fibre over SAN-via-iSCSI when using VMware vSphere

    - by Dirk Paessler
    We are in the process of setting up two virtualization servers (Dell R710, dual quad-core Xeon CPUs at 2.3 GHz, 48 GB RAM) for VMware vSphere, with storage on a SAN (Dell PowerVault MD3000i, 10x 500 GB SAS drives, RAID 5) attached via iSCSI on a Gigabit Ethernet switch (Dell PowerConnect 5424, which they call "iSCSI-optimized"). Can anyone give an estimate of how much faster a Fibre Channel based solution would be (or better, "feel")? I don't mean the nominal speed advantage; I mean how much faster will virtual machines effectively work? Are we talking twice the speed, five times, 10 times faster? Does it justify the price? PS: We are not talking about heavily used database servers or Exchange servers. Most of the virtualized servers run below 3-5% average CPU load.

    Read the article

  • Basic questions about network topologies

    - by laoshanlung
    I have just started learning about network topologies, but there is a lot of confusion about the different types I have learnt so far. First of all, bus topology: if I have 100 PCs on the same wire connected in a bus topology, and the network speed is 100 Mbps, then each PC gets a 1 Mbps share, right? In the same scenario, if I connect those 100 PCs in a star topology, does each PC get a 100 Mbps connection? Then with a tree topology, if I divide the system into 10 sub-systems (10 tree branches) of 10 PCs each, I get 10 small "bus-topology" networks, each with a 10 Mbps share, and therefore each PC also gets 10 Mbps? And the last one, ring topology: with 100 PCs, does each PC get a 100 Mbps connection?
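    As a rough worked example of the sharing involved, assuming the usual textbook model: a bus is a single shared medium, so 100 PCs on a 100 Mbps bus average about 100/100 = 1 Mbps each only when all of them transmit at once (less in practice, because of collisions); an idle network still gives any single PC the full 100 Mbps. A star only gives each PC a dedicated 100 Mbps if the central device is a switch; with a hub at the center it behaves like the shared bus again.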

    Read the article

  • LAN speeds and firewall/switch connections

    - by microchasm
    I have a small network with about ten users. All workstations flow into a Dell PowerConnect 3424, which has a single link to a SonicWALL firewall and from there to a cable modem. More important than internet connectivity is the speed between machines (specifically to a Windows Server box on the LAN which everyone uses simultaneously). I believe the 3424 has gigabit ports, but they look like they're for stacking. Is there a way to measure speeds on the LAN to see where things stand? Is there any low-hanging fruit for increasing speeds?
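    For hard numbers, a minimal sketch assuming iperf is installed on both ends (the hostname server01 is a placeholder): run a listener on the Windows Server box and drive traffic at it from a workstation, then repeat between workstation pairs to localize any slow link:

        # on the server
        iperf -s

        # on a workstation: 30-second test, 4 parallel TCP streams
        iperf -c server01 -t 30 -P 4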

    Read the article

  • add the same QTreeWidgetItems into the second column

    - by srinu
    Hello, I am using the following program to display a QTreeWidget.

    main.cpp:

        // (the original #include lines were stripped of their angle-bracket
        // contents by the forum; <QApplication> is the one clearly needed here)
        #include <QApplication>
        #include "qdomsimple.h"

        int main(int argc, char *argv[])
        {
            QApplication a(argc, argv);
            QStringList filelist;
            // note: backslashes in C++ string literals must be escaped
            filelist.push_back("C:\\department1.xml");
            filelist.push_back("C:\\department2.xml");
            filelist.push_back("C:\\department3.xml");
            QDOMSimple w(filelist);
            w.resize(260, 200);
            w.show();
            return a.exec();
        }

    qdomsimple.cpp:

        #include "qdomsimple.h"
        // (two more stripped includes; the Qt widget and XML module headers
        // are the likely candidates)
        #include <QtGui>
        #include <QtXml>

        QDOMSimple::QDOMSimple(QStringList strlst, QWidget *parent)
            : QWidget(parent)
        {
            k = 0;

            // DOM document
            QDomDocument doc("title");

            QStringList headerlabels;
            headerlabels.push_back("Chemistry");
            headerlabels.push_back("Mechanical");
            headerlabels.push_back("IT");

            // The tree view to be filled with XML data
            // (m_tree is a class member variable)
            m_tree = new QTreeWidget(this);
            m_tree->setColumnCount(3);
            m_tree->setHeaderLabels(headerlabels);

            QStringList::iterator it;
            for (it = strlst.begin(); it != strlst.end(); ++it) {
                QFile file(*it);
                if (file.open(QIODevice::ReadOnly | QIODevice::Text)) {
                    // Creating the DOM tree
                    doc.setContent(&file);
                    file.close();

                    // Root of the document
                    QDomElement root = doc.documentElement();

                    // Taking the first child node of the root
                    QDomNode child = root.firstChild();

                    // Setting the root as the header of the tree
                    //QTreeWidgetItem* header = new QTreeWidgetItem;
                    //header->setText(k, root.nodeName());
                    //m_tree->setHeaderItem(header);

                    // Parse until the end of the document
                    while (!child.isNull()) {
                        // Convert the DOM node to a DOM element
                        QDomElement element = child.toElement();
                        // Parse only if the node really is an element
                        if (!element.isNull()) {
                            // Parse the element recursively
                            parseElement(element, 0);
                            // Go to the next sibling
                            child = child.nextSiblingElement();
                        }
                    }
                    //m_tree->setGeometry(QApplication::desktop()->availableGeometry());
                    //setGeometry(QApplication::desktop()->availableGeometry());
                }
                k++;   // k counts files, but is never used as a column below
            }
        }

        void QDOMSimple::parseElement(QDomElement& aElement, QTreeWidgetItem *aParentItem)
        {
            // Collect the values of all attributes of this element
            QDomNamedNodeMap attrMap = aElement.attributes();
            QStringList attrList;
            for (int i = 0; i < attrMap.count(); i++) {
                //QString attr = attrMap.item(i).nodeName();
                //attr.append("-");
                //attr.append(attrMap.item(i).nodeValue());
                attrList.append(attrMap.item(i).nodeValue());
            }

            // Create a new view item: under the parent for child nodes,
            // as a top-level item otherwise
            QTreeWidgetItem* item;
            if (aParentItem) {
                item = new QTreeWidgetItem(aParentItem);
            } else {
                item = new QTreeWidgetItem(m_tree);
            }

            // Set the tag name as the item text
            QString tagNText;
            tagNText.append(aElement.tagName());
            //tagNText.append("------");
            //tagNText.append(aElement.text());
            item->setText(0, tagNText);   // always column 0

            // Append the attribute values under the element's node
            for (int i = 0; i < attrList.count(); i++) {
                QTreeWidgetItem* attrItem = new QTreeWidgetItem(item);
                attrItem->setText(0, attrList[i]);
            }

            // Repeat the process recursively for child elements
            QDomElement child = aElement.firstChildElement();
            while (!child.isNull()) {
                parseElement(child, item);
                child = child.nextSiblingElement();
            }
        }

        QDOMSimple::~QDOMSimple()
        {
        }

    With this I get a QTreeWidget where all three files appear in the first column, but what I actually want is the same items placed under each file's own column header (as the title says). I don't know how to do it. Thanks in advance.
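    A plausible direction for a fix, inferred from the code above rather than tested: parseElement() hard-codes column 0 in every setText() call, so all three files land in the first column. Passing the current file index k into parseElement() and using it as the column argument (item->setText(k, tagNText), and likewise for the attribute items) should put each file's tree under its own header, since the widget is already set up with three columns.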

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04 instead of a single master server, so I cloned the master server and recreated it as two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html which talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and slapcat -b cn=config dumps a load of config information. But when I try to search using the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
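    On Debian/Ubuntu builds of slapd, the cn=config database is normally not readable by the directory's admin DN at all; it is usually only accessible as root over the local ldapi socket using SASL EXTERNAL. A minimal sketch, assuming slapd is listening on ldapi:/// (the Ubuntu default):

        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config dn

    If that returns the olcDatabase entries, the replication changes from the OpenLDAP guide can be applied with ldapmodify over the same connection.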

    Read the article

  • FTP Sites vs Sites in IIS 7.0

    - by NealWalters
    We have one FTP site set up (and working) basically following the instructions here: http://www.iis.net/learn/publish/using-the-ftp-service/creating-a-new-ftp-site-in-iis-7 It shows up under "Sites" and then the name of our FTP site. However, above "Sites" (in the left navigation tree view), we see a node called "FTP Sites". When we click on it, it says "FTP Management is provided by IIS 6.0". Can someone give me the big picture of why this node appears, and why IIS 6 is involved? Is it some backward-compatibility feature? I didn't build these machines, so I don't know the reasoning behind what was done before I arrived on the scene. Also, is the tree view icon for websites and FTP sites the same?

    Read the article

  • How long do managed gigabit ethernet switches take to boot up?

    - by Warren P
    One critical drawback that I have found in researching managed switches, and one that I have some past experience with, is that anything with "lots" of firmware is going to have lots of issues associated with that firmware. We are in the middle of researching rackmount gigabit switches (48-port). It looks like for 48 ports, our only choice is managed switches (Dell, Cisco/Linksys, HP, etc.). What I want to know, and cannot find out much about, is the boot time for various managed switches. If you own one, can you please answer with the model number and the cold-boot time in seconds? I have read online that the Linksys (now Cisco) SRW series sometimes takes almost 5 minutes before it is fully booted, and that is an unacceptable cost for us. I particularly want to know about Dell PowerConnect managed switch boot-up times (models 3548 and 5448), would like to confirm the 5-minute boot time on the SRW2048 or a similar model, and would welcome any HP ProCurve boot-up times. The composite of all those figures ought to form an interesting overall performance picture.

    Read the article

  • How do I create an ISO image from a directory structure on CentOS?

    - by tom smith
    I'm trying to figure out the exact mkisofs command to create an ISO with the following directory and file structure. I've tried different commands, but when I mount the resulting ISO, the directory tree has not been reproduced. The initial directory tree is:

        mount -o loop /apps/vmware/master.iso /mnt/vmtest
        ls /mnt/vmtest
        isolinux  ks.cfg  upgra32  upgra64  upgrade.sh
        ls /mnt/vmtest/isolinux
        boot.cat  initrd.img  isolinux.bin  isolinux.cfg  vmlinuz

    I've used different variations of the following mkisofs command without success:

        mkisofs -o '/foo/test.iso' -b 'isolinux.bin' -c 'boot.cat' -no-emul-boot -boot-load-size 4 -boot-info-table 'isolinux'

    How do I make an ISO that captures a directory's exact structure?
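    Two details commonly cause the flattened-tree symptom, so a hedged variant worth trying: pass the root of the tree (not the isolinux subdirectory) as the final argument, make the -b/-c paths relative to that root, and add -R -J so long file names survive. The source path below is a placeholder:

        mkisofs -o /foo/test.iso -R -J \
          -b isolinux/isolinux.bin -c isolinux/boot.cat \
          -no-emul-boot -boot-load-size 4 -boot-info-table \
          /path/to/master_root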

    Read the article

  • Failure to copy files with ownership/ACL information on a Windows Server 2008 R2 machine

    - by darklion
    I'm attempting to copy a directory tree, maintaining its ownership information, using the command: XCOPY S:\ProjectsDefault\Tempalte\admin S:\Projects\00\111\admin /S /E /I /O The command gives an "Access denied" error message, and while it does create the directory tree, the ownership and ACL information is not copied. This is being done on a Windows Server 2008 R2 machine which has mounted a share from a Windows 2003 R2 domain controller. The user has been granted full access to the share and is a member of the Domain Admins security group. Oddly enough, the command does work if performed on a different (Windows 2003 R2) server. (It also works if done using the Domain Administrator account on the 2008 server.)
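    If xcopy keeps refusing, robocopy (built into Server 2008 R2) is a plausible alternative for this job: /COPYALL copies data, attributes, timestamps, NTFS ACLs, owner, and auditing info, and /B requests backup-mode semantics when plain access is denied (paths taken from the question):

        ROBOCOPY S:\ProjectsDefault\Tempalte\admin S:\Projects\00\111\admin /E /COPYALL /B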

    Read the article

  • Can I change the system's "Browse for Folder" dialog globally?

    - by Chris Phillips
    As far as I know, everyone hates the "Browse for Folder" dialog: it is always too small, rarely remembers locations well, and worst of all, forces you to navigate your entire computer using a tedious tree structure. Now, to be fair, some of the problems likely have to do with how apps invoke the control -- not setting a size or a default directory, etc. But the problem of the tedious tree control remains. Is there any way to customize your Windows installation to use a different control? Preferably an app/installer that does it for you safely, but dropping in a compatible DLL or a similar technique would be okay too. Or are we stuck with this terrible control forever?

    Read the article

  • how to design a network for connectivity between private and corporate LANs?

    - by maruti
    There is a bunch of servers connected to shared storage on a private LAN (10.x.x.x), and this private LAN is managed by a Windows server (DHCP, DNS and directory services). These hosts need to be reachable from outside the datacenter, e.g. via Remote Desktop. Can NIC2 on each of the hosts be connected to the other, public LAN without compromising speed or security? What are the important considerations: additional hardware, like switches? Routing and DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, planned for the storage network. Software: Windows 2003 Server for DHCP, DNS, and A/D. Would it be more flexible to use Linux distributions like IPCop, Untangle, etc.? All I am looking for is good isolation between the private and other networks, avoiding DHCP, DNS, and AD clashes.

    Read the article

  • Is ASP.NET caching my SQL results?

    - by Christian W
    I have the following method in an App_Code/Globals.cs file:

        public static XmlDataSource getXmlSourceFromOrgid(int orgid)
        {
            XmlDataSource xds = new XmlDataSource();
            var ctx = new SensusDataContext();
            SqlConnection c = new SqlConnection(ctx.Connection.ConnectionString);
            c.Open();
            SqlCommand cmd = new SqlCommand(
                String.Format("select orgid, tekst, dbo.GetOrgTreeXML({0}) as Subtree from tblOrg where OrgID = {0}", orgid), c);
            var rdr = cmd.ExecuteReader();
            rdr.Read();
            StringBuilder sb = new StringBuilder();
            sb.AppendFormat("<node orgid=\"{0}\" tekst=\"{1}\">", rdr.GetInt32(0), rdr.GetString(1));
            sb.Append(rdr.GetString(2));
            sb.Append("</node>");
            xds.Data = sb.ToString();
            xds.ID = "treedata";
            rdr.Close();
            c.Close();
            return xds;
        }

    This gives me an XML structure to use with the ASP.NET TreeView control (I also use the CssFriendly adapter to get nicer markup). My problem is that if I log on on my PC with a code that gives me access at a lower level in the tree hierarchy (it's an organization hierarchy), it somehow "remembers" what level I logged on at. So when my coworker tests from her computer with another code, giving access to another place in the tree, she gets the same tree as me. (The tree is supposed to show your own level and down.) I have added an HTML comment to show what orgid is passed to the function, and the orgid passed is correct. So either the TreeView caches something server-side, or the SQL query caches its result somehow... Any ideas?

    SQL function:

        ALTER function [dbo].[GetOrgTreeXML](@orgid int)
        returns XML
        begin
            RETURN (select org.orgid as '@orgid',
                           org.tekst as '@tekst',
                           [dbo].GetOrgTreeXML(org.orgid)
                    from tblOrg org
                    where (@orgid is null and Eier is null) or Eier = @orgid
                    for XML PATH('NODE'), TYPE)
        end

    Extra code as requested:

        int orgid = int.Parse(Session["org"].ToString());
        string orgname = context.Orgs.Where(q => q.OrgID == orgid).First().Tekst;
        debuglit.Text = String.Format("<!-- Id: {0} \n name: {1} -->", orgid, orgname);
        var orgxml = Globals.getXmlSourceFromOrgid(orgid);
        tvNavtree.DataSource = orgxml;
        tvNavtree.DataBind();

    Where "debuglit" is an asp:Literal in the aspx file. EDIT: I have narrowed it down. All functions return correct values; it just doesn't bind to them. I suspected the CssFriendly adapter, but I disabled it and the problem persists... Stepping through in debug, everything is correct all the way; with the debugger standing on "tvNavtree.DataBind();" I can hover over tvNavtree.DataSource and see that it actually holds the correct data. So something must be faulty in the binding process...
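    One thing worth checking here, offered as a hypothesis rather than a diagnosis: XmlDataSource has EnableCaching set to true by default, and its cache entry is keyed partly off the control's ID, which this method hard-codes to "treedata" for every caller, so the first user's XML can end up served to everyone. Setting xds.EnableCaching = false (or giving each orgid a distinct CacheKeyDependency) before returning would rule that out. Independently of the caching question, the String.Format SQL is worth replacing with a parameterized query.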

    Read the article

  • How come transparency in textures appears as white in 3ds Max?

    - by rFactor
    I have downloaded a free palm tree model that came with textures and a preview image. In the preview image the tree looks fine, but when I apply the textures in my scene, the leaves look green plus white, where white is the transparency area. Is there something I need to know about transparent textures? Both in the viewport and in the renderer, all transparency appears as white. What could it be? Edit: The model I was talking about uses two JPGs: one is the texture and the other is black-and-white, where white represents transparency, applied to the material in the opacity channel/map. The transparency works somewhat, but there are white borders around the leaves. I think the opacity channel does not properly filter out all white colors for some reason. Changing the blur also affects it, but setting it to 0 does not remove the borders (and makes them jaggy).

    Read the article

  • Is there any two-panel bookmarks manager for Google Chrome?

    - by L. Shaydariv
    Hi all. I'm just wondering: is there any two-panel-interface (like Total Commander, File Manager, etc.) bookmark manager or extension for Google Chrome? Using the default bookmark manager is not so suitable for two reasons: 1) I've gathered a very large collection of bookmarks (please don't ask why); 2) the bookmarks hierarchy tree always expands its branches when the bookmark manager is opened, which makes moving bookmarks through the tree much, much harder. I tried Link Commander, but it's very, very slow. Any suggestions? Thank you for any advice.

    Read the article

  • How to set up a Git repo on a server with a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository: the idea is that the working dir of the repository will be the root folder for the website, so at the end of each day all changes are visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

        git push origin master
        [email protected]'s password:
        Counting objects: 3, done.
        Writing objects: 100% (3/3), 227 bytes, done.
        Total 3 (delta 0), reused 0 (delta 0)
        remote: error: refusing to update checked out branch: refs/heads/master
        remote: error: By default, updating the current branch in a non-bare repository
        remote: error: is denied, because it will make the index and work tree inconsistent
        remote: error: with what you pushed, and will require 'git reset --hard' to match
        remote: error: the work tree to HEAD.
        remote: error:
        remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
        remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
        remote: error: its current branch; however, this is not recommended unless you
        remote: error: arranged to update its work tree to match what you pushed in some
        remote: error: other way.
        remote: error:
        remote: error: To squelch this message and still keep the default behaviour, set
        remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To ssh://[email protected]/home/orangetux/www/
         ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering: is this the right way to set up a Git repo for a website? If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website? EDIT: "you can't push to a non-bare repository" -- okay, clear. But what's the way to solve my problem? Create a bare repository on the server and keep a clone of it in the htdocs folder? That looks a bit clumsy to me: to see the result of a commit I'd have to clone the repository each time.
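    The usual pattern for this, sketched under the assumption that the web root is /home/orangetux/www and a bare repo can live next to it: push to a bare repository and let its post-receive hook check the pushed branch out into the web root, so no one ever pushes into a checked-out branch:

        # on the server: a bare repo beside the web root
        git init --bare /home/orangetux/site.git

        # contents of /home/orangetux/site.git/hooks/post-receive
        # (mark the hook executable with chmod +x):
        #!/bin/sh
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master

        # each client then points origin at the bare repo:
        git remote set-url origin ssh://[email protected]/home/orangetux/site.git
        git push origin master

    This gives the "changes visible at the end of the day" behaviour without cloning anything into htdocs.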

    Read the article

  • 389 DS Architecture for Multiple Sites

    - by Kyle Flavin
    I'm looking to deploy 389 Directory Server in my environment to replace an existing iPlanet installation. I would be using it primarily to store user account data for authentication purposes. I have two physically separate data centers that I would like to share the same directory tree. My initial thinking is to set up 389 DS as follows:

        - a master/consumer pair in data center A
        - a master/consumer pair in data center B
        - a replication agreement between both masters, to mirror the directory tree in both environments

    Does this sound like a reasonable approach? Is there a better way to do it (e.g., four masters)? Is there documentation of best practices for setting up 389 DS in situations such as this? Thanks.
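    For reference, a replication agreement in 389 DS is just an LDIF entry under the replica in cn=config; a sketch of one direction of the A-to-B agreement, where the suffix, hostname, bind DN, and password are all placeholders:

        dn: cn=agreement-A-to-B,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
        objectClass: top
        objectClass: nsds5replicationagreement
        cn: agreement-A-to-B
        nsDS5ReplicaHost: master-b.example.com
        nsDS5ReplicaPort: 389
        nsDS5ReplicaBindDN: cn=replication manager,cn=config
        nsDS5ReplicaCredentials: secret
        nsDS5ReplicaBindMethod: SIMPLE
        nsDS5ReplicaRoot: dc=example,dc=com

    Each master also needs its own nsDS5Replica entry with a unique nsDS5ReplicaId before agreements like this will start; the mirror-image B-to-A agreement makes the pair multi-master.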

    Read the article

  • window.open() on an iPad on load of a frame does not work

    - by user278859
    I am trying to modify a site that uses "Morten's JavaScript Tree Menu" to display PDFs in a frameset using the Adobe Reader plug-in. On the iPad the frame is useless, so I want to open the PDF in a new tab. Not wanting to mess with the tree menu, I thought I could use JavaScript in the web page being opened in the viewer frame to open a new tab with the PDF. I am using window.open() in $(document).ready() to open the PDF in the new tab. The problem is that window.open() does not want to work on the iPad. The body of the HTML normally looks like this:

        <body>
            <object data="MypdfFileName.pdf#toolbar=1&amp;navpanes=1&amp;scrollbar=0&amp;page=1&amp;view=FitH"
                    type="application/pdf" width="100%" height="100%">
            </object>
        </body>

    I changed it to only have a div, like this:

        <body>
            <div class="myviewer"></div>
        </body>

    Then I used the following script:

        $(document).ready(function() {
            var isMobile = {
                Android: function() {
                    return navigator.userAgent.match(/Android/i) ? true : false;
                },
                BlackBerry: function() {
                    return navigator.userAgent.match(/BlackBerry/i) ? true : false;
                },
                iOS: function() {
                    return navigator.userAgent.match(/iPhone|iPad|iPod/i) ? true : false;
                },
                Windows: function() {
                    return navigator.userAgent.match(/IEMobile/i) ? true : false;
                },
                any: function() {
                    return (isMobile.Android() || isMobile.BlackBerry() ||
                            isMobile.iOS() || isMobile.Windows());
                }
            };

            if (isMobile.any()) {
                var file = "MypdfFileName.pdf";
                window.open(file);
            } else {
                var markup = "<object data='MypdfFileName.pdf#toolbar=1&amp;navpanes=1&amp;scrollbar=0&amp;page=1&amp;view=FitH' type='application/pdf' width='100%' height='100%'></object>";
                $('.myviewer').append(markup);
            }
        });

    Everything works except for window.open() on the iPad. If I switch things around, window.open() works fine on a computer. In another project I am using window.open() successfully on the iPad from an onclick function. I tried using a timer function. I also tried adding an onclick function to the div and posting a click event. In both cases they worked on a computer but not on the iPad. I am stumped. I know it would make more sense to handle the iPad in the tree menu frame, but that code is so complex I can't figure out where to put/modify the onclick event. Is there a way to change the object so that it opens in a new tab? Is anyone familiar enough with Morten's Tree Menu code to tell me how to change the onclick event so that it opens the PDF in a new tab instead of opening a page in the frame? Thanks
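    For what it's worth, a likely explanation rather than a confirmed one: Mobile Safari only honors window.open() when it is called synchronously from a real user gesture (a tap handler), which is consistent with everything described above: the onclick project works, while $(document).ready(), timers, and synthetically posted click events are all treated as popups and blocked. The usual workarounds are to navigate the current view with window.location.href = file instead of opening a tab, or to have the actual tap in the tree menu trigger the open.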

    Read the article

  • Splitting an archive on multiple media

    - by Robert Munteanu
    I'm generating archives which are larger than my current physical media (DVD). I'd like to split those archives: automatically, instead of generating mini-archives by hand, and consistently, so that each archive can be extracted independently of the others. For instance, a 24 GB tree that compresses to 10 GB would give me 3 archives, all of them under 4.7 GB and each extractable without the other two. I'm using dirvish, so I'm archiving a filesystem tree. Update: I'm using Linux.
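    If the "independently extractable" requirement can be relaxed, the simplest sketch is one streamed archive piped through split, sized for single-layer DVDs (names and paths are placeholders); truly self-contained per-volume archives need a slice-aware archiver such as dar instead:

        # split one big archive into DVD-sized pieces
        tar czf - /path/to/tree | split -b 4300m - backup.tar.gz.part-

        # reassemble before extracting
        cat backup.tar.gz.part-* | tar xzf -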

    Read the article

  • how to compare files/directories of 2 separate Solaris boxes?

    - by chz
    Hi friends, I have 2 Solaris boxes and I need to check certain directories (on the local filesystem and mounted NFS) to make sure they match up on both boxes, and to delete or move the mismatches elsewhere on the local filesystem. I investigated Unix commands like rsync and tree, but it appears these commands are not supported on my Solaris boxes. What is the best approach to this problem: use rsync or tree and then diff the outputs, or find? I have trouble limiting the find command to certain directories, as there are mounted folders containing too many XML files that I don't care much about. What's the find command to search multiple directory paths in a single invocation? Thanks. Sincerely
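    One portable approach on stock Solaris, sketched with placeholder paths and directory names: build a sorted checksum manifest on each box, copy one across, and diff them. Note that find happily takes several start directories in one invocation, and -prune answers the "skip the noisy XML directory" part:

        cd /export
        find ./data ./conf -name bigxmldir -prune -o -type f -exec cksum {} \; | sort -k 3 > /tmp/boxA.sums
        # repeat on the other box as boxB.sums, copy one file over, then:
        diff /tmp/boxA.sums /tmp/boxB.sums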

    Read the article

  • Why my Firefox displays XML files as blank pages?

    - by n1313
    Every time I open an XML file, all I get is a blank page instead of the tag tree. The file itself is correct and loads okay; I can see it via View Source or in Firebug. I've tried turning off all my add-ons, but the problem was not solved. All other browsers (Chrome, Opera) render the same file as an XML tree. I'm guessing that I've messed up my configuration somehow and Firefox now tries to render XML files as HTML ones. I've tried googling, but with no success. Help, please?
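    Since Chrome and Opera render the file correctly, one quick test, as a sketch, is to rule out profile corruption by starting Firefox with a brand-new profile; if the XML tree renders there, the problem lives in the old profile rather than the installation:

        firefox -ProfileManager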

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively "walking" the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint.  Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

        Parallel.Invoke(
            () => { Console.WriteLine("Action 1 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 2 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 3 executing in thread {0}",
                        Thread.CurrentThread.ManagedThreadId); }
        );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster).  On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );
        }

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We're using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we're "over-parallelizing" this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.
    The second approach is to track how "deep" in the "tree" we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better.  On average, I get the following timings:

        Framework via Array.Sort: 7.3 seconds
        Serial Quicksort Implementation: 9.3 seconds
        Naive Parallel Implementation: 14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.

    Read the article

  • Steve Miranda is the Next Guest on The Bill Kutik Radio Show®

    - by Jay Richey, HCM Product Marketing
    Be sure to catch Steve Miranda, Senior Vice President for Oracle Fusion Development, tomorrow on The Bill Kutik Radio Show®.  Bill will be asking the tough questions once again and Steve will be answering.  It is sure to be a lively discussion, with more details on Fusion and Oracle's co-existence strategy with the PeopleSoft, E-Business Suite, and JD Edwards HCM applications.  Wednesday, March 28, at noon ET, 9 a.m. PT.  Listen live, catch the replay afterward, or download it from iTunes. http://www.knowledgeinfusion.com/ondemand/docs/DOC-9903 Produced by Knowledge Infusion and hosted by independent industry analyst Bill Kutik, the bi-weekly interview show provides leading HR business content and insight into up-to-the-minute trends.

    Read the article

  • Learning to Grow

    - by jack.flynn
    A Conversation with Ted Simpson of HEUG A great place to revisit Oracle OpenWorld year round is OracleWebVideo on YouTube. Oracle Magazine Senior Editor Jeff Erickson sat down with Ted Simpson at last year's Oracle OpenWorld to find out how the Higher Education Users Group (HEUG) is helping hundreds of member institutions and thousands of individuals across the globe meet the technological challenges in colleges and universities. Simpson joined HEUG back when it was a PeopleSoft special interest group. Now that higher education institutions have expanded into IT infrastructures the size of global corporations or small municipalities, his user group has also been challenged by growth.

    Read the article
