Search Results

Search found 14131 results on 566 pages for 'note'.


  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: From the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: Edit the script. This opens a new Visual Studio instance with a project in it. In this project, add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Sample code is shown below:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";
        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();
        String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));
        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));
        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project: To help you get started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection; you can download a free trial here. Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.
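    The sample above assumes the download directory already exists and that every download succeeds. As a rough, non-authoritative variant - reusing only the SharePointConnection/SharePointCommand API shown above plus standard System.IO calls - you could guard both cases inside 'Input0_ProcessInputRow':

        // Sketch only: the same DownloadDocument call, wrapped with a directory
        // check and basic error handling. Assumes the 'conn' object created above.
        String downloadDir = "C:\\Documents\\";
        if (!System.IO.Directory.Exists(downloadDir))
        {
            System.IO.Directory.CreateDirectory(downloadDir);
        }
        try
        {
            SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
            comm.CommandType = CommandType.StoredProcedure;
            comm.Parameters.Add(new SharePointParameter("@File", downloadDir + Row.LinkFilenameNoMenu.ToString()));
            comm.Parameters.Add(new SharePointParameter("@Library", Row.ServerUrl.ToString().Split('/')[1]));
            comm.Parameters.Add(new SharePointParameter("@RemoteFile", Row.LinkFilenameNoMenu.ToString()));
            comm.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            // Report the failure through the SSIS log instead of failing silently.
            ComponentMetaData.FireWarning(0, "DownloadDocument", ex.Message, String.Empty, 0);
        }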

    Read the article

  • external USB HD doesn't show up on 'sudo fdisk -l' on one Ubuntu, shows up on another (both 'precise')

    - by Menelaos
    I have a 1.5TB WD external USB HDD and two Ubuntu systems, both 'precise'. When I plug the disk into system A it doesn't show up in the output of sudo fdisk -l, nor is it automatically mounted (note that this wasn't always the case - it used to appear in the past). When I plug it into system B (again Ubuntu precise) it shows up when I do a sudo fdisk -l (output appended at the end) and mounts automatically just fine. What does this discrepancy point to, and what kind of diagnostics should I run / tools should I use to troubleshoot the problem?

    I followed the suggestion I received to do a sudo tail -f /var/log/syslog, and the output is the following when I plug, unplug and replug the USB cable on system A:

        Sep 14 23:27:09 thorin mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:27:09 thorin mtp-probe: bus: 2, device: 3 was not an MTP device
        Sep 14 23:28:01 thorin kernel: [  338.994295] usb 2-2: USB disconnect, device number 3
        Sep 14 23:28:04 thorin kernel: [  341.808139] usb 2-2: new high-speed USB device number 4 using ehci_hcd
        Sep 14 23:28:04 thorin mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:28:04 thorin mtp-probe: bus: 2, device: 4 was not an MTP device
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting due to inactivity
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting was requested

    (I guess the last two messages are irrelevant.)

    Output of sudo fdisk -l on system B:

        $ sudo fdisk -l

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00070db4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048  2905114623  1452556288   83  Linux
        /dev/sda2      2905116670  2930276351    12579841    5  Extended
        /dev/sda5      2905116672  2930276351    12579840   82  Linux swap / Solaris

        Disk /dev/sdb: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9c849c84

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   488375999   244187968+   7  HPFS/NTFS/exFAT

        Disk /dev/sdg: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003e17f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdg1            2048  2930272255  1465135104    7  HPFS/NTFS/exFAT

    Read the article

  • Loading another domain's content in a modal iframe - acceptable?

    - by user568458
    Is it okay to load another page in an iframe in a modal pop-up window - in terms of the legal and ethical standards around displaying 3rd party content?

    I remember a few years ago there was controversy and a debate about whether it was okay to load another domain's page content on your domain in a full-width iframe, with your site providing a masthead with controls for favouriting, linking etc. (e.g. like StumbleUpon). I seem to recall that the consensus was that it was okay so long as you were clearly in no way claiming ownership of the 3rd party content or attempting to modify it, and so long as there was a 'go to site' button or equivalent; sites could ask you to exclude them, but generally speaking it was considered an acceptable practice.

    How acceptable would it be to load another site's page within a modal (lightbox-like) popup box, following all the above principles: clear attribution and a prominent button that kills the iframe and gives the user the 3rd party original? My expectation would be that it would follow the same principles and be acceptable so long as those conditions were met.

    Note that I'm asking about the likely legitimate responses of the 3rd party sites and the possible legal position, not about usability or UX. I'm aware that this should never ever ever ever ever be the standard way external links are loaded, and that 99% of the time linking to external content like this would be terrible for usability.

    My specific use case is one of those 1% of cases where loading a separate page in this tab actually wouldn't be the expected behaviour of a link: an interactive data visualisation tool that also acts as a 'browser' of external content (the science papers underlying the data it navigates). All other links within the interactive will change something while staying on the same page. If the user clicked one of these external links by mistake (as people often do, even when they are clearly, noisily labelled) and then had to back-button back, they would lose their fine-grained position in the interactive tool (jQuery BBQ hashchanges being not appropriate for all elements of the tool). A new window/tab would simply open the target page on the 3rd party domain.

    Opening a new window/tab is also an alternative option (and has its own disadvantages) - my question is whether the modal iframe is an alternative that could be considered (in terms of acceptable practice around intellectual property etc.), irrespective of which option is best for UX: that is something we'll decide the proper way, based on actual UX testing.

    Read the article

  • Do I need to go to a big-name university?

    - by itaiferber
    As a soon-to-be-graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York), still getting a pretty decent education, and saving myself from over a hundred thousand dollars' worth of debt?

    Yes, I know questions like this have been asked before (namely here and here), but please bear with me, because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know:

    1. Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd easily get rid of over a hundred thousand dollars of debt?

    2. How does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth?

    3. And if I go to SUNY Binghamton, which is far lesser known than the others I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss?

    The answers to these questions all tie into my final college decision (again, assuming I make it into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing).

    Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but I will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • Partner BI Applications 4-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    12th - 15th February 2012, Oracle Reading (UK) - REGISTER NOW

    This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, while also providing an understanding of their installation, configuration and extension. The course will cover the following topics:

      • Overview of Oracle Business Intelligence Applications
      • Oracle BI Applications Fundamentals and Features
      • Configuring BI Applications for Oracle E-Business Suite
      • Understanding BI Applications Architecture
      • Fundamentals of BI Applications Security

    Prerequisites - this training is only for OPN member Partners:

      • Good understanding of basic data warehousing concepts
      • Hands-on experience in Oracle Business Intelligence Enterprise Edition
      • Hands-on experience in Informatica
      • Good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivable, Accounts Payable
      • Some understanding of Oracle BI Applications (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM)

    Please note that attendees are required to bring a laptop meeting these requirements:

      • 4GB RAM, recognized by 64-bit Windows
      • 80GB free space on the hard drive or an external device
      • Core 2 Duo CPU or higher
      • Operating system: Windows 7, Windows XP or Windows 2003 (Windows Vista is not allowed)
      • An administrator user account

    Read the article

  • Small script to look for Project Replication actions that have failed

    - by Trond Strømme
    Today when looking at a couple of projects on a ZFS 7320 Storage Appliance, I noticed on one project that one of its replication actions had failed; as I hadn't checked the Recent Alerts log yet, I was not aware of this. I decided to write a small script to check if there were others that had failed. Nothing fancy - just a loop through all projects that looks at each project's replication child, compares the values of the last_sync and last_try properties, and prints the result if they're not equal. (There are probably more sensible ways of doing this, but at least it involves me getting the chance to put on my headphones and do just a little bit of coding.)

    The script:

        // This script will locate failed project-level replication.
        // It will look at the sync times for 'last_sync' and 'last_try'
        // and compare these; if they deviate you should investigate.
        // NOTE! This code is offered 'as is'. Run at your own risk;
        // it will probably work as intended, but in no way can I
        // (or Oracle) be held responsible if your server starts behaving
        // like a three year old kid in a candy store.. (not that mine do,
        // they are very well behaved boys...)
        run('configuration');
        run('storage');
        printf('Host: %s, pool: %s\n', get('owner'), get('pool'));
        run('cd /');
        run('shares');
        proj = list();
        printf("total projects: %d\n", proj.length);
        // just for project level replication
        for (i = 0; i < proj.length; i++) {
            run('select ' + proj[i]);
            run('replication');
            // get all replication actions
            preps = list();
            for (j = 0; j < preps.length; j++) {
                run('select ' + preps[j]);
                last_sync = get('last_sync');
                last_try = get('last_try');
                // printf("target %s\n", get('target')); // why the flip does this not get the proper name?
                if (!(last_sync.valueOf() === last_try.valueOf())) {
                    printf("sync has failed for %s %s\n", proj[i], get('target'));
                } else {
                    // printf("OK %s %s\n", proj[i], get('target'));
                }
                run('done'); // done with the replication action
            }
            run('done');
            run('done');
        }
        printf("finished\n");

    For more on how to run the script, or testing it, please look at my previous post. Sample output:

        Host: elb1sn01, pool: exalogic
        total projects: 45
        sync has failed for ACSExalogicSystem cb3a24fe-ad60-c90f-d15d-adaafd595639
        finished

    Read the article

  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching (ReIM).

    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)

    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts that apply to the application as a whole, rather than to a specific area. This white paper covers numerous topics around item configuration, including:

      • Item Transaction Levels
      • Item Long Description
      • Pack Size
      • Standard Unit of Measure
      • Standard Unit of Measure Conversion
      • Pack Items
      • Simple Pack Conversion Items (Notional Packs)
      • Ranging Items
      • Item Status
      • Non-Sellable Items
      • Type-2 Item Recognition
      • UPC-E Barcodes
      • Non-Inventory Items
      • Consignment and Concession Items
      • Quick Response Codes

    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)

    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and credit of each financial transaction belonging to each party involved. Each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide.

    Certain ReIM transactions are not interfaced to Oracle Payables, but instead are interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type, Standard/Credit, as well as the sign in the amount field. Technically, a negative sign on a credit transaction changes the transaction to a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions, and avoids a discussion of negative numbers in T-charts.

    Read the article

  • Why is my arrow texture being drawn in odd places?

    - by tyjkenn
    This is a script I wrote that places an arrow on the screen pointing to an enemy off-screen or, if the enemy is on-screen, places an arrow hovering above the enemy. Everything seems to work, except that for some odd reason I see random arrows floating around, often skewed and resized (which I really don't understand, because I only rotate and place in this script). Even when I have only one enemy in the scene, I still see these random arrows; it should only be drawing one per enemy. Note: when all enemies are removed, no arrows appear.

        var arrow : Texture;
        var cam : Camera;
        var dim : int = 30;

        function OnGUI() {
            var objects = GameObject.FindGameObjectsWithTag("Enemy");
            for (var ob : GameObject in objects) {
                var pos = cam.WorldToViewportPoint(ob.transform.position);
                if (gameObject.GetComponent(FollowCamera).target != null) {
                    var tar = gameObject.GetComponent(FollowCamera).target.parent;
                }
                if (pos.z > 1 && ob.transform != tar) {
                    var xDiff = (pos.x * cam.pixelWidth) - (cam.pixelWidth / 2);
                    var yDiff = (pos.y * cam.pixelHeight) - (cam.pixelHeight / 2);
                    var angle = Mathf.Rad2Deg * Mathf.Atan(yDiff / xDiff) + 180;
                    if (xDiff > 0) angle += 180;
                    var dist = Mathf.Sqrt(xDiff * xDiff + yDiff * yDiff);
                    var slope = yDiff / xDiff;
                    var camSlope = cam.pixelHeight / cam.pixelWidth;
                    var theX = -1000.0;
                    var theY = -1000.0;
                    var mult = 0;
                    var temp;
                    if (Mathf.Abs(xDiff) > (cam.pixelWidth / 2) || Mathf.Abs(yDiff) > (cam.pixelHeight / 2)) {
                        // touching right
                        if (slope < camSlope && slope > -camSlope) {
                            if (xDiff > (cam.pixelWidth / 2)) {
                                theX = cam.pixelWidth - (dim / 2);
                                mult = -1;
                            } else if (xDiff < -(cam.pixelWidth / 2)) {
                                theX = (dim / 2);
                                mult = 1;
                            }
                            temp = ((cam.pixelWidth / 2) * yDiff) / xDiff;
                            theY = (cam.pixelHeight / 2) + (mult * temp);
                        } else {
                            if (yDiff > (cam.pixelHeight / 2)) {
                                theY = (dim / 2);
                                mult = 1;
                            } else if (yDiff < -(cam.pixelHeight / 2)) {
                                theY = cam.pixelHeight - (dim / 2);
                                mult = -1;
                            }
                            temp = ((cam.pixelHeight / 2) * xDiff) / yDiff;
                            theX = (cam.pixelWidth / 2) + (mult * temp);
                        }
                    } else {
                        angle = -90;
                        theX = (cam.pixelWidth / 2) + xDiff;
                        theY = (cam.pixelHeight / 2) - yDiff - dim;
                    }
                    GUIUtility.RotateAroundPivot(-angle, Vector2(theX, theY));
                    Graphics.DrawTexture(Rect(theX - (dim / 2), theY - (dim / 2), dim, dim), arrow, null);
                    GUIUtility.RotateAroundPivot(angle, Vector2(theX, theY));
                }
            }
        }

    Read the article

  • Ask the Readers: Which Browser is a Must-Have for You on Linux?

    - by Asian Angel
    Linux systems all come with their own particular set of default browsers, but those browsers may not be the ones you want or need. This week we would like to know which browser (or browsers) are considered “must-have” on your Linux systems.

    As a general rule, many Linux distributions have Firefox and/or Konqueror among the default installation browsers. During this past year the open source browser Chromium has also been gaining a lot of traction as a default install for systems. For most people these browsers are the ones that they like best or feel work well enough to not make any changes. But there are other people who want more than what is available with a default system install. They may favor a particular browser for its extensibility or speed; others prefer a particular browser for its features or minimalist UI. Whatever your preferences may be, there is a browser out there to fit your style. Some people may even prefer to run only bleeding-edge nightly releases, or add them in alongside their current browsers. The important part is that you have choices when it comes to your Linux system.

    What we would like to know this week is which browser or browsers you make sure are always installed on your Linux systems. Does the Linux system you use already have your favorite browser installed as part of the default set? Maybe you are content with using the default set of browsers that come with the system. Or perhaps you prefer to rework the entire browser setup on your system by removing the defaults and adding your favorites. Let us know which browsers you consider “must have” and why in the comments!

    Note: You can make up to two selections on today’s poll, since most people will likely have more than one browser that they make certain is always installed.

    Read the article

  • Uniform not being applied to proper mesh

    - by HaMMeReD
    OK, I have some code where you select blocks on a grid. The selection works: I can modify the blocks to be raised when selected, and the correct one shows. I set a color which I use in the shader. However, I am trying to change the color before rendering the geometry, and the last rendered geometry (in the sequence) is rendered light. To debug the logic I decided to move the selected block up and make it white, in which case one block moves up and another block becomes white. I checked all my logic: it knows the correct one is selected, it is showing in the correct place, and it is rendering correctly. When there is only one it works properly. A video of the bug in action shows how the highlighted and elevated blocks are not the same block. The code for the color and my renderer (for the items being drawn) is here:

        public void render(Renderer renderer) {
            mGrid.render(renderer, mGameState);
            for (Entity e : mGameEntities) {
                UnitTypes ut = UnitTypes.valueOf((String) e.getObject(D.UNIT_TYPE.ordinal()));
                if (ut == UnitTypes.Soldier) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.texture_soldier.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    if (mSelectedEntities.contains(e)) {
                        mEntityMatrix.translate(pos.x, 1f, pos.y);
                        renderer.testShader.setUniformf("v_color", 0.5f, 0.5f, 0.5f, 1f);
                    } else {
                        mEntityMatrix.translate(pos.x, 0f, pos.y);
                        renderer.testShader.setUniformf("v_color", 1f, 1f, 1f, 1f);
                    }
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_soldier.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                } else if (ut == UnitTypes.Enemy_Infiltrator) {
                    renderer.testShader.begin();
                    renderer.testShader.setUniformMatrix("u_mvpMatrix", mEntityMatrix);
                    renderer.testShader.setUniformf("v_color", 1.0f, 1, 1, 1.0f);
                    renderer.texture_enemy_infiltrator.bind(0);
                    Vector2 pos = (Vector2) e.getObject(D.COORDS.ordinal());
                    mEntityMatrix.set(renderer.mCamera.combined);
                    mEntityMatrix.translate(pos.x, 0f, pos.y);
                    mEntityMatrix.scale(0.2f, 0.2f, 0.2f);
                    renderer.model_enemy_infiltrator.render(renderer.testShader, GL20.GL_TRIANGLES);
                    renderer.testShader.end();
                }
            }
        }

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background

    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for import of CPAs when large numbers of CPAs - in excess of several hundred - are required to be maintained within the B2B repository.

    Symptoms

    The import of a CPA is usually a 2-step process: creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can go up to ~30 secs per operation. This can add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited downtime/maintenance window.

    Remedy

    In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this is done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.

    Results

    Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the individual import of each entry takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment first, the import takes about ~5 mins. The total processing time for the loading of metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary

    The following diagram summarizes the entire approach and process.

    Acknowledgements

    The material posted here has been compiled with the help of the B2B Engineering and Product Management teams.

    Read the article

  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated, but not infinite: the track for one level fits in memory with no problem and contains a reasonably small number of triangles. For collisions, I would like to use the Bullet physics engine, and I want to know the best way to handle collisions with the track efficiently.

    NOTE: The track will be stored as a static rigid body (mass = 0). The player will be represented by a sphere shape for collisions.

    Here are some possibilities I have in mind (a sketch of option 4's bookkeeping follows the list):

    1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g. 30,000 triangles).

    2. Split the track into several sections (e.g. 10 sections). Then, for each section, create a rigid body and put the corresponding triangles in it. Result: a small number of bodies, each with a relatively small number of triangles (e.g. 1,500 triangles per section).

    3. Split the track into many sub-sections (e.g. 1,200 sections), where one subsection = a very small step when generating the curve. Again, for each sub-section, create a body and put its triangles in it. Result: many bodies, each with a very small number of triangles (e.g. 20 triangles). Advantage: it would be possible to attach extra data to each subsection that could be used when handling collisions.

    4. Same as 2, but only put sections N and N+1 into the physics engine (where N = the current section the player is in). When the player reaches section N+1, unload section N and load section N+2, and so on. Issue: harder to implement, and there are problems if the player suddenly "jumps" from one section to another (e.g. the player flies off section N and falls onto section N+4 underneath: no collision is handled and the player falls into the void).

    5. Same as 4, but with many sub-sections. Issues: since subsections are very small, new bodies would constantly be added to and removed from the physics engine at runtime, and the chance of the player accidentally skipping sections and falling into the void is higher than in 4.
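    To make option 4 concrete, here is a rough sketch of the section-windowing bookkeeping, independent of Bullet's API. It is written in C# purely for illustration; LoadSection/UnloadSection are hypothetical helpers that would create and destroy the static rigid body for one section:

        using System;
        using System.Collections.Generic;

        // Keeps only the sections around the player loaded in the physics world.
        sealed class TrackSectionManager
        {
            private readonly HashSet<int> loaded = new HashSet<int>();
            private const int Window = 1; // sections kept on each side of the player

            public void Update(int playerSection, int sectionCount)
            {
                int first = Math.Max(0, playerSection - Window);
                int last = Math.Min(sectionCount - 1, playerSection + Window);

                // Drop sections that fell out of the window...
                loaded.RemoveWhere(s =>
                {
                    if (s < first || s > last) { UnloadSection(s); return true; }
                    return false;
                });

                // ...and bring in the ones that entered it.
                for (int s = first; s <= last; s++)
                    if (loaded.Add(s)) LoadSection(s);
            }

            private void LoadSection(int s) { /* add the section's static body */ }
            private void UnloadSection(int s) { /* remove its body from the world */ }
        }

    Computing playerSection from the player's actual position every frame (rather than from crossing boundaries in order) may also mitigate the "jump" case described above, since sections are loaded around wherever the player actually is.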

    Read the article

  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog - particularly his article on dynamic search conditions in T-SQL. I've linked above to his SQL 2005 article, but his 2008 version is also a must-read. I seem to regularly come across uses of the SQL in the title above; Erland's article explains in detail why this is inefficient, but I came across a nice example recently.

    A stored procedure contained the following code:

        WHERE @Name IS NULL OR [Name] LIKE @Name

    As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO:

        Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Note the high number of logical reads. After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the @Name IS NULL test was spurious. So, we changed the call to:

        WHERE [Name] LIKE @Name

    Now, how much more efficient is this code?

        Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0

    A nice easy win in this case: a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks - here's a quick query to demonstrate:

        USE AdventureWorks
        SET STATISTICS IO ON

        DECLARE @id INT = 51721

        SELECT * FROM Sales.SalesOrderDetail WHERE @id IS NULL OR salesorderid = @id
        SELECT * FROM Sales.SalesOrderDetail WHERE salesorderid = @id

    Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland's advice - here's how you could get similar performance if it were possible that @id could actually sometimes contain NULL:

        DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000)
        DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct

        SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1'
        IF @id IS NOT NULL
            SET @sql = @sql + ' AND salesorderid = @id'
        IF @id IS NULL
            SET @sql = @sql + ' AND salesorderid IS NULL'

        SET @parameterlist = '@id INT'
        EXEC sp_executesql @sql, @parameterlist, @id

    Sometimes I think we focus too much on hardware and SQL Server configuration - when really the answer is to focus on writing efficient SQL.
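    The same trick carries over to application code: when building the filter in C#, add the predicate only when a value is actually supplied, and keep the value parameterized. A rough sketch - an illustration assuming the same table, not part of the original post:

        // Build the WHERE clause conditionally; the value itself always
        // travels as a parameter, never concatenated into the SQL string.
        using System.Data.SqlClient;

        static class OrderQueries
        {
            public static SqlCommand Build(SqlConnection conn, int? id)
            {
                var sql = "SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1";
                var cmd = new SqlCommand { Connection = conn };
                if (id.HasValue)
                {
                    sql += " AND salesorderid = @id";
                    cmd.Parameters.AddWithValue("@id", id.Value);
                }
                cmd.CommandText = sql;
                return cmd;
            }
        }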

    Read the article

  • Update in Certification Exam Score Report Access Process!

    - by Richard Lefebvre
    Please note that exam results for all Oracle Certification exams will be accessed through CertView starting October 30th, 2012. Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams - proctored and non-proctored, as well as beta exams.

    CertView, Oracle's self-service certification portal, will be the partners' one-stop source for all their certification and exam history! Other benefits of this change include: driving all candidates to have an Oracle Web Account, which will lead to tighter integration with Oracle University records in the future; increased security around data privacy; and a higher validity rate for candidate email addresses. Existing benefits of CertView include self-service access to exam and certification records and logos, and access to Oracle's self-service certification verification.

    Accessing Exam Results

    Returning CertView Users
      • Click the link in the email sent by Oracle or go to certview.oracle.com
      • Select the See My New Exam Results Now link to view exam results
      • Select the Print My New Exam Results Now link to print exam results

    New CertView Users - Who Have An Oracle Web Account
      • First-time users must authenticate their CertView account
      • Account authentication requires the Oracle Testing ID and email address from your Pearson VUE profile
      • Click the link in the email sent by Oracle, or go to certview.oracle.com and follow the Authenticate My CertView Account link

    New CertView Users - Who Do Not Have An Oracle Web Account
      • CertView users are required to have an Oracle Web Account
      • To create an Oracle Web Account, go to certview.oracle.com and select the Create My Oracle Web Account Now link. Then follow the remaining instructions under I do not have an Oracle Web Account on that page.

    Read the article

  • How to set the initial component focus

    - by frank.nimphius
    In ADF Faces, you use the af:document tag's initialFocusId attribute to define the initial component focus. For this, specify the id property value of the component that you want to put the initial focus on. Identifiers are relative to the component, and must account for NamingContainers. You can use a single colon to start the search from the root, or multiple colons to move up through the NamingContainers - "::" will pop out of the component's naming container and begin the search from there, and ":::" will pop out of two naming containers and begin the search from there. Alternatively, you can add the naming container IDs as a prefix to the component id, e.g. nc1:nc2:comp1.

    http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_document.html

    To set the initial focus to a component located in a page fragment that is exposed through an ADF region, keep in mind that the ADF Faces region - af:region - is a naming container too. To address an input text field with the id "it1" in an ADF region exposed by an af:region tag with the id "r1", you use the following reference in af:document:

        <af:document id="d1" initialFocusId="r1:0:it1">

    Note the "0" index in the client id. Also, make sure the input text component has its clientComponent property set to true, as otherwise no client component exists to put focus on.

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I have been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the differently named components of the pixel and vertex shaders were swapping the data being input.

    The vertex shader:

        void QuadVertex(
            inout float4 position : SV_Position,
            inout float4 color    : COLOR0,
            inout float2 tex      : TEXCOORD0)
        {
            // ViewProjection is a 4x4 matrix, just included here
            // to show the simple passthrough of the data
            position = mul(position, ViewProjection);
        }

    And a pixel shader:

        float4 QuadPixel(
            float4 color : COLOR0,
            float2 tex   : TEXCOORD0) : SV_Target0
        {
            // color is filled with position data and tex is
            // filled with color values from the vertex shader
            return color;
        }

    The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data:

        data[0].Position.x = 0.0f * 210;
        data[0].Position.y = 1.0f * 160;
        data[0].Position.z = 0.0f;
        data[1].Position.x = 0.0f * 210;
        data[1].Position.y = 0.0f * 160;
        data[1].Position.z = 0.0f;
        data[2].Position.x = 1.0f * 210;
        data[2].Position.y = 1.0f * 160;
        data[2].Position.z = 0.0f;

        data[0].Colour = Colors::Red;
        data[1].Colour = Colors::Red;
        data[2].Colour = Colors::Red;

        data[0].Texture = Vector2::Zero;
        data[1].Texture = Vector2::Zero;
        data[2].Texture = Vector2::Zero;

    When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format, and be laid out in the exact order of the output from the vertex shader, regardless of the semantics:

        float4 QuadPixel(
            float4 pos   : SV_Position,
            float4 color : COLOR0,
            float2 tex   : TEXCOORD0) : SV_Target0
        {
            return color;
        }

    After finding this out, my question is: Why don't the semantics map the appropriate components when going from vertex shader to pixel shader? Is there any way that I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid shader signature (in this case Position, Color, and Texture)?

    As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from vertex shader to pixel shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on a DX9 (and maybe a little DX10) implementation, and maybe this kind of flexibility no longer exists in DX11?

    Read the article

  • What is a good design model for my new class?

    - by user66662
    I am a beginning programmer who, after trying to manage over 2000 lines of procedural PHP code, has now discovered the value of OOP. I have read a few books to get up to speed on the beginning theory, but would like some advice on practical application.

    So, for example, let's say there are two types of content objects - an ad and a calendar event. What my application does is scan different websites (a predefined list) and, when it finds an ad or an event, extract the data and save it to a database. All of my objects will share a $title and $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Should I have a 'superclass' with the $title and $description, with two other Ad and Event classes with their own properties? The latter is at least the direction I am on now. (A rough sketch of this direction follows below.)

    My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $startDate. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic that determines what site I am on in the constructor of each class. This doesn't feel right. Should I create another class, Algorithms, and store the logic there for each site? Should the functions that handle that logic be in this class, or specific to the classes whose properties they set?

    I want to take into account two things in my design: (1) I will add different content objects in the future that share $title and $description but have their own properties, so I want to be able to grow these easily as needed. (2) I will constantly add more websites (each with its own algorithms for data extraction), so I would like to plan now for managing and working with these efficiently.

    I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there. But this didn't feel right either, as now I have to manage hundreds of little website-specific class files.

    Note: I didn't know if this was the correct site or whether Stack Overflow was the better choice. If so, let me know and I'll post there.
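    To make the 'superclass plus per-site extractor' direction concrete, here is a rough, non-authoritative sketch. It is written in C# rather than PHP purely for illustration, and every name in it is invented for the example:

        using System;

        // Shared base for the common fields; subclasses add their specifics.
        abstract class ContentItem
        {
            public string Title { get; set; }
            public string Description { get; set; }
        }

        class Ad : ContentItem
        {
            public decimal Price { get; set; }
        }

        class CalendarEvent : ContentItem
        {
            public DateTime StartDate { get; set; }
        }

        // One extractor per source website keeps each site's regexes in one
        // place and replaces the giant switch in the constructors.
        interface ISiteExtractor
        {
            bool CanHandle(Uri site);               // which site this extractor owns
            ContentItem Extract(string pageHtml);   // builds an Ad or a CalendarEvent
        }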

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing .NET applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see at a glance which specific assertions pass or fail, without debugging to prove it.

    This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it. (A rough sketch of that syntax follows at the end of this question.)

    So with all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try and remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ and it doesn't impact any other part of our testing system. Doing so would increase my workload significantly overall, and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given that I've found it easier to stick with test-first using BDD.

    So the question really comes down to the following:

    1. What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology?

    2. Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice?

    3. What counter-arguments can you think of that could suggest that my wish to convert the team's efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one.

    NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.
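    For readers who haven't seen it, a StoryQ scenario looks roughly like the sketch below. This is reconstructed from memory of the StoryQ fluent API, so treat the exact method names as approximate rather than authoritative; the Given/When/Then helpers are ordinary methods you define yourself in the test class, and the surrounding test attribute assumes an NUnit-style runner:

        // Approximate StoryQ usage (C#); method names recalled from the
        // StoryQ fluent API and may differ slightly from the real library.
        using System.Reflection;
        using NUnit.Framework;
        using StoryQ;

        [Test]
        public void TransferringFunds()
        {
            new Story("Transferring funds")
                .InOrderTo("move money between my accounts")
                .AsA("account holder")
                .IWant("transfers to update both balances")
                .WithScenario("successful transfer")
                    .Given(AnAccountWithABalanceOf, 100)
                    .When(ITransfer, 40)
                    .Then(TheSourceBalanceShouldBe, 60)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }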

    Read the article

  • How do I know if I am using Scrum methodologies?

    - by Jake
    When I first started at my current job, my task was to rewrite a massive Excel-VBA workbook-application in C# WinForms, because it was thought that the new C# app would fix all existing problems and have all the new features for a perfect world. If it were a direct port, in theory it would be easy, as I would just need to go through all the formulas, conditional formatting, validations, VBA etc. to understand it. However, that was not the case: many of the new features were tightly dependent on business logic I was unfamiliar with. As a solo programmer, the first year was spent solely on deciphering the Excel workbook and writing the C# app. In theory, I had the business people to "help" me specify requirements, how the GUI looks and works, testing of the app, etc.; but in practice it was like a constant tsunami of feature creep.

    At the beginning of the second year I managed to convince management that this was not going anywhere, and I made them start from scratch with the Excel-VBA. I have an "issue log" saved on the network; each time they find something they don't like about the Excel-VBA app, they write it in there. I check the log daily and consolidate issues (in my mind) mainly into 2 groups: (1) requires massive change; (2) can be fixed in the current version.

    For massive-change issues, I make a copy of the latest Excel-VBA, give it a new version number, and work on it whenever I can. For current-version fixes, I make the changes in a few days to a week, and then immediately release them. I also make sure I apply the same change to any in-progress future versions. This has gone on for about 4 months and I feel it works great. I have made many releases and solved many real issues, and I have also understood the business logic more and more.

    However, my boss (non-IT trained) thinks what I am doing is just ad-hoc changes and that I am not looking at the "bigger picture". I am struggling to convince my boss that this works, so I hope to formalise my approach and maybe borrow a buzzword to convince him. Incidentally, I have read about Agile and Scrum, about backlogs and sprints, but it's all still very vague to me.

    QUESTION (finally): I want to tell him that this is Scrum! But I want to hold my breath first and ask: is my current approach considered Scrum, or Scrum-like? How can I make it more Scrum-like? Note that it is only me - there is no project leader or team.

    Read the article

  • Cheese won't start

    - by Anthony Hohenheim
    I can't start Cheese Webcam Booth. It starts loading, and there is a brief moment when the window shows up, but then it disappears, as if it shuts itself down; it doesn't appear in the system monitor either. My webcam works perfectly in a Skype video call. I installed and ran Camorama, and it gave me an error:

        Could not connect to video device (/dev/video0)
        Please check connection

    When I run lsusb I get this line for my webcam:

        Bus 002 Device 002: ID 04f2:b210 Chicony Electronics Co., Ltd

    And for my graphics card, running lspci:

        VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)

    It's not a pressing matter, but it bugs me: if the webcam works in Skype, why do Cheese and other programs refuse to run? Any help would be appreciated. Running Cheese in a terminal gives:

        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkHBox to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkButton, but as a GtkBin subclass a GtkButton can only contain one widget at a time; it already contains a widget of type GtkLabel
        (cheese:11454): Gdk-WARNING **: The program 'cheese' received an X Window System error.
        This probably reflects a bug in the program.
        The error was 'BadDrawable (invalid Pixmap or Window parameter)'.
          (Details: serial 932 error_code 9 request_code 137 minor_code 9)
          (Note to programmers: normally, X errors are reported asynchronously;
           that is, you will receive the error a while after causing it.
           To debug your program, run it with the --sync command line option to
           change this behavior. You can then get a meaningful backtrace from
           your debugger if you break on the gdk_x_error() function.)

    Read the article

  • C# Preprocessor Directives

    - by MarkPearl
    Going back to my old C++ days at university, where all our code was littered with preprocessor directives, I thought they made the code ugly and could never understand why they were useful. Today, though, I found a use in my C# application.

    The scenario: I had made various security levels in my application and tied my XAML to the levels via static accessors set in code. An example of my XAML code for a ComboBox to be enabled would be as follows:

        <ComboBox IsEnabled="{x:Static security:Security.SecurityCanEditDebtor}" />

    And then I would have a static property like this:

        public static bool SecurityCanEditDebtorPostalAddress
        {
            get
            {
                if (Settings.GetInstance().CurrentUser.SecurityCanEditDebtorPostalAddress)
                {
                    return true;
                }
                else
                {
                    return false;
                }
            }
        }

    My only problem was that my XAML did not like the if statement - which meant that while my code worked during runtime, during design time in VS2010 it gave some horrible error like:

        NullReferenceException was thrown on "StaticExtension": Exception has been thrown by the target of an invocation...

    If however my C# property was changed to something like this:

        public static bool SecurityCanEditDebtorPostalAddress
        {
            get { return true; }
        }

    my XAML viewer would be happy. But of course this would bypass my security...

    <Drum Roll> Welcome preprocessor directives... What I wanted was for the design-time experience to totally remove the "if" code, so that my accessor would return true with no if statement, but when I release my project to the big open world, I want the code to have the if statement. With a bit of searching I found the relevant MSDN sample, and my code now looks like this:

        public static bool SecurityCanEditDebtorPostalAddress
        {
            get
            {
        #if DEBUG
                return true;
        #else
                if (Settings.GetInstance().CurrentUser.SecurityCanEditDebtorPostalAddress)
                {
                    return true;
                }
                else
                {
                    return false;
                }
        #endif
            }
        }

    Not the prettiest beast, but it works. Basically what is being said here is: during my debug-mode compile, build only the code between #if and #else; otherwise, build the code in the #else branch. If I want to universally switch everything between the two behaviours, I just go to my project properties -> Build and change the "Debug" flag, as illustrated in the picture below. Also note that you can define your own conditional compilation symbols, and if you even wanted to, you could skip the whole properties page and define them in code using the #define and #undef directives.

    So while I don't like the way the code works, and would like to look more into AOP and compare it to this method, it works for now.
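    As a quick illustration of the #define / #undef route mentioned at the end - a minimal, self-contained sketch, where DEMO_MODE is an arbitrary symbol invented for this example rather than anything from the post above:

        // DEMO_MODE is a made-up symbol for illustration. #define must appear
        // at the top of the file, before any declarations.
        #define DEMO_MODE

        using System;

        class ConditionalSymbolDemo
        {
            static void Main()
            {
        #if DEMO_MODE
                Console.WriteLine("Compiled with DEMO_MODE defined.");
        #else
                Console.WriteLine("Compiled without DEMO_MODE.");
        #endif
            }
        }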

    Read the article

  • IPS Package Groups

    - by Alan_Solaris_RE
    IPS group packages consist solely of dependencies on other packages that make up a logical grouping of software. These are similar to, but not the equivalent of, Solaris 10 metaclusters. The main difference is that metaclusters are nested subsets ranging from a minimal install to nearly all packages on the media; group packages have no such hierarchy. They can overlap other groups, or be completely disjoint sets.

    A group dependency is set this way in an IPS package manifest file:

        depend fmri=full/pkg/name type=group

    Current Solaris Groups

    Solaris currently has 4 system groups defined. These are used for different types of installation and are included in the XML manifest files used by the various Solaris installers:

        Package Name                          Summary                                     Description                                                 Default Installation For
        group/system/solaris-desktop          Oracle Solaris Desktop                      Provides an Oracle Solaris desktop environment              Live Media
        group/system/solaris-large-server     Oracle Solaris Large Server                 Provides an Oracle Solaris large server environment         Text Installer
        group/system/solaris-small-server     Oracle Solaris Small Server                 Provides a useful command-line Oracle Solaris environment   Zones
        group/system/solaris-auto-install     Oracle Solaris Automated Installer Client   Provides an Oracle Solaris Automated Installer client       Automated Installer

    There are also several "feature" groups such as AMP and GNU Developer Tools. These are provided for convenience, but are not used directly by any installers.

    Retrieving Group Package Information

    A listing of all current groups can be found with the command:

        pkg info -r group/*

    A listing of all the packages in a group can be obtained with:

        pkg contents -o fmri -H -rt depend -a type=group groupname

    An example:

        $ pkg contents -o fmri -H -rt depend -a type=group solaris-desktop
        archiver/gnu-tar
        audio/audio-utilities
        codec/flac
        codec/libtheora
        codec/ogg-vorbis
        codec/speex
        communication/im/pidgin
        etc.

    You can determine which package group is currently installed on your system:

        $ pkg list group/system/\*
        NAME (PUBLISHER)                                  VERSION                    IFO
        group/system/solaris-desktop                      0.5.11-0.175.0.0.0.0.0     i--

    Note that there are no version numbers associated with a group package dependency. The package version that best fits the system will be used, based on other dependencies such as what is listed in incorporation files.

    Installing a Group

    To install a group, simply use the group package name as you would any other package:

        $ pkg install solaris-small-server

    If you want to exclude a package from installing, you can use the --reject flag:

        $ pkg install --reject audio/audio-utilities solaris-desktop

    Creating Your Own Group

    To create your own group package, you can follow the pkg(5) documentation on how to create a package, and use this action for each package that is part of your group:

        depend fmri=full/pkg/name type=group
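    To make that concrete, a skeleton manifest for a hypothetical custom group might look like the following - the package name, summary and description are made-up placeholders, and the two dependencies are taken from the solaris-desktop listing above:

        # Hypothetical group package manifest (names are placeholders)
        set name=pkg.fmri value=pkg:/group/site/my-tools@1.0
        set name=pkg.summary value="My Tools"
        set name=pkg.description value="Group package pulling in our standard tool set"
        depend fmri=archiver/gnu-tar type=group
        depend fmri=audio/audio-utilities type=group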

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this:

    1. I create a new branch (with Git, in my case)
    2. I write a few unit tests (with JUnit, for example)
    3. I write my code until it passes my tests

    So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant, so I cannot have my preferred language/platform/toolkit: I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little, so I'm forced to "learn by doing" (actually, programming by "trial and error") and this makes me anxious. Please note that, at some point in the learning process, I usually already have:

    - read one or more five-star books
    - followed one or more web tutorials (writing working code a line at a time)
    - created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans and others as well)

    Despite all my efforts, at this point I usually have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for that small tool. In the same way, I can no longer use an independent small project for my experiments, because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 – Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was what the Cone of Uncertainty looks like in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones – one for the waterfall world and one for the agile world.

    Cone of Uncertainty

    The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF).

    [Figure: Cone of Uncertainty. Taken from the MSDN Library page "Estimating".]

    The closer we are to the project's end, the more accurate our estimates are. When the project ends, we know exactly how much time every task took. As we can see, the cone is wide at the point where we usually have to give our estimates – somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates – they just do it every time and don't learn a thing later. This cone is inevitable in software development, but the agile methodologies that try to make the software world better are also able to change the cone.

    Cone of Uncertainty in agile projects

    Agile methodologies usually try to avoid BDUF, waterfalls and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes – don't forget our dear customers either. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on many small, short iterations. What does that mean? We are estimating functionalities that are easier for us to understand and implement, and therefore our estimates are more accurate. As we move from a few big iterations to many small iterations, we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used.

    [Figure: Cone of Uncertainty in agile projects.]

    We have more cones to live with, but they are way smaller. I don't have any numbers to put here because I haven't found any, but this "chart" should still give you the point: more, smaller iterations cause more, but way smaller, cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't often grow above our heads.

    One more note: consider that both of the charts given in this posting describe exactly the same phases of the same project – just the uncertainties are different.

    Read the article

  • Is OOP hard because it is not natural?

    - by zvrba
    One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: we (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies.

    Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water"; "the text is shown in the terminal window."

    We think in terms of bivalent (sometimes trivalent, as, for example, in "I gave you flowers") verbs, where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on the action, and the two (or three) [grammatical] objects have equal importance.

    Contrast that with OOP, where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns – it is as if everything is being said in the passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window". Not only is the focus shifted to nouns, but one of the nouns (let's call it the grammatical subject) is given higher "importance" than the other (the grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [The consequences are operationally insignificant – in both cases the text is shown on the terminal window – but can be very serious in the design of class hierarchies, and a "wrong" choice can lead to convoluted and hard-to-maintain code.]

    I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not a widespread approach.

    Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?
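
    As an illustrative aside, not part of the original question: a minimal C# sketch of the two calling conventions contrasted above, with all class and method names invented for illustration:

        // Single-dispatch OOP style: one noun must be chosen as the subject.
        class TerminalWindow
        {
            public void Show(string text)
            {
                System.Console.WriteLine("[terminal] " + text);
            }
        }

        // Verb-centric style: a free-standing operation that takes both
        // participants, mirroring the show(terminalWindow, someText) form.
        static class Rendering
        {
            public static void Show(TerminalWindow window, string text)
            {
                window.Show(text); // delegate to whichever representation exists
            }
        }

        class Program
        {
            static void Main()
            {
                TerminalWindow window = new TerminalWindow();
                window.Show("hello");            // noun-first: terminalWindow.show(someText)
                Rendering.Show(window, "hello"); // verb-first: show(terminalWindow, someText)
            }
        }

    Note that CLOS generic functions go further than this sketch: they dispatch on the runtime types of all arguments (multiple dispatch), whereas a C# static method overload is resolved at compile time, so the sketch only mirrors the surface syntax of the verb-first form.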

    Read the article

< Previous Page | 340 341 342 343 344 345 346 347 348 349 350 351  | Next Page >