Search Results

Search found 12373 results on 495 pages for 'copy reg'.


  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, while threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

    // each tile of threads has its own copy of locA,
    // shared among the threads of the tile
    tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

    15: // Given a tiled_index object named t_idx
    16: t_idx.barrier.wait();
    17: // more code

    In the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code such that there is a chance that not all threads could reach line 16, then the code above would be illegal.

    Tiled Matrix Multiplication Example – part 2

    So now that we have added to our understanding the concepts of tile_static and tile_barrier, let me rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.
    01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
    02: {
    03:   static const int TS = 16;
    04:   array_view<const float,2> a(M, W, vA);
    05:   array_view<const float,2> b(W, N, vB);
    06:   array_view<writeonly<float>,2> c(M, N, vC);
    07:   parallel_for_each(c.grid.tile<TS, TS>(),
    08:     [=] (tiled_index<TS, TS> t_idx) restrict(direct3d)
    09:   {
    10:     int row = t_idx.local[0]; int col = t_idx.local[1];
    11:     float sum = 0.0f;
    12:     for (int i = 0; i < W; i += TS) {
    13:       tile_static float locA[TS][TS], locB[TS][TS];
    14:       locA[row][col] = a(t_idx.global[0], col + i);
    15:       locB[row][col] = b(row + i, t_idx.global[1]);
    16:       t_idx.barrier.wait();
    17:       for (int k = 0; k < TS; k++)
    18:         sum += locA[row][k] * locB[k][col];
    19:       t_idx.barrier.wait();
    20:     }
    21:     c[t_idx.global] = sum;
    22:   });
    23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, 17, 18, and 21. The difference is that those lines use the new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

    Why would I introduce this tiling complexity into my code?

    Funny you should ask that; I was just about to tell you. There is only one reason we tiled our extent, had to find a good tile size, had to ensure that the number of threads we schedule is divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g.
    algorithms that require you to go to global memory only once will not benefit from tiling, since with tiling you still have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest further to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Note that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today.

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.)

    It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh / After refresh.

    From now on, for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again.

    As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren't any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (which is taken from another context).

    So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection. By selecting the Manage POV option from the Smart View menu in Word or Power Point, the POV Manager – Queries window opens. You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right hand side of the window.
    After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

    TIP: Build your original report in such a way that dimensions you might want to change from within Word or Power Point are placed in the POV.

    And there is another really nice feature I wouldn't like to miss mentioning: using Dynamic Data Points in the way described above, you will never miss or need to search again for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks. Select one of the numbers from within your Word or Power Point document by double-clicking. Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.)

    However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

    So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • How to Never Use iTunes With Your iPhone, iPad, or iPod Touch

    - by Chris Hoffman
    iTunes isn’t an amazing program on Windows. There was a time when Apple device users had to plug their devices into their PCs or Macs and use iTunes for device activation, updates, and syncing, but iTunes is no longer necessary. Apple still allows you to use iTunes for these things, but you don’t have to. Your iOS device can function independently from iTunes, so you should never be forced to plug it into a PC or Mac.

    Device Activation

    When the iPad first came out, it was touted as a device that could replace full PCs and Macs for people who only needed to perform light computing tasks. Yet, to set up a new iPad, users had to plug it into a PC or Mac running iTunes and use iTunes to activate the device. This is no longer necessary. With new iPads, iPhones, and iPod Touches, you can simply go through the setup process after turning on your new device without ever having to plug it into iTunes. Just connect to a Wi-Fi or cellular data network and log in with your Apple ID when prompted. You’ll still see an option that allows you to activate the device via iTunes, but this should only be necessary if you don’t have a wireless Internet connection available for your device.

    Operating System Updates

    You no longer have to use Apple’s iTunes software to update to a new version of Apple’s iOS operating system, either. Just open the Settings app on your device, select the General category, and tap Software Update. You’ll be able to update right from your device without ever opening iTunes.

    Purchased iTunes Media

    Apple allows you to easily access content you’ve purchased from the iTunes Store on any device. You don’t have to connect your device to your computer and sync via iTunes. For example, you can purchase a movie from the iTunes Store. Then, without any syncing, you can open the iTunes Store app on any of your iOS devices, tap the Purchased section, and see stuff you’ve downloaded. You can download the content right from the store to your device. This also works for apps — apps you purchase from the App Store can be accessed in the Purchased section of the App Store on your device later. You don’t have to sync apps from iTunes to your device, although iTunes still allows you to. You can even set up automatic downloads from the iTunes & App Store settings screen. This would allow you to purchase content on one device and have it automatically download to your other devices without any hassle.

    Music

    Apple allows you to re-download purchased music from the iTunes Store in the same way. However, there’s a good chance you have your own music you didn’t purchase from iTunes. Maybe you spent time ripping it all from your old CDs and you’ve been syncing it to your devices via iTunes ever since. Apple’s solution for this is named iTunes Match. This feature isn’t free, but it’s not a bad deal at all. For $25 per year, Apple allows you to upload all your music to your iCloud account. You can then access all your music from any iPhone, iPad, or iPod Touch. You can stream all your music — perfect if you have a huge library and little storage on your device — and choose which songs you want to download to your device for offline use. When you add additional music to your computer, iTunes will notice it and upload it using iTunes Match, making it available for streaming and downloading directly from your iOS devices without any syncing. This feature is named iTunes Match because it doesn’t just upload music — if Apple already has a song you upload, it will “match” your song with Apple’s copy.
    This means you may get higher-quality versions of your songs if you ripped them from CD at a lower bitrate.

    Podcasts

    You don’t have to use iTunes to subscribe to podcasts and sync them to your devices. Even if you have a lowly iPod Touch, you can install Apple’s Podcasts app from the App Store. Use it to subscribe to podcasts and configure them to automatically download directly to your device. You can use other podcast apps for this, too.

    Backups

    You can continue backing up your device’s data through iTunes, generating local backups that are stored on your computer. However, new iOS devices are configured to automatically back up their data to iCloud. This happens automatically in the background without you even having to think about it, and you can restore such backups when setting up a device simply by logging in with your Apple ID.

    Personal Data

    In the days of PalmPilots, people would use desktop programs like iTunes to sync their email, contacts, and calendar events with their mobile devices. You shouldn’t have to sync this data from your computer. Just sign into your email account — for example, a Gmail account — on your device and iOS will automatically pull your email, contacts, and calendar events from your associated account.

    Photos

    Rather than connecting your iOS device to your computer and syncing photos from it, you can use an app that automatically uploads your photos to a web service. Dropbox, Google+, and even Flickr all have this feature in their apps. You’ll be able to access your photos from any computer and have a backup copy without any syncing required.

    You may still need to use iTunes if you want to sync local music without paying for iTunes Match or copy local video files to your device. Copying large local files over is the only real scenario where you’d need iTunes. If you don’t need to copy such files over, you can go ahead and uninstall iTunes from your Windows PC if you like. You shouldn’t need it.

    Read the article

  • Top 10 Browser Productivity Tips

    - by Renso
    Originally posted on: http://geekswithblogs.net/renso/archive/2013/10/14/top-10-browser-productivity-tips.aspx

    You don’t have to be a geek to be a productive browser user. The tips below cover actions users take most of the time to navigate a web-site but use long-standing keyboard or mouse actions to get them done, when there are keyboard short-cuts you can use instead. Since your hands are already on the keyboard, it is almost always faster to use a keyboard shortcut to get something done that you usually used the mouse for. For example, right-clicking on something to copy it and then doing the same for pasting is very time consuming; keyboard shortcuts have been created that simplify the task. All it takes are a few memory brain cells to remember them. Here are the tips, in no particular order:

    Tip 1
    Hold down the spacebar on your keyboard to page towards the end of your web page rather than using your mouse — the mouse is really a slow way of doing it. If you want to page one page at a time, hit the spacebar once, and again to page again. But if you want to go all the way to the end of the web page simply hit Ctrl+End (that is, hold down the Ctrl key and hit the End key on your keyboard). To get to the top of your web page, simply hit Ctrl+Home to go all the way to the top.

    Tip 2
    Where are my downloads? Some folks run downloads again and again because they do not know where the last one went and they do not see the popup, or the browser note in the footer of their web page, etc. Simply hit Ctrl+J. Works in most browsers.

    Tip 3
    Selecting a US state from a drop down box. Don’t use the mouse; it takes just way too long to scroll. When you tab to the drop down box or click on it with your mouse, simply hit the first character of the state and it will be selected. For Texas, for example, hit the letter “T” twice on your keyboard to get to it. The same concept can be applied to any drop down box that is alphabetically or numerically sorted.

    Tip 4
    Fixing spelling errors. All modern-day browsers support this now. You see the red wavy lines underscoring a word; yes, it is a spelling error. How do you fix it? Don’t overtype it or try and fix it manually; first right-click on it and a list of suggestions comes up. If it does not show up, like my name “Renso”, and you know how to spell your name as in this example, look further down the list of options (the little popup window that appears when you right-click) and you should see an option to “Add to Dictionary”. Be warned: when you add it, it only adds it to the dictionary of the browser you’re using. If you use Google Chrome, Firefox and IE, each one will have its own list.

    Tip 5
    So you have trouble seeing the text on the screen. Or you are looking at a photo, for example in Facebook. You want to zoom in to read better or zoom into a photo a bit more. Hit Ctrl++ (hold down the Ctrl key and hit the plus key – actually it’s the equals key, but it is easier to remember that it is plus for bigger). Hit the minus to zoom out. Now you can’t remember what the original size was since you were so excited to hit it 20 times, or was that 21… Simply hit Ctrl+0 (that is zero) and it will reset it to the default.

    Tip 6
    So you closed a couple of tabs in your browser. Suddenly you remember something you wanted to double-check on one of the tabs; you cannot remember the URL and the tab is gone forever, or is it? Simply hit Ctrl+Shift+T and it will bring back your tabs one by one each time you press it.
    This has also been a great way for me to quickly close some tabs because I don’t want my boss to see I’m shopping, and then hitting Ctrl+Shift+T to quickly get it back and complete my check-out and purchase. Or, for parents: when you walk into your daughter’s room and you see she quickly clicks and closes a window/tab in her browser. Not to worry my little darling, daddy will Ctrl+Shift+T and see what boys on Facebook you were talking to…

    Tip 7
    The web browser is frozen on your PC/laptop/whatever; in this example it may be your Internet Explorer browser. I don’t mention Firefox or Chrome here because it probably never happens in their world. You cannot close it; it won’t respond to anything you have done so far except for the next step you are about to take, which is throw your two-day-old coffee on your keyboard. This happens especially on sites that want to force you to complete a purchase order. Hit Ctrl+Alt+Del on your keyboard on any version of Windows and select TASK MANAGER. In the first tab, which is the Processes tab, look for the item in question. In this example you should see Internet Explorer. Right-click it and select “End Task”. It will force the thread out of memory and terminate that process. You can of course do this with any program running under your account.

    Tip 8
    This is a personal favorite of mine: selecting words in a paragraph without using the mouse. You don’t want to select one character at a time like when you use Ctrl+arrows, as it can be very slow if you want to select a lot of text. You also want to select whole words. Simply use Ctrl+Shift+arrow (right or left, depending on which direction you want to go).

    Tip 9
    I was a bit reluctant to add this one, but being in the professional services industry I still come across many-a-folk that simply can’t copy-and-paste them-all text or images that reside on them screens, y’all. Ctrl+C to copy and Ctrl+V to paste. Works a lot faster than using the mouse. You may be asking: “Well why in the devil did they not use Ctrl+P for paste?” …because that is for printing. This is of course not limited to the browser world; it applies to almost any piece of software running on PC or Mac. Go try it on an image in your browser: right-click it and select copy, then open a Word document and Ctrl+V to paste the image in there. Please consider copyright laws.

    Tip 10
    Getting rid of annoying ads. Now this only works when you load a web page, meaning when you get back to the same page later you will have to do this again, and you will need to learn a tool to do it — WELL WORTH IT. For example, I use GrooveShark to listen to music but I don’t like the ads they show. Install a tool like Firebug for Firefox, or use Ctrl+Shift+I on Chrome to bring up the developer toolbar; it shows at the bottom of the page. With Firefox, once you have installed Firebug as an add-on, a yellow bug should appear on the top right-hand side of your browser; click on it to display the developer toolbar. You will need to learn how to use it, but once you know how to select an item/section on the window (usually just right-click the ad you don’t want to see and select “Inspect Element”; the developer toolbar will appear if not already there), simply hit delete and it will remove the ad from the screen. If you don’t know HTML you may need to play with it a bit, but once you understand how it works, it can open up a whole new world for you on how web pages actually work.
If you can think of any others that have saved you a ton of time please let me know so I can add them to a top 99 list.

    Read the article

  • Copying one form's values to another form using JQuery

    - by rsturim
    I have a "shipping" form that I want to offer users the ability to copy their input values over to their "billing" form by simply checking a checkbox. I've coded up a solution that works -- but, I'm sort of new to jQuery and wanted some criticism on how I went about achieving this. Is this well done -- any refactorings you'd recommend? Any advice would be much appreciated! The Script <script type="text/javascript"> $(function() { $("#copy").click(function() { if($(this).is(":checked")){ var $allShippingInputs = $(":input:not(input[type=submit])", "form#shipping"); $allShippingInputs.each(function() { var billingInput = "#" + this.name.replace("ship", "bill"); $(billingInput).val($(this).val()); }) //console.log("checked"); } else { $(':input','#billing') .not(':button, :submit, :reset, :hidden') .val('') .removeAttr('checked') .removeAttr('selected'); //console.log("not checked") } }); }); </script> The Form <div> <form action="" method="get" name="shipping" id="shipping"> <fieldset> <legend>Shipping</legend> <ul> <li> <label for="ship_first_name">First Name:</label> <input type="text" name="ship_first_name" id="ship_first_name" value="John" size="" /> </li> <li> <label for="ship_last_name">Last Name:</label> <input type="text" name="ship_last_name" id="ship_last_name" value="Smith" size="" /> </li> <li> <label for="ship_state">State:</label> <select name="ship_state" id="ship_state"> <option value="RI">Rhode Island</option> <option value="VT" selected="selected">Vermont</option> <option value="CT">Connecticut</option> </select> </li> <li> <label for="ship_zip_code">Zip Code</label> <input type="text" name="ship_zip_code" id="ship_zip_code" value="05401" size="8" /> </li> <li> <input type="submit" name="" /> </li> </ul> </fieldset> </form> </div> <div> <form action="" method="get" name="billing" id="billing"> <fieldset> <legend>Billing</legend> <ul> <li> <input type="checkbox" name="copy" id="copy" /> <label for="copy">Same of my shipping</label> </li> <li> <label for="bill_first_name">First Name:</label> <input type="text" name="bill_first_name" id="bill_first_name" value="" size="" /> </li> <li> <label for="bill_last_name">Last Name:</label> <input type="text" name="bill_last_name" id="bill_last_name" value="" size="" /> </li> <li> <label for="bill_state">State:</label> <select name="bill_state" id="bill_state"> <option>-- Choose State --</option> <option value="RI">Rhode Island</option> <option value="VT">Vermont</option> <option value="CT">Connecticut</option> </select> </li> <li> <label for="bill_zip_code">Zip Code</label> <input type="text" name="bill_zip_code" id="bill_zip_code" value="" size="8" /> </li> <li> <input type="submit" name="" /> </li> </ul> </fieldset> </form> </div>

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template", I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server, because I did not require a graphical interface, but it's a simple change to create it with a GUI.

    Required Software

    - VirtualBox.
    - Oracle Enterprise Linux.

    Creating the VM

    I'll assume that the reader is experienced with VirtualBox and installing OEL and hence will make this section brief.

    Create VirtualBox Guest

    Create a new VirtualBox guest and select Oracle Linux 64 bit. Follow through the create process and select Dynamic Disk Size and the default 12GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 iso to the cd drive and start the guest.

    Install OEL

    On starting the guest, the system will boot off the associated OEL 6u3 iso and take you through the standard installation process. Enter all the appropriate information, but when you reach the installation type select Basic Server, because we do not need the additional packages and only need access through the command line interface. Complete the installation and reboot the guest. At this point we now have a basic OEL server running.

    Installing Guest Add-ons

    Before we can easily access the guest we will need to add the VirtualBox guest add-ons. These will provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do the following:

    - Enable networking.
    - Install additional rpms.

    To enable the networking (eth0), which appears to be disabled by default, we can execute:

    ifup eth0

    This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes" (a sketch of the edited file follows below). Now that we have enabled the network, we will need to install a number of additional rpms.
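    For reference, a minimal ifcfg-eth0 after that change might look something like the following sketch — the BOOTPROTO value is an assumption and depends on how the interface was configured during the install:

    DEVICE=eth0
    BOOTPROTO=dhcp
    # ONBOOT=yes brings the interface up automatically at boot
    ONBOOT=yes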
    First we will need to configure the yum repository as follows:

    [ol6_latest]
    name=Oracle Linux $releasever Latest ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    [ol6_ga_base]
    name=Oracle Linux $releasever GA installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u1_base]
    name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u2_base]
    name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_u3_base]
    name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    [ol6_UEK_latest]
    name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    [ol6_UEK_base]
    name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=0

    Once the repository has been edited we will need to execute the following yum commands:

    yum update
    yum install gcc
    yum install kernel-uek-devel
    yum install kernel-devel
    yum install createrepo

    At this point we now have all the additional packages required to install the VirtualBox guest add-ons. So select Devices -> Install Guest Additions on your running guest. This will simply place the VirtualBoxGuestAdditions.iso in the virtual cd, and we will need to execute the following before we can run them:

    mkdir /media/cdrom
    mount -t iso9660 -o ro /dev/cdrom /media/cdrom
    cd /media/cdrom/
    ls
    ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. What you will notice is that during the installation a "Failed" will be displayed, but this is simply because we have no graphical components. The installation will also have added the vboxsf group to the system, and to access any shared folders we create, our user will need to be a member of this group; so the next stage is to add the root user to this group as follows:

    usermod -G vboxsf root
    cat /etc/group
    cat /etc/passwd
    init 0

    Now simply shutdown the guest and add the Shared folder within your guest's settings.

    Install ModifyJeOS

    Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS rpms are located in the shared folder.
    We can simply execute:

    rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
    # Test with modifyjeos

    Using ModifyJeOS

    I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this), and from there it can be executed from any location:

    MountSystemImg.sh

    #!/bin/sh
    # The script assumes it's being run from the directory containing the System.img
    # Export for later i.e. during unmount
    export LOOP=`losetup -f`
    export SYSTEMIMG=/mnt/elsystem
    export TEMPLATEDIR=`pwd`
    # Make Temp Mount Directory
    mkdir -p $SYSTEMIMG
    # Create Loop for the System Image
    losetup $LOOP System.img
    kpartx -a $LOOP
    mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
    # Change Dir into mounted Image
    cd $SYSTEMIMG
    echo "######################################################################"
    echo "###                                                                ###"
    echo "### Starting Bash shell for editing. When completed log out to    ###"
    echo "### Unmount the System.img file.                                  ###"
    echo "###                                                                ###"
    echo "######################################################################"
    echo
    bash
    cd ~
    cd $TEMPLATEDIR
    umount $SYSTEMIMG
    kpartx -d $LOOP
    losetup -d $LOOP
    rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These can be created under the mounted shared directory. In the example below I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.
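    As an illustration, the whole edit cycle could look something like this sketch — the shared-folder and template archive names here are hypothetical, so substitute the Base Template you actually downloaded:

    # unpack the Base Template into a working directory on the shared folder
    cd /media/sf_templates
    mkdir OEL_40GB_ROOT
    tar -xzf base_template.tgz -C OEL_40GB_ROOT
    # run the mount script from the directory containing System.img
    cd OEL_40GB_ROOT
    MountSystemImg.sh
    # ...edit the mounted image in the shell that opens, then exit to unmount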

    Read the article

  • Oracle Certification and virtualization Solutions.

    - by scoter
    As stated in official MOS (My Oracle Support) document 249212.1, support for Oracle products on non-Oracle VM platforms follows exactly the same stance as support for VMware, and so the only x86 virtualization software solution certified for any Oracle product is "Oracle VM". Based on the facts that:

    - Oracle VM is totally free (you have the option to buy Oracle Support)
    - Certified is pretty different from supported (Oracle VM is certified, others could be supported)
    - With Oracle VM you may not be required to reproduce your issue(s) on a physical server
    - Oracle VM is the only x86 software solution that allows hard-partitioning ***

    *** see details at these Oracle public links:
    http://www.oracle.com/technetwork/server-storage/vm/ovm-hardpart-168217.pdf
    http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf

    people started asking to migrate from third party virtualization software (e.g. RH KVM, VMware) to Oracle VM.

    Migrating RH KVM guests to Oracle VM

    Oracle VM has a built-in P2V utility (Official Documentation), but in some cases we can't use it, due to:

    - network inaccessibility between hypervisors (KVM and OVM)
    - network slowness between hypervisors (KVM and OVM)
    - size of the guest virtual-disks

    Here you'll find a step-by-step guide to "manually" migrate a guest machine from KVM to OVM.

    1. Verify source guest characteristics.

    Using the KVM web console you can verify the characteristics of the guest you need to migrate, such as:

    - CPU cores details
    - Defined memory (RAM)
    - Name of your guest
    - Guest operating system
    - Disks details (number and size)
    - Network details (number of NICs and network configuration)

    2. Export your guest in OVF / OVA format.

    The export from Redhat KVM (kernel virtual machine) will create a structured export of your guest:

    [root@ovmserver1 mnt]# ll
    total 12
    drwxrwx--- 5 36 36 4096 Oct 19 2012 b8296fca-13c4-4841-a50f-773b5139fcee

    b8296fca-13c4-4841-a50f-773b5139fcee is the ID of the guest exported from RH-KVM.

    [root@ovmserver1 mnt]# cd b8296fca-13c4-4841-a50f-773b5139fcee/
    [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# ls -ltr
    total 12
    drwxr-x--- 4 36 36 4096 Oct 19  2012 master
    drwxrwx--- 2 36 36 4096 Oct 29  2012 dom_md
    drwxrwx--- 4 36 36 4096 Oct 31  2012 images

    images contains your exported virtual-disks.

    [root@ovmserver1 b8296fca-13c4-4841-a50f-773b5139fcee]# cd images/
    [root@ovmserver1 images]# ls -ltra
    total 16
    drwxrwx--- 5 36 36 4096 Oct 19  2012 ..
    drwxrwx--- 2 36 36 4096 Oct 31  2012 d4ef928d-6dc6-4743-b20d-568b424728a5
    drwxrwx--- 2 36 36 4096 Oct 31  2012 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1
    drwxrwx--- 4 36 36 4096 Oct 31  2012 .
    [root@ovmserver1 images]# cd d4ef928d-6dc6-4743-b20d-568b424728a5/
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# ls -l
    total 5169092
    -rwxr----- 1 36 36 187904819200 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    -rw-rw---- 1 36 36          341 Oct 31  2012 4c03b1cf-67cc-4af0-ad1e-529fd665dac1.meta
    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# file 4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    4c03b1cf-67cc-4af0-ad1e-529fd665dac1: LVM2 (Linux Logical Volume Manager), UUID: sZL1Ttpy0vNqykaPahEo3hK3lGhwspv

    4c03b1cf-67cc-4af0-ad1e-529fd665dac1 is the first exported disk (physical volume).

    [root@ovmserver1 d4ef928d-6dc6-4743-b20d-568b424728a5]# cd ../4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/
    [root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# ls -l
    total 5568076
    -rwxr----- 1 36 36 107374182400 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a
    -rw-rw---- 1 36 36          341 Oct 31  2012 9020f2e1-7b8a-4641-8f80-749768cc237a.meta
    [root@ovmserver1 4b241ea0-43aa-4f3b-ab7d-2fc633b491a1]# file 9020f2e1-7b8a-4641-8f80-749768cc237a
    9020f2e1-7b8a-4641-8f80-749768cc237a: x86 boot sector; partition 1: ID=0x83, active, starthead 1, startsector 63, 401562 sectors; partition 2: ID=0x82, starthead 0, startsector 401625, 65529135 sectors; partition 3: ID=0x83, starthead 254, startsector 65930760, 8385930 sectors; partition 4: ID=0x5, starthead 254, startsector 74316690, 135395820 sectors, code offset 0x48

    9020f2e1-7b8a-4641-8f80-749768cc237a is the second exported disk, with partition 1 bootable.

    3. Prepare the new guest on Oracle VM.

    With OVM Manager we can prepare the guest where we will move the exported virtual-disks; under the tab "Servers and VMs":

    - click on Create Virtual Machine and create your guest with the parameters collected before (point 1)
    - add NICs on different networks
    - add virtual-disks; in this case we add two disks of 1.0 GB each; we will extend each virtual disk by copying the source KVM virtual-disk over it (see next steps)
    - verify the virtual-disks created (under the Repositories tab)

    4. Verify OVM virtual-disks names.

    [root@ovmserver1 VirtualMachines]# grep -r hyptest_rdbms *
    0004fb0000060000a906b423f44da98e/vm.cfg:OVM_simple_name = 'hyptest_rdbms'
    [root@ovmserver1 VirtualMachines]# cd 0004fb0000060000a906b423f44da98e
    [root@ovmserver1 0004fb0000060000a906b423f44da98e]# more vm.cfg
    vif = ['mac=00:21:f6:0f:3f:85,bridge=0004fb001089128', 'mac=00:21:f6:0f:3f:8e,bridge=0004fb00101971d']
    OVM_simple_name = 'hyptest_rdbms'
    vnclisten = '127.0.0.1'
    disk = ['file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img,xvda,w',
    'file:/OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img,xvdb,w']
    vncunused = '1'
    uuid = '0004fb00-0006-0000-a906-b423f44da98e'
    on_reboot = 'restart'
    cpu_weight = 27500
    memory = 32768
    cpu_cap = 0
    maxvcpus = 8
    OVM_high_availability = True
    maxmem = 32768
    vnc = '1'
    OVM_description = ''
    on_poweroff = 'destroy'
    on_crash = 'restart'
    name = '0004fb0000060000a906b423f44da98e'
    guest_os_type = 'linux'
    builder = 'hvm'
    vcpus = 8
    keymap = 'en-us'
    OVM_os_type = 'Oracle Linux 5'
    OVM_cpu_compat_group = ''
    OVM_domain_type = 'xen_hvm'

    disk1 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    disk2 ovm ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img

    Summarizing:

    disk1 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a
    disk1 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img
    disk2 --source ==> /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1
    disk2 --dest ==> /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img

    5. Copy the KVM exported virtual-disks to the OVM virtual-disks.

    Keeping your Oracle VM guest stopped, you can copy the KVM exported virtual-disks over the OVM virtual-disks; what I did was simply mount the filesystem containing the exported virtual-disks locally on my OVS (via a USB device); the copy automatically resizes the OVM virtual-disks (previously created with a size of 1 GB).
    nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/4b241ea0-43aa-4f3b-ab7d-2fc633b491a1/9020f2e1-7b8a-4641-8f80-749768cc237a /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb000012000097c1bfea9834b17d.img &
    nohup cp /mnt/b8296fca-13c4-4841-a50f-773b5139fcee/images/d4ef928d-6dc6-4743-b20d-568b424728a5/4c03b1cf-67cc-4af0-ad1e-529fd665dac1 /OVS/Repositories/0004fb00000300004f17b7368139eb41/VirtualDisks/0004fb0000120000cde6a11c3cb1d0be.img &

    6. When the copy is completed, refresh the repository to acknowledge the new disk sizes.

    7. After the repository refresh is completed, start the guest machine from Oracle VM Manager.

    After the first start of your guest:

    - verify that you can see all disks and partitions
    - verify that your guest is network reachable (the MAC addresses of your NICs changed)

    Eventually you can also evaluate converting your guest to a PVM (paravirtualized virtual machine) following the official Oracle documentation.

    Ciao
    Simon COTER

    ps: next time I'd like to post an article reporting how to manually migrate Virtual-Iron guests to Oracle VM. Comments and corrections are welcome.

    Read the article

  • MEF + Plug-In not updating

    - by mybrokengnome
    I asked this on the MEF Codeplex forum already, but I haven't gotten a response yet, so I figured I'd try StackOverflow. Here's the original post if anyone's interested (this is just a copy of it): MEF Codeplex

    "Let me first say that I'm completely new to MEF (just discovered it today) and am very happy with it so far. However, I've run into a problem that is very frustrating. I'm creating an app that will have a plugin architecture and the plugins will only be stored in a single DLL file (or coded into the main app). The DLL file needs to be able to be recompiled during run-time and the app should recognize this and re-load the plugins (I know this is difficult, but it's a requirement). To accomplish this I took the approach covered at http://blog.maartenballiauw.be/category/MEF.aspx (look for WebServerDirectoryCatalog). Basically the idea is to "monitor the plugins folder, copy the new/modified assemblies to the web application’s /bin folder and instruct MEF to load its exports from there." This is my code, which is probably not the correct way to do it, but it's what I found in some samples around the net:

    main()...

    string myExecName = Assembly.GetExecutingAssembly().Location;
    string myPath = System.IO.Path.GetDirectoryName(myExecName);
    catalog = new AggregateCatalog();
    pluginCatalog = new MyDirectoryCatalog(myPath + @"/Plugins");
    catalog.Catalogs.Add(pluginCatalog);
    exportContainer = new CompositionContainer(catalog);
    CompositionBatch compBatch = new CompositionBatch();
    compBatch.AddPart(this);
    compBatch.AddPart(catalog);
    exportContainer.Compose(compBatch);

    and

    private FileSystemWatcher fileSystemWatcher;
    public DirectoryCatalog directoryCatalog;
    private string path;
    private string extension;

    public MyDirectoryCatalog(string path)
    {
        Initialize(path, "*.dll", "*.dll");
    }

    private void Initialize(string path, string extension, string modulePattern)
    {
        this.path = path;
        this.extension = extension;
        fileSystemWatcher = new FileSystemWatcher(path, modulePattern);
        fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed);
        fileSystemWatcher.Created += new FileSystemEventHandler(fileSystemWatcher_Created);
        fileSystemWatcher.Deleted += new FileSystemEventHandler(fileSystemWatcher_Deleted);
        fileSystemWatcher.Renamed += new RenamedEventHandler(fileSystemWatcher_Renamed);
        fileSystemWatcher.IncludeSubdirectories = false;
        fileSystemWatcher.EnableRaisingEvents = true;
        Refresh();
    }

    void fileSystemWatcher_Renamed(object sender, RenamedEventArgs e)
    {
        RemoveFromBin(e.OldName);
        Refresh();
    }

    void fileSystemWatcher_Deleted(object sender, FileSystemEventArgs e)
    {
        RemoveFromBin(e.Name);
        Refresh();
    }

    void fileSystemWatcher_Created(object sender, FileSystemEventArgs e)
    {
        Refresh();
    }

    void fileSystemWatcher_Changed(object sender, FileSystemEventArgs e)
    {
        Refresh();
    }

    private void Refresh()
    {
        // Determine /bin path
        string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Plugins");
        string newPath = "";

        // Copy files to /bin
        foreach (string file in Directory.GetFiles(path, extension, SearchOption.TopDirectoryOnly))
        {
            try
            {
                DirectoryInfo dInfo = new DirectoryInfo(binPath);
                DirectoryInfo[] dirs = dInfo.GetDirectories();
                int count = dirs.Count() + 1;
                newPath = binPath + "/" + count;
                DirectoryInfo dInfo2 = new DirectoryInfo(newPath);
                if (!dInfo2.Exists) dInfo2.Create();
                File.Copy(file, System.IO.Path.Combine(newPath, System.IO.Path.GetFileName(file)), true);
            }
            catch
            {
                // Not that big deal... Blog readers will probably kill me for this bit of code :-)
            }
        }

        // Create new directory catalog
        directoryCatalog = new DirectoryCatalog(newPath, extension);
        directoryCatalog.Refresh();
    }

    public override IQueryable<ComposablePartDefinition> Parts
    {
        get { return directoryCatalog.Parts; }
    }

    private void RemoveFromBin(string name)
    {
        string binPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "");
        File.Delete(Path.Combine(binPath, name));
    }

    So all this actually works, and after the end of the code in main my IEnumerable variable is actually filled with all the plugins in the DLL (which, if you follow the code, is located in Plugins/1 so that I can modify the dll in the plugins folder). So now at this point I should be able to re-compile the plugins DLL, drop it into the Plugins folder, my FileWatcher detects that it's changed and copies it into folder "2", and directoryCatalog should point to the new folder. All this actually works! The problem is, even though it seems like everything is pointed to the right place, my IEnumerable variable is never updated with the new plugins. So close, but yet so far! Any suggestions? I know the downsides of doing it this way, that no dll is actually getting unloaded and causing a memory leak, but it's a Windows app and will probably be started at least once a day, and the plugins are unlikely to change that often; still, it's a requirement from the client that it does this without re-loading the app. Thanks!

    Thanks for any help you all can provide, it's driving me crazy not being able to figure this out."
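    One detail worth noticing when reading the code above: MEF recomposition only reaches imports that explicitly opt in to it, so the importing property would typically need to look something like the following sketch (IPlugin is a placeholder name, not taken from the post):

    // Only imports marked AllowRecomposition = true are updated
    // when the catalogs in the container change.
    [ImportMany(AllowRecomposition = true)]
    public IEnumerable<IPlugin> Plugins { get; set; }

    In addition, a custom ComposablePartCatalog gives the container no way to notice that its Parts have changed unless it also implements INotifyComposablePartCatalogChanged and raises the Changed event when it swaps in a new DirectoryCatalog — which may well be why the IEnumerable in the question never updates.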

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn’t support building Microsoft’s own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you’ll get the following warning:

    The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

    This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield which promises to be more MSBuild friendly and with more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon.

    But, how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands the vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You’ll want to build your setup projects after you have successfully compiled the projects.

    1. Install Visual Studio 2010 on the build server(s).
    2. Open your build process template (remember to branch or copy the xaml file before modifying it!).
    3. Locate the "Try to Compile the Project" activity.
    4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the "Run MSBuild for Project" activity.
    5. Drop an instance of the WriteBuildMessage activity inside the "Handle Standard Output" section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
    6. Drop an instance of the WriteBuildError activity in the "Handle Error Output" section. Set the Message property to errOutput.
    7. Select the InvokeProcess activity and set the values of its parameters so that devenv is invoked against your solution (see the command line sketch after this list).

    This will generate the MSI files, but they won’t be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:

    8. Drop a Sequence activity somewhere after the "Copy to Drop location" activity.
    9. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
    10. Drop a FindMatchingFiles activity in the Sequence activity and set its properties so that the result is stored in GeneratedInstallers.
    11. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
    12. Drop an InvokeProcess activity inside the ForEach activity. FileName: "xcopy.exe". Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation).
    13. Save the build process template and check it in.
    14. Run the build and verify that the MSIs are built and copied to the drop location.
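    For reference, the InvokeProcess parameters in step 7 amount to calling devenv from the command line; a minimal sketch of the equivalent invocation, where the solution path and configuration name are assumptions for illustration:

    "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.com" MySolution.sln /Build "Release|Any CPU"

    Using devenv.com rather than devenv.exe keeps the build output on standard output, which is what the WriteBuildMessage activity above picks up.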
    Note 1: One of the drawbacks of using devenv like this in a team build is that, since all the output from the default compilation is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008, this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010, the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx

    Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.

    Read the article

  • SOA 10g Developing a Simple Hello World Process

    - by [email protected]
    Software & Hardware Needed

    - Intel Pentium D CPU 3 GHz, 2 GB RAM, Windows XP system (that's what I am using). You could as well use Linux, but please choose high-end RAM.
    - 10g SOA Suite from Oracle(TM); read the installation documents at www.Oracle.com
    - JDeveloper 10.1.3.3; official documents at http://www.oracle.com/technology/products/ias/bpel/index.html

    java -version
    Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode)

    BPEL Introduction - Developing a Simple Hello World Process

    Synchronous BPEL Process

    This exercise focuses on developing a synchronous process, which means you give input to the BPEL process and you get output immediately, with no waiting at all. The objective of this exercise is to give a name as input and have the process greet you with "Hello" followed by that name; for example, if I give the input "James" the BPEL process returns "Hello James".

    1. Open Oracle JDeveloper, click on File -> New Application and give the name "JamesApp" (you can give your own name if it pleases you). Select the folder where you want to place the application. Click "OK".

    2. Right-click on "JamesApp" in the Application Navigator and select the New menu.

    3. Select "Projects" under "General" and "BPEL Process Project", then click "OK". These steps remain the same for all BPEL projects.

    4. The Project Settings wizard appears. Give the Process Name as "MyBPELProc" and the Namespace as http://xmlns.james.com/MyBPELProc, select the Template "Synchronous BPEL Process" and click "Next".

    5. Accept the input and output schema names as they are, click "Finish".

    6. You will see the BPEL Process Designer; some folders such as Integration Content and Resources are created, plus a few more files.

    7. Assign Activity: allows assigning values to variables or copying values of one variable to another, and also doing some string manipulation or mathematical operations. In the component palette at the extreme right, select Process Activities from the drop down, and drag and drop "Assign" between "receiveInput" and "replyOutput".

    8. You can right-click and edit the Assign activity and give it any suitable name, e.g. "AssignHello".

    9. Select the "Copy Operation" tab and create a "Copy Operation".

    10. In the From variables, click on the expression builder, select input under "input variable", click on insert into expression bar, and complete the concat syntax. Note: use "Ctrl+Space" inside the expression window to auto-populate the expression as shown in the figure below. What we are actually doing here is concatenating the string "Hello " with the variable value received through the variable named "input".

    11. Observe that once the expression is completed, the "To Variable" is assigned to a variable named "result".

    12. Finally the copy operation looks as below.

    13. It's time to deploy; start the SOA Suite.

    14. Establish a connection to the server from JDeveloper; this can be done by adding a new Application Server under Connections, giving the server name, username and password, and testing the connection.

    15. Deploy "MyBPELProc" to the "default domain".

    16. http://localhost:8080/ allows connecting to the SOA Suite web portal; click on "BPEL Control" and log in with the username "oc4jadmin" and the password you gave during installation.

    17. "MyBPELProc" is visible under "Deployed BPEL Processes" in the "Dashboard" tab; click on it.

    18. The Initiate tab opens to accept input; enter data such as "James" and click on the "Post XML" button.

    19. Click on Visual Flow.

    20. Click on receiveInput; it shows "James" as the input received.

    21. Click on replyOutput; it shows "Hello James", so the BPEL process executed successfully.
It may be worth seeing all the instance created everytime a BPEL process is executed by giving some inputs. Purge All button allows to delete all the unwanted previous instances of BPEL process, dont worry it wont delete the BPEL process itself :-) 23. It may also be some importance to understand the XSD File which holds input & output variable names & data types. 24. You could drag n drop variables as elements over sequence at the designer or directly edit the XML Source file. 
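    For reference, the completed expression from step 10 usually takes the shape sketched below. This is a hypothetical reconstruction: bpws:getVariableData is the standard Oracle BPEL XPath function, but the part name ('payload') and the client: element path depend on the schema JDeveloper generated for your process, so check the expression builder rather than copying this verbatim.

        concat("Hello ",
               bpws:getVariableData('input', 'payload',
                   '/client:MyBPELProcProcessRequest/client:input'))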

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point, then pushing those config changes into a git repo so I can track any changes over time. The rest of the scripts are working well; I can copy/rsync the files across the network to a central point. The last script gets the config files put into, or updated in, the repository. The script is as follows:

        #!/bin/bash
        clear
        SERVERNAME="betty"
        SCRIPTDIR="/home/jon"
        GITROOT="/tmp/git"
        TEMPROOT="/tmp/backups"
        BACKUPROOTDIR="/mnt/backups"

        echo " - running as user: $UID"
        echo "backingup git config on $SERVERNAME"
        echo ""

        # check to see if root backup folder exists, otherwise create it.
        if [ -d $GITROOT ]; then
            rm -rf $GITROOT
        fi
        mkdir $GITROOT
        cd $GITROOT

        echo " - testing if home is where I think it should be!"
        echo $HOME
        echo " - testing if it can see netrc"
        tail $HOME/.netrc

        git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git
        cd HOH-config-backups

        echo " - copy Configuration Folders across"
        cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/
        cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/

        git add .
        git commit -a -m "committing any new configuration changes!"
        git push origin master

        echo ""
        echo "Git repo updated"
        echo ""
        echo " - backing up this script"

        FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME"
        if [ ! -d $FIREWIGSCRIPTLOC ]; then
            mkdir $FIREWIGSCRIPTLOC
        fi
        cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC

    The git repo is on a different machine in the network, using Apache and HTTP-backend.exe (the smart HTTP protocol). If I run this script as me, "jon", it works. If I run it in crontab it fails. git uses the /home/jon/.netrc file for authentication:

        machine 192.168.10.97 login gitconfig password 1234579

    The log from crontab is:

        TERM environment variable not set.
         - running as user: 1000
        backingup git config on betty
         - testing if home is where I think it should be!
        /home/jon
         - testing if it can see netrc
        machine 192.168.10.97 login gitconfig password 1234579
        got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
        walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
        got be880f2d306778a538d592e7a02eb19f416612f7
        got bd387e8def9f77aafa798bf53e80d949aba443e8
        got 1bc1a59e12775841d4c59d77c63b8a73823138c2
        walk bd387e8def9f77aafa798bf53e80d949aba443e8
        Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
        got 030512237bca72faf211e0e8ec2906164eac34f6
        got 9bc2f575240bc1f61ff7d69777ce1a165d06b184
        got b8400f7f01429104a9d4786a6bb1a16d293e37c1
        got 2403b5bf611010e0b401f776f0e23b09ce744838
        got 1a27944c48269ef3608a8f2466e43402d06faac0
        got b686f45b7d57af4fa8ca0d528bb85216d6247e19
        Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
        Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff
        Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff
         which contains ff84d6d48e9326066438d167a10251218d612b3d
        walk b686f45b7d57af4fa8ca0d528bb85216d6247e19
        got 364e30daec17814073e668f490bb84af891fe1f7
        got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce
        got 9e77c47574b5e23ea669afe0c23ab235e4917ee1
        got 6654e0d328a216b3783e98c47206cb2d01b3353d
        got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b
        got 8c384a24f645389e4d4b08013c79e9e73a658342
        got d203be0123736ee025ce20c081f1489098648dfc
        got 1852603bf7709e71417d8ccec02390279d533642
        got fb753a26b20b04694419fce8ecdaa8dbec105cf1
        got 736028997cd84dd1c135f57e9d246674b9cd0b9d
        got 7af836249e20096d0476a548d5be702a071cdd4b
        got 240dc39d9db50df63073fc7927b2d002dfa0f54c
        got 93abd36e3935a01011eb753b635a1a0e984bf31e
        got c6269e28fecf4d8d0d98b9358aecb3acff02df44
        got b0aa29432f73e64032682a351d436c24b14078ab
        walk 240dc39d9db50df63073fc7927b2d002dfa0f54c
        got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5
        got 0da2def4de0565483cdbe6b87418ee2beb122e58
        got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c
        got 437a93d27b5bb89c739a0564a34a616e832c3ebe
        got fe0385abe5c0acd8462268dac330bae00e934f1b
        got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9
        got d29f624bf1a5eceedaa86c10fee35f62747c7d04
        got 0154e4c987132585ea7a92b77d02dba285512d6b
        got eda8bf526567c25ee70addb2ad3c3c6aa57eac77
        got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e
        got 8e20881e19667aa22245d0598646991067455a4d
        got abb1123145689b35eb19519952c71253ee45fa98
        got dfeff593c79b4156ce2ce1adf043d0e80356488c
        got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc
        got b13eb81cc274780322ecf786372320343926bec9
        walk 8de83868b3fac748b0a55eba16c8f668ec852abb
        got b5961421bbc42afe7a07cc1c8b615aba26ba74d7
        got 2650ba819019df4193b482733e29ca79b29f3f2c
        got b3111e1be8103e91803a97a817ed81f28025aca1
        got b060be934d709684f5eb5dad3c03932a3589e864
        got cf70d2043f081d7a4438e9d5a290a9f986c84060
        got 80bf0f1cc836feab86d6935bb7968d8555a8d531
        got da318d167920e34bc6573e4fc236249ccbbee316
        got d82ac853d387b760149599e6e1ab96403f6ec672
        got 0005f691d1f46550fdb4e56025f52e30a5b18cc2
        Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/
         - copy Configuration Folders across
        Created commit 424df2f: committing any new configuration changes!
         3 files changed, 55 insertions(+), 1 deletions(-)
         create mode 100755 scripts/betty/gitConfig.sh
        error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22
        error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git'
        Git repo updated
         - backing up this script
        cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied

    My crontab is:

        # m h dom mon dow command
        04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

    I open it by doing $ crontab -e, i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the git push to work within crontab.

    edit: found out about the userid:

        jon@betty:~$ id
        uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon)

    Here is my $HOME/.gitconfig file:

        [user]
        name = Jon Hawkins
        email = [email protected]

    Thanks
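    When a script behaves differently under cron than in an interactive shell, a common first step is to pin down the environment in the crontab itself. A minimal sketch, assuming a Vixie-style cron that accepts variable assignments at the top of the file (the paths are the ones from the question; adjust to taste):

        # curl, which git's smart HTTP transport uses, looks for .netrc under $HOME,
        # so make HOME explicit rather than relying on cron's sparse environment.
        HOME=/home/jon
        PATH=/usr/local/bin:/usr/bin:/bin
        # m h dom mon dow command
        04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1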

    Read the article

  • Preview Before You Paste with Live Preview in Office 2010

    - by DigitalGeekery
    Do you often find yourself frustrated that content you just copied and pasted didn't turn out the way you expected? With the new Live Preview in Office 2010, you can preview how copied content will look when it's pasted, even between Office applications. Not every paste preview option will be available in every circumstance; the available options will be based on the applications being used and what content is copied.

    Copy your content as normal by right-clicking and selecting Copy, pressing Ctrl + C, or selecting Copy from the Home tab. Next, select the location to paste the content. Now you can access the Paste Preview buttons either by selecting the Paste dropdown list from the Home tab, or by right-clicking. As you hover your cursor over each of the Paste Options buttons, you will see a preview of what it will look like if you paste using that option. Click the corresponding button when you find the paste option you like.

    "Paste" will paste all the content and formatting, as you can see below. Values will paste values only, no formatting. Formatting will paste only the formatting, no values. Hover over Paste Special to reveal any additional paste options. The process is similar in other Office applications. As you can see in the Word document below, Keep Text Only will paste the text, but not the orange color format from the original text.

    Even after you've pasted, there is still time to change your mind. After you paste content, you'll see a Paste Options button near your content. If you don't, you can pull it up by pressing the Ctrl key. Note: this is also available after using Ctrl + V to paste. Click to enable the dropdown and select one of the available options.

    Using Live Paste Preview between multiple applications is just as easy. If we preview pasting the content from our Word document into PowerPoint using the Keep Source Formatting option, we'll see that the outcome looks awful. Selecting Use Destination Theme will merge the text into the theme of the PowerPoint document, which looks a lot better on our slide.

    Live Paste Preview is a nice addition to Office 2010 and is sure to save time spent undoing the unexpected consequences of pasting content. Looking for more Office 2010 tips? Check out some of our other Office 2010 posts, like how to create a customized tab on the Office 2010 ribbon, and how to use the streamlined printing features in Office 2010.

    Read the article

  • SQLAuthority News – Follow up on – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    Last month I had a fantastic time with lots of puzzles and brain teasers; the amount of participation I have received on the blog is indeed inspiring to write more. One of the blog posts was about how to replace a column name in all stored procedures. The article had a very interesting conversation as a follow-up. Please read the original article, Replace a Column Name in Multiple Stored Procedure all together, before reading this blog further, as they are connected. Let us start with a few of the interesting comments.

    SQL Server expert Imran Mohammed had a wonderful and excellent first note. I suggest all of you read it. Imran stresses data modelling and logical as well as physical design. Developers must create a logical design and get approval on naming conventions, data types, references, constraints, indexes, etc. He further suggested that one should not cut steps but must follow all the industry standards and guidelines. He extended my blog post with the following note: "Extending Pinal's answer, what you can do is go to database properties, all tasks, script objects; in the scripting wizard select all the stored procedures for which you want to change the column name, export the query to a new window, and then do find and replace, all in one window, and execute the script. But make sure you check what you are replacing; sometimes column names are also used in table names, for example, Table Name: Product and Column Name: ProductId, ProductName." Thanks Imran, great points!

    Gatej Alexandru suggested that it is not a good idea to DROP or CREATE, but rather to use ALTER, as quite possibly there may be permissions issues as well. Very good point; let me see if I can write a blog post about it.

    Vinay Kumar and SQLStudent144 proposed another method to achieve the same. I am combining their solutions and writing them here.

    Step 1. Press Ctrl+T or change to "Results to Text" mode.
    Step 2. Execute the command below, where schema.objectname is the object or table you are searching for:

        SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.' + referencing_entity_name + ']'
        FROM sys.dm_sql_referencing_entities('schema.objectname','OBJECT')

    Step 3. Copy the result and paste it in a new window. Again press Ctrl+T or change to "Results to Text" mode.
    Step 4. Copy the result and paste it in a new window. Execute the query.
    Step 5. Copy the result and paste it in a new window.
    Step 6. Now find your search text in the script, make the necessary changes, and execute the script. Do not forget to remove any code generated in the resultset that is not relevant to the T-SQL script.

    Digitqr suggests we can do this for other objects besides stored procedures as well. Iosif suggests using the SQL Search tool from RedGate. I guess this sums it up well. We have an alternative perspective on our original issue of replacing a column name in multiple stored procedures. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
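    As a concrete illustration of Step 2 above, here is the same query with a hypothetical object name filled in; dbo.Product stands in for whatever table or object you are actually searching for:

        -- Generates one "EXEC sp_helptext [schema.proc]" line per module
        -- that references the hypothetical dbo.Product object.
        SELECT 'EXEC sp_helptext [' + referencing_schema_name + '.'
               + referencing_entity_name + ']'
        FROM sys.dm_sql_referencing_entities('dbo.Product', 'OBJECT');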

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration often becomes a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them: Skyvia. Skyvia is a cloud data integration service that allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting: loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations; it builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply the corresponding SQL Server tools to the data. In such cases you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; the target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

    Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can be either downloaded manually or loaded to cloud file storage services or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization: just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways you can do this task.

    Basic data:
    - You can print a listing of all the items in the outline.
    - With your outline open, choose File/Print...
    - Choose the "Save document as" command on the right, and choose Excel (or xlsx).
    - HINT: If you have not expanded your entire outline, it's faster to use the commands in Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview.
    - NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view.

    Advanced data: If you desire a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center.

    To add additional columns to the HP Quality Center publishing style:
    1. Make a copy of the publishing style. This process ensures that you have a good copy to revert to if something goes wrong with your customizations, and also allows you to keep your modifications when the software is upgraded.
    2. Open the copy of the columnspec.xml file in your favorite XML editor; I use Notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.)
    3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file; you just need to tell the system to include the columns.
    4. You will see a series of column sections (the screenshot from the original post is not reproduced here; a hypothetical sketch appears after this list).
    5. Change the value for "col export" to "yes". This will include the column in the Excel file.
    6. If desired, change the value for "Play_ModesColHeader" to whatever name you wish to appear in the Excel column heading.
    7. Save the columnspec file.
    8. Save the publishing style package.

    Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI
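    For orientation, a column section in columnspec.xml looks roughly like the sketch below. This is purely hypothetical, assembled only from the two names the post mentions ("col export" and "Play_ModesColHeader"); the actual element and attribute names differ between UPK versions, so use it just as a guide for what to look for in your own file:

        <!-- Hypothetical columnspec entry; check your own file for the exact names. -->
        <column export="yes">                      <!-- "no" excludes the column from the report -->
            <header>Play_ModesColHeader</header>   <!-- text shown as the Excel column heading -->
        </column>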

    Read the article

  • Migrating SQL Server Compact Edition (SQL CE) database to SQL Server using Web Matrix

    - by Harish Ranganathan
    One of the things keeping us busy is the Web Camps we are delivering across 5 cities. If you are a reader of this blog and also attended one of these web camps, there is a good chance you have seen me, since I have been at all the locations so far. The topics we cover include Visual Studio 2010 SP1, SQL CE, ASP.NET MVC & HTML5. Whenever I talk about SQL CE, the immediate response is that people are wowed that Microsoft has shipped a FREE compact edition database, an embedded database that can be x-copy deployed. You may think, well, didn't Microsoft ship SQL Express, which is FREE? The difference is that SQL Express runs as a service on the machine (if you open SQL Server Configuration Manager, you can see that SQL Express runs as a service alongside your SQL Server engine, if you have it installed). This means that even if you are willing to use SQL Express when you deploy your application, it needs to be installed on the production machine (hosting provider) and it needs to run as a service. Many hosters don't allow such services to run on their space.

    SQL CE comes as an x-copy deployable database with just a few DLLs required to run it on the machine, and they don't even need to be installed in the GAC on the production machine. In fact, if you have Visual Studio 2010 SP1 installed, you can use the "Add Deployable Dependencies" option in Project Properties and it will detect that SQL CE is something you would probably want to add as a deployable dependency for your project. With that, it bundles the required DLLs as part of the "_bin_deployableAssemblies" folder. So your project can be x-copy deployed and just works fine.

    However, SQL CE has a limit of 4GB of storage space. Real-world applications often require more than 4GB of data storage, and it often turns out that people would like to use SQL CE for the development/ramp-up stages but migrate to full-fledged SQL Server after a while. So it's only natural that the question arises: "How do I move my SQL CE database to SQL Server?" And honestly, there isn't straightforward support for this. I was talking to Ambrish Mishra (PM in the SQL CE team, Hyderabad), since I got this question in almost all the places where we talked about SQL CE. He was kind enough to demonstrate how this can be accomplished using Web Matrix.

    1. Open Web Matrix (Web Matrix can be installed for free from www.microsoft.com/web) and click on "Site from Template".
    2. Click on the "Bakery" template (since by default it uses a SQL CE database and has all the required sample data) and click "OK".
    3. In the project, navigate to the Database tab and you will find that the Bakery site uses a SQL CE database, "bakery.sdf".
    4. Select "bakery.sdf" and you will see the "Migrate" button at the top right.
    5. Once you click the "Migrate" button, the popup wizard opens, configured by default for SQL Express. You can edit it to point to your local SQL Server instance or a remote server.
    6. Upon filling in the server name, username and password, when you click "OK", a couple of things happen:
       - The database is migrated to SQL Server (local or remote, subject to permissions on the remote server). You can open SQL Server Management Studio and connect to the server to verify that the "bakery" database exists under the "Databases" node.
       - You will also notice in Web Matrix that when you navigate to the "Files" tab and open the web.config file, the connection string now points to the SQL Server instance (yes, the Migrate button was smart enough to make this change too).

    And there it is: your SQL Server Compact Edition database, now migrated to SQL Server!! In a future post, I will explain the steps involved when using Visual Studio. Cheers !!!
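    To make the web.config change in step 6 concrete, here is a hypothetical before-and-after of the kind of connection string swap the Migrate button performs. The server name, security settings, and connection string name are placeholders, not values from the post:

        <!-- Before migration: the site points at the embedded SQL CE file. -->
        <connectionStrings>
          <add name="bakery"
               connectionString="Data Source=|DataDirectory|\bakery.sdf"
               providerName="System.Data.SqlServerCe.4.0" />
        </connectionStrings>

        <!-- After migration: the same name now points at the SQL Server instance. -->
        <connectionStrings>
          <add name="bakery"
               connectionString="Data Source=.;Initial Catalog=bakery;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>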

    Read the article

  • Useful SVN and Git commands – Cheatsheet

    - by Madhan ayyasamy
    The following snippets will be helpful to anyone who uses version control systems like Git and SVN.

    SVN:
    - svn checkout/co checkout-url – used to pull an SVN tree from the server.
    - svn update/up – used to update the local copy with the changes made in the repository.
    - svn commit/ci -m "message" filename – used to commit the changes in a file to the repository with a message.
    - svn diff filename – shows the differences between your current file and what is in the repository now.
    - svn revert filename – overwrites the local file with the one in the repository.
    - svn add filename – adds a file to the repository; you must commit your changes before it is reflected in the repository.
    - svn delete filename – deletes a file from the repository; you must commit your changes before it is reflected in the repository.
    - svn move source destination – moves a file from one directory to another or renames a file. It affects your local copy immediately, and the repository after committing.

    Git:
    - git config – sets configuration values for your user name, email, file formats and more.
    - git init – initializes a git repository; creates the initial '.git' directory in a new or existing project.
    - git clone – makes a Git repository copy from a remote source. Also adds the original location as a remote so you can fetch from it again and push to it if you have permissions.
    - git add – adds file changes in your working directory to your index.
    - git rm – removes files from your index and your working directory so they will not be tracked.
    - git commit – takes all of the changes written in the index, creates a new commit object pointing to it and sets the branch to point to that new commit.
    - git status – shows you the status of files in the index versus the working directory.
    - git branch – lists existing branches, including remote branches if '-a' is provided. Creates a new branch if a branch name is provided.
    - git checkout – checks out a different branch; switches branches by updating the index, working tree, and HEAD to reflect the chosen branch.
    - git merge – merges one or more branches into your current branch and automatically creates a new commit if there are no conflicts.
    - git reset – resets your index and working directory to the state of your last commit.
    - git tag – tags a specific commit with a simple, human-readable handle that never moves.
    - git pull – fetches the files from the remote repository and merges them with your local one.
    - git push – pushes all the modified local objects to the remote repository and advances its branches.
    - git remote – shows all the remote versions of your repository.
    - git log – shows a listing of commits on a branch including the corresponding details.
    - git show – shows information about a git object.
    - git diff – generates patch files or statistics of differences between paths or files in your git repository, or your index or your working directory.
    - gitk – graphical Tcl/Tk based interface to a local Git repository.

    Read the article

  • Customize SharePoint list using InfoPath2010 form Part4

    - by ybbest
    Customize SharePoint list using InfoPath2010 form Part1
    Customize SharePoint list using InfoPath2010 form Part2
    Customize SharePoint list using InfoPath2010 form Part3

    In this post, I'd like to show you how to create print functionality in InfoPath for a SharePoint list. The print functionality is provided out of the box in InfoPath form libraries; however, it is not available in SharePoint lists. Here are the steps to create the print functionality. You can download the new form here.

    1. Create a print page in the list by copying displayifs.aspx and renaming the copy to Printifs.aspx.
    2. Open the page in SharePoint Designer and copy the following JavaScript into the PlaceHolderTitleAreaClass ContentPlaceHolder:

        <script type="text/javascript">
        $(document).ready(function(){
            $("[id^='Ribbon']").hide();
            $(".s4-title").hide();
            $("[id='s4-leftpanel']").hide();
            $("[id='s4-ribbonrow']").hide();
            $("[id='s4-titlerow']").hide();
            $("[id='s4-titlerow']").css("height", "0px");
            $("body").css("background-color", "white");
            $("body").css("zoom", "135%");
            $("[id='MSO_ContentTable']").css("margin-left", "0px");
            $("[id='MT-BodyContent']").css("width", "900px");
            $(".MT-BodyArea").css("width", "900px");
            $("[id='MT-Layout']").css("width", "900px");
            $(".ms-bodyareacell").css("width", "900px");
            $(".s4-wpTopTable").css("border", "none");
            $("[id$='XmlFormView']").css("margin-left", "-80px");
            $("body").css("margin-top", "-30px");
            $(":contains('CAPEX')").css("border", "5px solid #FFCC00");
            window.print();
        });
        </script>

    3. Open the InfoPath form for the list and create a field called PrintLink.
    4. Set the default value of PrintLink so that it points to the print page just created, with the query string id. You can download the formula for the default value here.
    5. Add a new image that looks like a Print button on the display view, so its URL can be set to the PrintLink field. (The reason I did not use a button is that you cannot set the navigate URL for a button.)
    6. Set the URL of the image to the PrintLink field.
    7. Next, create the print view.
    8. Copy the contents from the display view to the print view.
    9. Finally, go to Printifs.aspx and edit the InfoPath web part to set the view to PrintView.
    10. Republish your form and you will see the form as shown below.
    11. If you click the Print button, you will see the print page and print dialog; you can also add a company logo to the print page using CSS.
    12. To deploy the customization, you can use the backup-and-restore content database approach; you can get more details from my previous blog post here.

    Read the article

  • Implementing Search for BlogReader Windows 8 Sample

    - by Harish Ranganathan
    The BlogReader sample is an excellent place to start speeding up your Windows 8 development skills. The tutorial is available here and the complete source code is available here.

    Create a project called WindowsBlogReader, create pages for ItemsPage.xaml, SplitPage.xaml and DetailPage.xaml, and copy the corresponding code blocks from the sample listed above. Create a class file FeedData.cs and copy the code. Finally, create a class DateConverter.cs and copy the code associated with it. With that you should be able to build and run the project. There seems to be one issue with the sample feeds listed, in that the first feed (feed1) doesn't seem to be exposed. So you can skip that and use the second feed as the first feed. You will end up with one feed less, but it works.

    I had demonstrated this in the recent TechDays at Chennai: how we can use the Search contract to implement search within the blog titles.

    First off, we need to declare that the app will be using the Search contract, in the Package.appxmanifest file.

    Next, we need a handle on the Search contract when the user types in the search window in the Charms menu. If you have completed the code sample from the link above, you will have ItemsPage.xaml and ItemsPage.xaml.cs. Open ItemsPage.xaml.cs and import the namespaces System.Collections.ObjectModel and System.Linq. In the ItemsPage() constructor, right after this.InitializeComponent();, add the following code:

        Windows.ApplicationModel.Search.SearchPane.GetForCurrentView().QuerySubmitted += ItemsPage_QuerySubmitted;

    This event is fired when users open the Search panel from the Charms menu, type something, and hit Enter. We need to handle the event declared in the delegate. For that, we need to pull the FeedDataSource instantiation up to the root of the class to make it global, so add the following as the first line within the partial class:

        FeedDataSource feedDataSource;

    Also, modify the LoadState method as follows:

        protected override void LoadState(Object navigationParameter, Dictionary<String, Object> pageState)
        {
            feedDataSource = (FeedDataSource)App.Current.Resources["feedDataSource"];
            if (feedDataSource != null)
            {
                this.DefaultViewModel["Items"] = feedDataSource.Feeds;
            }
        }

    Next is to implement the ItemsPage_QuerySubmitted method:

        void ItemsPage_QuerySubmitted(Windows.ApplicationModel.Search.SearchPane sender, Windows.ApplicationModel.Search.SearchPaneQuerySubmittedEventArgs args)
        {
            this.DefaultViewModel["Items"] = from dynamic item in feedDataSource.Feeds
                                             where item.Title.Contains(args.QueryText)
                                             select item;
        }

    As you can see, we are almost using the same default view model, with the change that we are using a LINQ query to search the feeds whose Title contains the QueryText.

    With this we are ready to run the app. Run the app, hit the Charms menu with the Windows + C key combination, and type some text to search within the blog. You can see that it filters the blogs which have the matching text. We can modify the above LINQ query to search for the text in other attributes like the description, actual blog content, etc. I have uploaded the complete code since the original WindowsBlogReader code is not available for download. You can download it from here.

    note: this code is provided as-is without any warranties. Cheers!!!
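    For instance, here is a sketch of the same query widened to also match the feed description, as the closing paragraph suggests. This assumes the feed objects expose a Description property; check your copy of FeedData.cs, since with dynamic binding a missing property only fails at runtime:

        // Hypothetical widening of the search: match title or description.
        this.DefaultViewModel["Items"] = from dynamic item in feedDataSource.Feeds
                                         where item.Title.Contains(args.QueryText)
                                            || item.Description.Contains(args.QueryText)
                                         select item;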

    Read the article

  • What C++ coding standard do you use?

    - by gablin
    For some time now, I've been unable to settle on a coding standard and use it consistently between projects. When starting a new project, I tend to change some things around (add a space there, remove a space there, add a line break there, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^ So here goes:

        #ifndef _DERIVED_CLASS_H__
        #define _DERIVED_CLASS_H__

        /**
         * This is an example file used for sampling code layout.
         *
         * @author Firstname Surname
         */

        #include <cstdio>
        #include <iostream>
        #include <string>
        #include <list>

        #include "BaseClass.h"
        #include "Stuff.h"

        /**
         * The DerivedClass is completely useless. It represents uselessness in all
         * its entirety.
         */
        class DerivedClass : public BaseClass
        {
            ////////////////////////////////////////////////////////////
            // CONSTRUCTORS / DESTRUCTORS
            ////////////////////////////////////////////////////////////

        public:

            /**
             * Constructs a useless object with default settings.
             *
             * @param value
             *        Is never used.
             * @throws Exception
             *         If something goes awry.
             */
            DerivedClass (const int value)
                : uselessSize_ (0)
            {}

            /**
             * Constructs a copy of a given useless object.
             *
             * @param object
             *        Object to copy.
             * @throws OutOfMemoryException
             *         If necessary data cannot be allocated.
             */
            DerivedClass (const DerivedClass& object)
            {}

            /**
             * Destroys this useless object.
             */
            ~DerivedClass ();

            ////////////////////////////////////////////////////////////
            // PUBLIC METHODS
            ////////////////////////////////////////////////////////////

        public:

            /**
             * Clones a given useless object.
             *
             * @param object
             *        Object to copy.
             * @return This useless object.
             */
            DerivedClass& operator= (const DerivedClass& object)
            {
                stuff_ = object.stuff_;
                uselessSize_ = object.uselessSize_;
                return *this;
            }

            /**
             * Does absolutely nothing.
             *
             * @param useless
             *        Pointer to useless data.
             */
            void doNothing (const int* useless)
            {
                if (useless == NULL)
                {
                    return;
                }
                else
                {
                    int womba = *useless;
                    switch (womba)
                    {
                        case 0:
                            std::cout << "This is output 0";
                            break;
                        case 1:
                            std::cout << "This is output 1";
                            break;
                        case 2:
                            std::cout << "This is output 2";
                            break;
                        default:
                            std::cout << "This is default output";
                            break;
                    }
                }
            }

            /**
             * Does even less.
             */
            void doEvenLess ()
            {
                int mySecret = getSecret ();
                int gather = 0;
                for (int i = 0; i < mySecret; i++)
                {
                    gather += 2;
                }
            }

            ////////////////////////////////////////////////////////////
            // PRIVATE METHODS
            ////////////////////////////////////////////////////////////

        private:

            /**
             * Gets the secret value of this useless object.
             *
             * @return A secret value.
             */
            int getSecret () const
            {
                if ((RANDOM == 42) && (stuff_.size() > 0) || (1000000000000000000 > 0) && true)
                {
                    return 420;
                }
                else if (RANDOM == -1)
                {
                    return ((5 * 2) + (4 - 1)) / 2;
                }
                int timer = 100;
                bool stopThisMadness = false;
                while (!stopThisMadness)
                {
                    do
                    {
                        timer--;
                    } while (timer > 0);
                    stopThisMadness = true;
                }
                return timer; // ensure every path returns a value
            }

            ////////////////////////////////////////////////////////////
            // FIELDS
            ////////////////////////////////////////////////////////////

        private:

            /**
             * Don't know what this is used for.
             */
            static const int RANDOM = 42;

            /**
             * List of lists of stuff.
             */
            std::list <Stuff> stuff_;

            /**
             * Specifies the size of this object's uselessness.
             */
            size_t uselessSize_;
        };

        #endif

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?

    Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are, and always have been, taken automatically for disaster recovery purposes, but these are strictly on-cloud copies, and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.

    Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end, MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give BACPAC the bum's rush.

    Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task.

    Cheers, Tony.
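    As an illustration of the BCP route mentioned above, a minimal sketch follows. The server, database, table, and credentials are all placeholders, and the caveat about transactional consistency across related tables applies here just as it does to BACPACs:

        rem Export one table's rows from SQL Azure to a local native-format file.
        bcp MyDb.dbo.Orders out Orders.dat -n -S myserver.database.windows.net -U myuser@myserver -P mypassword

        rem Load the same file into a local copy of the table on a local instance.
        bcp MyDbLocal.dbo.Orders in Orders.dat -n -S localhost -T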

    Read the article

  • Simple Navigation In Windows Phone 7

    - by PeterTweed
    Take the Slalom Challenge at www.slalomchallenge.com!

    When moving to the mobile platform, all applications need to be able to provide different views. Navigating around views in Windows Phone 7 is a very easy thing to do. This post will introduce you to the simplest technique for navigation in Windows Phone 7 apps.

    Steps:
    1. Create a new Windows Phone Application project.
    2. In the MainPage.xaml file, copy the following XAML into the ContentGrid Grid:

        <StackPanel Orientation="Vertical" VerticalAlignment="Center">
            <TextBox Name="ValueTextBox" Width="200"></TextBox>
            <Button Width="200" Height="30" Content="Next Page" Click="Button_Click"></Button>
        </StackPanel>

    This gives a text box for the user to enter text and a button to navigate to the next page.

    3. Copy the following event handler code to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            NavigationService.Navigate(new Uri(string.Format("/SecondPage.xaml?val={0}", ValueTextBox.Text), UriKind.Relative));
        }

    The event handler uses the NavigationService.Navigate() function. This is what makes the navigation to another page happen. The function takes a Uri parameter with the name of the page to navigate to and the indication that it is a Uri relative to the current page. Note also that the query string is formatted with the value entered in the ValueTextBox control, in a similar manner to a standard web query string.

    4. Add a new Windows Phone Portrait Page to the project named SecondPage.xaml.
    5. Paste the following XAML in the ContentGrid Grid in SecondPage.xaml:

        <Button Name="GoBackButton" Width="200" Height="30" Content="Go Back" Click="Button_Click"></Button>

    This provides a button to navigate back to the first page.

    6. Copy the following event handler code to the SecondPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            NavigationService.GoBack();
        }

    This tells the application to go back to the previously displayed page.

    7. Add the following code to the constructor in SecondPage.xaml.cs:

        this.Loaded += new RoutedEventHandler(SecondPage_Loaded);

    8. Add the following Loaded event handler to the SecondPage.xaml.cs file:

        void SecondPage_Loaded(object sender, RoutedEventArgs e)
        {
            if (NavigationContext.QueryString["val"].Length > 0)
                MessageBox.Show(NavigationContext.QueryString["val"], "Data Passed", MessageBoxButton.OK);
            else
                MessageBox.Show("{Empty}!", "Data Passed", MessageBoxButton.OK);
        }

    This code pops up a message box displaying either the text entered on the first page or the message "{Empty}!" if no text was entered.

    9. Run the application, enter some text in the text box and click on the Next Page button to see the application in action.

    Congratulations! You have created a new Windows Phone 7 application with page navigation.

    Read the article

  • West Palm Beach Developer Group March 2012 Meeting Recap

    - by Sam Abraham
    Our West Palm Beach Developer Group March 2012 meeting featured Herve Roggero, Azure MVP and co-founder of Pyn Logic. Herve covered multiple case studies on migrating existing applications to SQL Azure. This event was quite popular, filling our meeting room. We would like to thank PC Professor for hosting our meeting and Sherlock Technology for sponsoring our free food. Our April 24th, 2012 meeting will feature Will Tartak, who will demonstrate how he used ServiceStack.net to quickly develop Windows Phone and Android applications. For more information and to register, please visit: http://www.fladotnet.com/Reg.aspx?EventID=585

    Read the article

  • O’Reilly Deals to 9/June/2014 05:00 PT – 50% off E-Books on Regular Expressions

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/06/orsquoreilly-deals-to-9june2014-0500-ptndash50-off-e-books-on-regular.aspx

    Until 9/June/2014 05:00 PT, O’Reilly are offering 50% off e-books on regular expressions at http://shop.oreilly.com/category/deals/regular-expressions-owo.do?code=DEAL&imm_mid=0bd938&cmp=em-prog-books-videos-lp-dod_regex

    “Regular expressions—powerful tools for manipulating text and data—are now standard features in most languages and tools. Yet despite their widespread availability and unparalleled power, regular expressions are frequently underused. With ebooks and videos from shop.oreilly.com, learn tips for matching, extracting, and transforming text and data. Today only, save 50% and discover the epic functionality of Reg Ex.”
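    Since the deal is all about regular expressions, here is a quick taste of the "matching, extracting, and transforming" the blurb describes. A minimal C# sketch; the input string and pattern are invented purely for illustration:

        using System;
        using System.Text.RegularExpressions;

        class RegexTaste
        {
            static void Main()
            {
                // Pull an order id and an ISO date out of a free-text line
                // using named capture groups.
                var input = "Order #1234 shipped on 2014-06-06";
                var match = Regex.Match(input, @"#(?<id>\d+).*?(?<date>\d{4}-\d{2}-\d{2})");
                if (match.Success)
                {
                    Console.WriteLine(match.Groups["id"].Value);   // 1234
                    Console.WriteLine(match.Groups["date"].Value); // 2014-06-06
                }
            }
        }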

    Read the article

  • Ever helpful Windows…

    - by John Breakwell
    I’m doing some troubleshooting for a relative and asked them to send me a zipped copy of their registry, which they dutifully did. When I tried to extract the registry file, though, Windows jumped in the way and said “No”. This made sense, as registry files are dangerous things in the hands of the ignorant. So I clicked the link to see if it would tell me how to get at the .reg file, but found the result less than helpful. So it was off to the Internet, where I found an excellent answer on how to get round this. Now on to the much harder part of fixing the original problems.

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >