Search Results

Search found 13937 results on 558 pages for 'save money'.


  • Silverlight Cream for May 08, 2010 -- #858

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Jaime Rodriguez, Senthil Kumar, Mike Snow, DaveDev, Gergely Orosz, Kirupa, Cheryl Simmons, András Velvárt, Dan Wahlin, Michael D. Brown, and Ben Rush.
    Shoutouts: Erik Mork and crew have their latest up: This Week In Silverlight – Where’s the Tablet? (http://www.sparklingclient.com/wheres-the-silverlight-tablet/). Chris Rouw has a good link post and instructions on WCF RIA Services: Deploying and Configuring Silverlight 4 and WCF RIA Services.
    From SilverlightCream.com:
    Quick and Easy Scalable Rounded Bevels - Phil Middlemiss duplicates some bevel-edged rectangles in Blend, and they look great. Now you don't have to import all the other Photoshop bits to get those things looking the way you want!
    A Transparent Windows Phone FAQ - Jaime Rodriguez combined a bunch of information into a WP7 FAQ that he's going to keep up to date, so bookmark the page. He also has links to the online and offline versions of the Training Kit.
    Windows Phone Developer Training Kit April Refresh is now available for Download - Thanks to Senthil Kumar, I found out there is an April refresh of the WP7 Training Kit at Channel 9 -- go get yours now --- I'll still be here when you get back!
    Silverlight Tip of the Day #16 – Working with IgnoreImageCache - Mike Snow's Tip of the Day #16 covers IgnoreImageCache, and like many other things in life, until you read Mike's post you may be surprised at how it works.
    DoodlePad – A fun, free sketching application for Windows Phone 7 - DaveDev has a new WP7 app up that lets you or your kids 'Doodle' on the phone... could be a note, or could be a drawing... good post with all the links you need to get this cranked up on the emulator.
    Printing in Silverlight: Printing Charts and Auto Scaling - Gergely Orosz's latest post is a very useful one on auto-scaling charts to fit a printed page and then getting them to print.
    Smoothly Scrolling a ListBox - Check out the smooth scrolling Kirupa has on the ListBox near the top of his post... all good stuff... you wanna know how to do that! Plus... it's dead simple and all in Blend :)
    Cheryl Simmons has a great tip up at the SilverlightSDK if you haven't burned through to figure it out yet... changing the watermark on a DatePicker control... looks great!
    The story of a wicked bug - András Velvárt tells the story of a bug that just defied logic or being found. Read how he tracked it down and what it actually was... it could save you some time. Lesson learned: if I have a problem that bad, I'm calling András :)
    Text Trimming in Silverlight 4 - Dan Wahlin gives a quick run-through of what text trimming is, followed by a good real example... check it out and start using it in your projects.
    Enterprise Patterns with WCF RIA Services - Michael D. Brown has an article in MSDN Magazine on RIA Services. Great information and a link-packed article, with all the source available for download.
    Building Custom Players with the Silverlight Media Framework - Ben Rush has a nice long tutorial on the Silverlight Media Framework up on the MSDN Magazine site... lots of information in there.


  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction
    Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server’s Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases.
    It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are implemented. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.
    Database settings to enforce on each user database:
    Auto Close and Auto Shrink properties of the database set to False
    Auto Create Statistics and Auto Update Statistics set to True
    Compatibility Level of all user databases set to 100
    Page Verify set to CHECKSUM
    Recovery Model of all user databases set to Full
    Restrict Access set to MULTI_USER
    Configure a Policy to Verify Database Settings
    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In Object Explorer, click Management > Policy Management and you will see Policies, Conditions & Facets as child nodes.
    3. Right-click Policies and select New Policy… from the drop-down list as shown in the snippet below to open the Create New Policy popup window.
    4. In the Create New Policy popup window, provide the name of the policy as “Implementing and Verify Database Settings for Production Databases” and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open the Create New Condition window.
    5. In the Create New Condition popup window, provide the name of the condition as “Verify and Change Database Settings”. In the Facet drop-down list, choose the Database Options facet as shown in the snippet below. Under Expression, select the Field value @AutoClose, choose the Operator value ‘=’, and finally choose the Value False. Now that you have successfully added the first field, you can go ahead and add the rest of the fields as shown in the snippet below. Once you have successfully added all of the Database Options fields shown above, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition “Verify and Change Database Settings” is selected by default. Continues…
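
    As a quick sanity check before (or alongside) the policy, the same settings can be read straight from the sys.databases catalog view. A minimal T-SQL sketch covering only the six settings listed above (the WHERE clause simply skips the four system databases):

        -- List user databases that violate any of the settings above
        SELECT name,
               is_auto_close_on,           -- should be 0
               is_auto_shrink_on,          -- should be 0
               is_auto_create_stats_on,    -- should be 1
               is_auto_update_stats_on,    -- should be 1
               compatibility_level,        -- should be 100
               page_verify_option_desc,    -- should be 'CHECKSUM'
               recovery_model_desc,        -- should be 'FULL'
               user_access_desc            -- should be 'MULTI_USER'
        FROM sys.databases
        WHERE database_id > 4
          AND (is_auto_close_on = 1 OR is_auto_shrink_on = 1
               OR is_auto_create_stats_on = 0 OR is_auto_update_stats_on = 0
               OR compatibility_level <> 100
               OR page_verify_option_desc <> 'CHECKSUM'
               OR recovery_model_desc <> 'FULL'
               OR user_access_desc <> 'MULTI_USER');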


  • Creating and using VM Groups in VirtualBox

    - by Fat Bloke
    With VirtualBox 4.2 we introduced the Groups feature, which allows you to organize and manage your guest virtual machines collectively, rather than individually. Groups are quite a powerful concept and there are a few nice features you may not have discovered yet, so here's a bit more information about groups and how they can be used....
    Creating a group
    Groups are just ad hoc collections of virtual machines and there are several ways of creating a group. In the VirtualBox Manager GUI: drag one VM onto another to create a group of those two VMs, then drag and drop more VMs into that group; or select multiple VMs (using Ctrl or Shift and click), then select the menu Machine...Group, or press Cmd+U (Mac) or Ctrl+U (Windows), or right-click the multiple selection and choose Group, like this:
    From the command line: group membership is an attribute of the VM, so you can modify the VM to belong in a group. For example, to put the VM "Ubuntu" into the group "TestGroup" run this command:
    VBoxManage modifyvm "Ubuntu" --groups "/TestGroup"
    Deleting a Group
    Groups can be deleted by removing the group attribute from all the VMs that constitute that group. To do this via the command line, the syntax is:
    VBoxManage modifyvm "Ubuntu" --groups ""
    In the VirtualBox Manager, this is more easily done by right-clicking on a group header and selecting "Ungroup", like this:
    Multiple Groups
    Now that we understand that groups are just attributes of VMs, it can be seen that VMs can exist in multiple groups. For example, this:
    VBoxManage modifyvm "Ubuntu" --groups "/TestGroup","/ProjectX","/ProjectY"
    results in the VM appearing in all three groups. Via the VirtualBox Manager, you can drag VMs while pressing the Alt key (Mac) or Ctrl (other platforms).
    Nested Groups
    Just like you can drag VMs around in the VirtualBox Manager, you can also drag whole groups around. Dropping a group within a group creates a nested group. Via the command line, nested groups are specified using a path-like syntax, like this:
    VBoxManage modifyvm "Ubuntu" --groups "/TestGroup/Linux"
    ...which creates a sub-group and puts the VM in it.
    Navigating Groups
    In the VirtualBox Manager, groups can be collapsed and expanded by clicking on the caret at the left of the group header. But you can also enter and leave groups too, either by using the right-arrow/left-arrow keys, or by clicking on the caret on the right-hand side of the group header, like this:
    ...leading to a view of just the group contents. You can leave or return to the parent in the same way. Don't worry if you are imprecise with your clicking: you can double-click on the entire right half of the group header to enter a group, and the left half to leave a group. Double-clicking on the left half when you're at the top will roll up (collapse) the group.
    Group Operations
    The real power of groups is not simply in arranging them prettily in the Manager. Rather, it is about performing collective operations on them, once you have grouped them appropriately. For example, let's say that you are working on a project (Project X) where you have a solution stack of: a database VM, a middleware/app VM, and a couple of client VMs which you use to test your app. With VM Groups you can start the whole stack with one operation: select the group header, and choose Start (a command-line sketch of the same idea follows this post).
    The full list of operations that may be performed on groups:
    Start (from any state: boot or resume)
    Start VMs in headless mode (hold Shift while starting)
    Pause
    Reset
    Close (Save state, Send Shutdown signal, Poweroff)
    Discard saved state
    Show in filesystem
    Sort
    Conclusion
    Hopefully we've shown that the introduction of VM Groups not only makes Oracle VM VirtualBox pretty, but pretty powerful too.  - FB
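
    There is no single VBoxManage verb for these group operations (the Manager GUI drives them), but because group membership is just a VM attribute they are easy to script. A rough shell sketch, assuming a group named "/ProjectX", VM names without embedded quotes, and a VirtualBox version (4.2+) whose machine-readable VM info includes a groups= line:

        # Start every VM in the /ProjectX group in headless mode
        VBoxManage list vms | awk -F'"' '{print $2}' | while read vm; do
          # note: the pattern also matches group names that merely contain "ProjectX"
          if VBoxManage showvminfo "$vm" --machinereadable | grep -q '^groups=".*ProjectX'; then
            VBoxManage startvm "$vm" --type headless
          fi
        done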


  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the Dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the Dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of down time.
    Workaround: ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.
    Pros: Does not require a complete rebuild of the data (ProcessFull) for either the Dimension or Cube. User access can continue while the ProcessIndexes is underway.
    Cons: Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for Aggregation and Index building.
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel>
        <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
          <Object>
            <DatabaseID>MyDatabase</DatabaseID>
            <CubeID>MyCube</CubeID>
          </Object>
          <Type>ProcessIndexes</Type>
          <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
        </Process>
      </Parallel>
    </Batch>
    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group. This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.
    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. The ProcessIndexes does not pick up data changes.
    Things that did not work:
    ProcessClearIndexes - why this doesn't work and ProcessIndexes does is unclear at this point.
    ProcessFull on the partition in question. In my latest case, this would clear up the problem for that partition. However, the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem will exist in all partitions in the affected measure group that have data in them.
    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, built separately from the same relational database. This leads me to believe that some data condition in the tables used for the Dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.
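
    To scope the ProcessIndexes to just the measure group Profiler pointed at, the <Object> reference can carry a MeasureGroupID as well. A sketch along the lines of the batch above, with MyMeasureGroup as a placeholder ID and the namespace declarations trimmed for brevity:

        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process>
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
                <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>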


  • Book review (Book 6) - Wikinomics

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here. The book I chose for November 2011 was: Wikinomics: How Mass Collaboration Changes Everything, by Don Tapscott.
    Why I chose this book: I’ve heard a lot about this book - it was one of the “must read” kind of business books (many of which are very “fluffy”) and supposedly deals with collaborating using technology - so I wanted to see what it says about collaborative efforts and how I can leverage them.
    What I learned: I really disliked this book. I’ve never been a fan of the latest “business book”, and sadly that’s what this felt like to me. A “business book” is what I call a work that has a fairly simple concept to get across, and then proceeds to use various made-up terms, analogies and other mechanisms to fill hundreds of pages doing it. This perception is partly my own fault - the book is pretty old, and these things go stale quickly. The author’s general point (at least what I took away from it) was: open source is good, proprietary is bad; collaboration is the hallmark of successful companies. In my mind, you can save yourself the trouble of reading this work if you get these two concepts down. Don’t get me wrong - open source is awesome, and collaboration is a good thing, especially in places where it fits. But it’s not a panacea, as the author seems to indicate. For instance, he continuously uses the example of MySpace to show a “2.0” company, which I think means that you can enter text as well as read it on a web page. All well and good. But we all know what happened to MySpace, and of course he missed the point entirely about this new web environment: low barriers to entry often mean low barriers to exit. And as for the open, collaborative company being the best model - well, I think we all know a certain computer company famous for phones and music that is arguably quite successful, and is probably one of the most closed, non-collaborative (at least with its customers) on the planet. So that sort of takes away that argument. The reality of business is far more complicated. Collaboration is an amazing tool, and should be leveraged heavily. However, at the end of the day, after you do your research you need to pick a strategy and stick with it. Asking thousands of people to assist you in building your product probably will not work well. Open source is great - but some proprietary products are quite functional as well, have a long track record, are well supported, and will probably be upgraded. Everything has its place, so use what works where it is needed. There is no single answer, sadly.
    So did I waste my time reading the book? Did I make a bad choice? Not at all! Reading the opinions and thoughts of others is almost always useful, and it’s important to consider opinions other than your own. If nothing else, thinking through the process either convinces you that you are wrong, or helps you understand better why you are right.


  • BizTalk 2009 - Architecture Decisions

    - by StuartBrierley
    In the first step towards implementing a BizTalk 2009 environment, from development through to live, I put forward a proposal that detailed the options available, as well as the costs and benefits associated with these options, to allow an informed discussion to take place with the business drivers and budget holders of the project. This ultimately led to a decision being made to implement an initial BizTalk Server 2009 environment using the Standard Edition of the product.
    It is my hope that in the long term, as projects require and allow, we will be looking to implement my ideal recommendation of a multi-server enterprise-level environment, but given the differences in cost and the likely initial workload for the environment this was not something that I could fully recommend at this time. However, it must be noted that this decision was made in full awareness of the limits of the Standard Edition, and the business drivers of this project were made fully aware of the risks associated with running without the failover capabilities of the Enterprise Edition.
    When considering the creation of this new BizTalk Server 2009 environment, I have also recommended the creation of the following pre-production environments:
    Development - Development of solutions; unit testing against technical specifications; initial load testing; testing of deployment packages. (Visual Studio; BizTalk; SQL; client PCs/laptops; server environment similar to the live implementation.)
    Test - Testing of solutions against business and technical requirements. (BizTalk; SQL; server environment similar to the live implementation.)
    Pseudo-Live - An "as live" environment to allow testing against the live implementation; acts as backup hardware in case of failure of the live environment. (BizTalk; SQL; server environment identical to the live implementation.)
    The creation of these differing environments allows for the separation of the various stages of the development cycle. The development environment is for use when actively developing a solution; it is a potentially volatile environment whose state at any given time cannot be guaranteed. It allows developers to carry out initial tests in an environment that is similar to the live environment and also provides an area for the testing of deployment packages prior to any release to the test environment.
    The test environment is intended to be a semi-volatile environment that is similar to the live environment. It will change periodically through the development of a solution (or solutions) but should be otherwise stable. It allows for the continued testing of a solution against requirements without the worry that the environment is being actively changed by any ongoing development. This separation of development and test is crucial in ensuring the quality and control of the tested solution.
    The pseudo-live environment should be considered an almost static environment. It should mimic the live environment and can act as backup hardware in the case of live failure. This environment acts as an area to allow for "as live" testing, where the performance and behaviour of the live solutions can be replicated. There should be relatively few changes to this environment, with software releases limited to "release candidate" level releases prior to going live.
    Whereas the pseudo-live environment should always mimic the live environment, to save on costs the development and test servers could be implemented on lower-specification hardware. Consideration can also be given to the use of a virtual server environment to further reduce hardware costs in the development and test environments; indeed this virtual approach can also be extended to pseudo-live and live, assuming the underlying technology is in place. Although there is no requirement for the development and test server environments to be identical to live, the overriding architecture implemented should be the same as in live, and an understanding must be gained of the performance differences to be expected across the different environments.


  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest.
    Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. The intention - my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things - history and databases. And what better way to cover these two topics than to talk about the history of databases.
    If you want to be technical, databases as we know them today only date back to the late 1960’s and early 1970’s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago - there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records - ancient man didn’t decide they suddenly wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping - and thus databases.
    If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years.
    Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database.
    Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives when he accidentally set fire to it. As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant.
    Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like!
    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks: Load an OS onto an SD card Get the Pi connected to the network Load a JDK I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly-older version of the Pi and not being able to find anything specific online to help me get past it, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly. There is a great resource page here that covers several ways to get the OS on an SD card, but here is what I did (on a Mac): Plug SD card into reader on/in Mac Format it (FAT32) Unmount it (diskutil unmountDisk diskn, where n is the disk number representing the SD card) Transfer the disk image for Debian to the SD card (dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m) Eject the card from the Mac (diskutil eject diskn) There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as it kept killing the ability of the Pi to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi. Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one! Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck there was. :-) Here is the sequence of steps I used to make the magic happen: Plug your newly-minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity) Boot your Pi. On the first boot, do the following things: Opt to have it use all space on the SD card (will require a reboot eventually) Disable overscan Set your timezone Enable the ssh server Update raspi-config Reboot your Pi. This will reconfigure the SD to use all space (see above). After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, just type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved.  ;-) With the OS upgrade finished, now you can follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic. 
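
    For reference, the SD-card steps above condense to three commands. A sketch assuming the card shows up as /dev/disk2 and the Debian image named above - double-check the disk number first, since dd will happily overwrite the wrong disk:

        diskutil list                                           # identify the SD card, e.g. /dev/disk2
        diskutil unmountDisk /dev/disk2
        sudo dd if=2012-08-08-wheezy-armel.img of=/dev/disk2 bs=1m
        diskutil eject /dev/disk2
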
One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration. To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time! All the best, Mark 


  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question on the web app development and deployment process. Here is some background information. I use PHP for server-side scripting and JavaScript for the client side. I use NetBeans and Notepad++. I use Firefox and Firebug for debugging and testing. The process I use is very amateurish: I code something in NetBeans, something in Notepad++, and since there is nothing to compile, I just refresh the Firefox browser and test it. This is convenient and fast compared to the Java development environment, where you would have to at least compile and deploy the jar files before you could run them. I have been thinking of putting a formal process in place for my development and find it hard to put together. There are so many things to do before you can deploy your final web app. I keep hearing about JSLint, compression, unit testing (Selenium), Ant, the YUI Compressor etc., but I am now looking for some steps that I can take to make me more organized. For example, I use NetBeans but don't use any projects within it. I directly update the files. I don't use any source control, but use my Iomega backup that saves each save into a different version, and at the end of the day I back up the dev directory to my Amazon S3 account. For me the development environment is just a DEV directory, TEST is my intermediate stage, and PROD is the final directory that gets pushed out to the server. But all these directories are in the same Apache home. I have a few PHP scripts that just copy the needed files into the production directory. That's about it for my development approach. I know I am missing the following:
    - Regression testing (manual or automated??)
    - Automated testing (Selenium??)
    - Automated deployment (Ant??)
    - Source control (SVN??)
    - Quality control (JSLint??)
    Can someone explain what the missing steps are and how to go about filling them in, in order to have a more professional approach? I am looking for tools with example tutorials for streamlining the whole development-to-deployment stage. For me, just getting the hang of database, server-side, and client-side development all in synchronization was itself a huge accomplishment. And now I feel there is a lot missing before you can produce a quality web application. For example, I see a lot of mention of using automated testing, but how do I put it to use with respect to JavaScript and PHP? How do I use Ant for the deployment etc.? Is this all too much for a single or two-person development team? Is there a way to automate all the above so that I just keep coding in NetBeans and then run a batch file that is configured once and run every time to produce the code in the production directory? A lot of this information is scattered on the web and here; if someone can guide me, I would be happy to consolidate it here. Thank you for your patience :)
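
    One concrete, low-effort starting point for the automation asked about above is to script the DEV-to-TEST/PROD promotion instead of using ad hoc PHP copy scripts. A minimal shell sketch under these assumptions: the directory layout from the question (paths are hypothetical), Subversion as the source control being considered, and rsync available on the server:

        #!/bin/sh
        # promote.sh - check the day's work in, then promote DEV to TEST or PROD
        SRC=/var/www/dev
        DEST=/var/www/${1:?usage: promote.sh test|prod}
        svn commit -m "checkpoint before promotion" "$SRC"    # replaces the Iomega/S3 versioning
        rsync -av --delete --exclude='.svn' "$SRC"/ "$DEST"/  # mirror DEV into the target directory

    Lint, minification, and Selenium runs can then be bolted onto the same script one step at a time, which is usually easier for a one- or two-person team than adopting a full Ant build up front.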


  • Create Panoramic Photos with Windows Live Photo Gallery

    - by Matthew Guay
    Have you ever wanted to capture the view from a mountain or the full size of a building? Here's how you can stitch multiple shots together into the perfect panoramic picture for free with Windows Live Photo Gallery.
    Getting Started
    First, make sure you have Windows Live Photo Gallery installed (link below). Live Photo Gallery is part of the Windows Live Essentials suite; you can select other programs to install along with it if you want. Make sure to uncheck setting your home page to MSN and setting your search provider to Bing if you don't want them changed.
    Now, make sure you have pictures that will work well for a panorama. These need to be taken from the same spot, and the edges of the pictures need to overlap so the program can find where the pictures connect. Here we have taken pictures inside a building with a cell phone camera.
    Make your Panorama
    Open Live Photo Gallery, and find the pictures you want to use in your panorama. It will automatically index and display all of the photos in your Pictures folder, or Library if you're using Windows 7. If your pictures are saved elsewhere, add that folder to Photo Gallery. Click File, Include a folder in the gallery, and select the correct folder at the prompt. Now select all of the pictures that you will use in your panorama. You can easily do this by clicking the checkbox on each picture that appears when you hover over it. Once all of the pictures are selected, click Make in the menu bar and select Create panoramic photo… Alternately, right-click on any of the pictures you've selected, and click Create panoramic photo…
    Live Photo Gallery will analyze your photos and composite them together to create a panorama. The amount of time it takes will vary depending on the number of photos, the size of the pictures, and computer speed. When it's finished making the panorama, you'll be prompted to enter a file name and save the picture. Your new panorama picture will open as soon as it's saved. Depending on your shots, the picture may have quite a bit of black space around the edges where each picture didn't cover the exact same amount of area. To correct this, click Fix on the menu bar, and then select Crop Photo in the sidebar that opens. Select the center of the picture with the crop tool, and click Apply when you've got the selection you want. Live Photo Gallery automatically saves your picture changes, and you can revert back to the original picture if you wish. Now you've got a nice panoramic shot, trimmed and ready to print, share, and more.
    Conclusion
    Panoramic shots are great ways to capture your whole surroundings, whether it's a sports stadium, mall, or a scenic mountain view. They can also be a great way to capture more with low-resolution cameras.
    Link: Download Windows Live Photo Gallery


  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as-you-go-along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA, is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
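
    To make the column-split example concrete: the static before/after schema states only show FullName disappearing and two new columns appearing, so the data-copy algorithm below is exactly the developer intent a migration script has to capture. A hypothetical T-SQL sketch (table and column names are invented for illustration):

        -- Migration script: split dbo.Customer.FullName into FirstName/LastName
        ALTER TABLE dbo.Customer ADD FirstName nvarchar(50) NULL, LastName nvarchar(50) NULL;
        GO
        -- Copy the data before dropping the old column (naive first-space split)
        UPDATE dbo.Customer
        SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
            LastName  = LTRIM(SUBSTRING(FullName, CHARINDEX(' ', FullName + ' '), 4000));
        GO
        ALTER TABLE dbo.Customer DROP COLUMN FullName;

    Committed alongside the schema change, a script like this lets the comparison engine splice the developer's intent into the auto-generated upgrade, which is what keeps the continuous integration deployment test meaningful.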


  • Welcome 2011

    - by WeigeltRo
    Things that happened in 2010
    MIX10 was absolutely fantastic. Read my report of MIX10 to see why.
    The dotnet Cologne 2010, the community conference organized by the .NET user group Köln and my own group Bonn-to-Code.Net, became an even bigger success than I dared to dream of.
    There was a huge discrepancy between the efforts by Microsoft to support .NET user groups in organizing public live streaming events of the PDC keynote (the dotnet Cologne team joined forces with netug Niederrhein to organize the PDCologne) and the actual content of the keynote. The reaction of the audience at our event was “meh”, and even worse, I seriously doubt we’ll ever get that number of people to such an event (which on top of that suffered from technical difficulties beyond our control).
    What definitely would have deserved the public live streaming event treatment was the Silverlight Firestarter (aka “Silverlight Damage Control”) event. And maybe we would have thought about organizing something if it weren’t for the “burned earth” left by the PDC keynote. Anyway, the stuff shown at the Firestarter keynote was the topic of conversations among colleagues days later (“did you see that? oh yeah, that was seriously cool”).
    Things that I have learned/observed/noticed in 2010
    In the long run, there’s a huge difference between “it works pretty well” and “it just works and I never have to think about it”. I had to get rid of my USB graphics adapter powering the third monitor (read about it in this blog post). Various small issues (desktop icons sometimes moving their positions after a reboot for no apparent reason, at least one game I couldn’t get to run at all, all three monitors sometimes simply refusing to wake up after standby) finally made me buy a PCIe 1x graphics adapter. If you’re interested: the combination of an NVIDIA GTX 460 and a GT 220 has been running in “don’t make me think” mode for a couple of months now.
    PowerPoint 2010 is a seriously cool piece of software. Not only the new hardware-accelerated effects, but also features like built-in background removal and picture processing (which in many cases are simply “good enough” and save a lot of time) or the smart guides.
    Outlook 2010 crashes on me a lot. I haven’t been successful in reproducing these crashes; they just happen every couple of days on different occasions (only thing in common: I clicked something in the main window - yeah, very helpful observation).
    Visual Studio 2010 reminds me of Visual Studio 2005 before SP1, which is actually not a good thing to say about a piece of software. I think it’s telling that Microsoft’s message regarding the beta of SP1 has been different from earlier service pack betas (promising an upgrade path from the beta to the RTM sounds to me like “please, please use it NOW!”).
    I have a love/hate relationship with ReSharper. I don’t want to develop without it, but at the same time I can’t fail to notice that ReSharper is taking a heavy toll in terms of performance and sometimes stability.
    Things I’m looking forward to in 2011
    Obviously, the dotnet Cologne 2011. We have already been able to score some big-name sponsors (Microsoft, Intel), but we’re still looking for more sponsors. And be assured that we’ll make sure our partners get the most out of their contribution, regardless of how big or small.
    MIX11, period.
    Silverlight 5 is going to be great. The only thing I’m a bit nervous about is that I still haven’t read anything official on whether the next C# version’s async/await will be in it. Leaving that out would be really stupid considering the end-of-2011 release of SL5 (moving the next release way into the future).


  • SQLUG Events - London/Edinburgh/Cardiff/Reading - Masterclass, NoSQL, TSQL Gotcha's, Replication, BI

    - by tonyrogerson
    We have acquired two additional tickets to attend the SQL Server Master Class with Paul Randal and Kimberly Tripp next Thurs (17th June). For a chance to win these coveted tickets, email us ([email protected]) before 9pm this Sunday with the subject "MasterClass" - people who have previously entered need not worry, you're still in with a chance. The winners will be announced Monday morning.
    As ever, plenty going on physically. We've got dates for a stack of events in Manchester and Leeds, and I'm looking at Birmingham if anybody has ideas. We are growing our online community with the Cuppa Corner section; to participate online, remember to use the #sqlfaq twitter tag. For those wanting to get more involved in presenting and fancy trying it out, we are always after people to do 1-5 minute SQL nuggets or Cuppa Corners (short presentations) at any of these user group events - just email us [email protected]. Want removing from this email list? Then just reply with "remove please" on the subject line.
    Kimberly Tripp and Paul Randal Master Class - Thurs, 17th June - London
    REGISTER NOW AND GET A SECOND REGISTRATION FREE*
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years' combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    >> Debunk many of the ingrained misconceptions around SQL Server's behaviour
    >> Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    >> Explain how a common application design pattern can wreak havoc in the database
    >> Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010
    *REGISTER TODAY AT www.regonline.co.uk/kimtrippsql - on the registration form simply quote discount code BOGOF for both yourself and your colleague and you will save 50% off each registration - that's a 249 GBP saving! This offer is limited; book early to avoid disappointment.
    Wed, 23 Jun - READING - Evening Meeting. More info and register. Introduction to NoSQL (Not Only SQL) - Gavin Payne; T-SQL Gotcha's and how to avoid them - Ashwani Roy; Introduction to Recency Frequency - Tony Rogerson; Reporting Services - Tim Leung
    Thu, 24 Jun - CARDIFF - Evening Meeting. More info and register. Alex Whittles of Purple Frog Systems talks about data warehouse design case studies; other BI-related session TBC
    Mon, 28 Jun - EDINBURGH - Evening Meeting. More info and register. Replication (Components, Administration, Performance and Troubleshooting) - Neil Hambly; Server Upgrades (Notes and Best Practice from the Field) - Satya Jayanty
    Wed, 14 Jul - LONDON - Evening Meeting. More info and register. Meeting sponsored by DBSophic (http://www.dbsophic.com/download), database optimisation software. Physical Join Operators in SQL Server - Ami Levin; Workload Tuning - Ami Levin; SQL Server and Disk IO (File Groups/Files, SSDs, Fusion-IO, In-RAM DBs, Fragmentation) - Tony Rogerson; Complex Event Processing - Allan Mitchell
    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group
    http://sqlserverfaq.com


  • Beginner Geek: Scan Files for Viruses Before Using Them

    - by Mysticgeek
    To help avoid getting your computer infected by malicious software, it's a good idea to scan files before executing them. Today we take a look at a couple of options that will let you scan files easily from your desktop.
    Scan File with Your Antivirus Software
    Most antivirus software will put an option in the context menu so you can scan individual files. After downloading a file or email attachment, simply right-click the file and select the option to scan with your antivirus software. If you want to scan more than one file at a time, hold down the Ctrl key while you click each file you want to scan. Then right-click and select to scan with your antivirus software. Here is our favorite antivirus app, Microsoft Security Essentials, scanning a couple of files. If a virus is found, your antivirus app will delete it or put it in quarantine so it cannot infect your system.
    Using VirusTotal Uploader
    To be very thorough and get a second opinion (actually 41), you might want to check out the VirusTotal Uploader. This handy app will scan your files with 41 different antivirus engines online. After installing VirusTotal Uploader, right-click the file, go to Send To, then VirusTotal. Alternately, you can launch VirusTotal Uploader and get and upload the file from within the app. It will send the file to VirusTotal.com, scan it with 41 different antivirus engines, and show you the results. If you don't want to install the Uploader, you can go to the VirusTotal site and upload a file from there to scan. We've noticed that occasionally there will be a false positive detected on files we know are clean. Sometimes the definition database of an anti-malware app isn't current, or an obscure antivirus app will find something questionable. If that is the case, use your best judgment when viewing the results.
    Conclusion
    Most antivirus apps today have real-time scanning and should be able to detect possible infections before you're able to execute them. However, if they don't, or when in doubt, following these tips can save you a lot of headaches in the long run. If you use a lot of different flash drives throughout the day, check out our article on how to scan a thumb drive for viruses from the AutoPlay dialog.
    Links: Download Microsoft Security Essentials | Download VirusTotal Uploader | VirusTotal Website


  • Transform your Oracle Tutor Documents to Your Corporate Standard

    - by mary.keane
    You have all of your company's processes documented in Oracle Tutor, and now you want the HTML files to reflect your company's corporate look and feel. How are you going to do this without having an HTML guru change every HTML page? The good news is you do not need to be an HTML expert to make minor changes to your documents. All Tutor HTML files are attached to a group of style sheets, so any changes you make to the style sheets will immediately be reflected in all of your HTML documents.
    If you want to give it a try, here's what you do (please note that these tips are applicable to release Oracle Tutor 12.2 and greater):
    Navigate to your Tutor HTML directory, and copy into a draft folder a representative group of HTML files (don't forget the flowchart image files that are associated with the procedures). You'll also need to copy the following files:
    tutor.css
    tutor_notabs.css
    tutor_scripts.js
    tutor_tabs.css
    flow_icon.gif
    Here's the default look of the Oracle Tutor desk manual. Let's say I want to use my company's corporate style in the HTML documents. At Oracle, we use Oracle Red (FF0000), Oracle Black (000000), and Oracle Gray (666666). So I want to incorporate those colors into the Tutor HTML files.
    I open tutor.css from the draft folder in a text editor. My preference is to use Notepad, but there are others. Make sure, however, that it is a text editor, and not a word processing program. I want to change the headings to Oracle Red. The desk manual title is listed as the DMPAGETITLE, so I find that in tutor.css. The style names in the style sheets are descriptive, but sometimes you may have to experiment to find the right style (this is why you're working in a draft folder). I change the color attribute to FF0000, and then I save the document. Now I look at one of the desk manuals in my draft folder. I've successfully changed the title of the desk manual, so, now that I have more confidence that I can do this, I start changing other styles. I need to make changes in the tutor_tabs.css file as well, so I open that document. Then I look at one of the procedures. Oops! All that red is distracting, and the users may not be able to follow their procedures. So I go back to the corporate style guide, and I find some shades of gray that have been approved. So I use those, and the procedure is now more readable. It's good enough for a first draft, and I would show it to my colleagues at this point to get their input.
    On my next blog, I'll discuss how to change the flowchart colors to match your corporate look and feel. Have you used the cascading style sheets to change the look of your Tutor documents? If so, let us know what you've done in your post.
    Mary R. Keane
    Senior Development Manager, Oracle Tutor & UPK Content
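
    To make the edit concrete, here is a hypothetical fragment of tutor.css after the change. The DMPAGETITLE style name comes from the article; whether it is a class selector, and what the surrounding properties look like, is assumed for illustration:

        /* Desk manual page title - color changed to Oracle Red */
        .DMPAGETITLE {
            font-family: Arial, Helvetica, sans-serif;  /* assumed existing property */
            font-weight: bold;                          /* assumed existing property */
            color: #FF0000;                             /* was the shipped default color */
        }

    Because every Tutor HTML page links to this sheet, saving the file restyles all of the copied documents in the draft folder at once - no per-page editing required.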


  • Silverlight Cream for November 13, 2011 -- #1166

    - by Dave Campbell
    In this Issue: Pontus Wittenmark, Jeff Blankenburg(-2-), Colin Eberhardt, Charles Petzold, Dhananjay Kumar, Igor, Beth Massi, Kunal Chowdhury(-2-), Shawn Wildermuth, XAMLNinja, and Peter Kuhn(-2-).
    Above the Fold:
    Silverlight: "Silverlight Page Navigation Framework - Learn about UriMapper" - Kunal Chowdhury
    WP7: "31 Days of Mango" - Jeff Blankenburg
    WinRT/Metro/W8: "An Introduction to Semantic Zoom in Windows 8 Metro" - Colin Eberhardt
    LightSwitch: "Common Validation Rules in LightSwitch Business Applications" - Beth Massi
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up. Michael Washington's latest Visual Studio #LightSwitch Daily is up.
    From SilverlightCream.com:
    10 tips about porting Silverlight apps to WinRT/Metro style apps (Part 1) - Pontus Wittenmark spent some time porting his Silverlight game to WinRT and says it was easier than expected. He has posted 10 tips for porting... and promises more.
    31 Days of Mango - Looks like Jeff Blankenburg started another 31 Days series... this one on Mango dev... and looks like I'm late to the party, but that's ok, it gives me more stuff to blog about... this time you can get the posts by email, and he has a hashtag for discussion too.
    31 Days of Mango | Day #1: The New Windows Phone Emulator Tools - Day 1 of Jeff Blankenburg's journey is this post on what's new in the emulator tools.
    An Introduction to Semantic Zoom in Windows 8 Metro - This is Colin Eberhardt's latest... getting familiar with semantic zoom in Metro by creating a WP7-style jumplist experience... check out the video on his blog post for a better idea of what he's up to.
    .NET Streams and Windows 8 IStreams - In his first real post in his new series on writing an EPUB viewer for W8, Charles Petzold describes using IInputStream to get the contents of a disk file... with source for the project in progress.
    Video on How to work with Page Navigation and Back Button in Windows Phone 7 - Dhananjay Kumar has a video tutorial up on page navigation and Back button usage in WP7.
    Screen capture to media library instead of isolated storage - Igor discusses a class that lets you save screen captures for use in your application, saving them to the media library on the phone.
    Common Validation Rules in LightSwitch Business Applications - Beth Massi's latest is this LightSwitch post on validation rules... showing how to define declarative rules and also write custom validation code.
    Silverlight Page Navigation Framework - Learn about UriMapper - Kunal Chowdhury continues his page navigation discussion with this post on the UriMapper, and how to hide the actual URL of the page you're navigating to.
    How to use PlaySoundAction Behavior in WP7 Application? - Kunal Chowdhury also has this post up on using the PlaySoundAction behavior in WP7... a nice tutorial on using Blend to get the job done.
    What Win8 Should Learn from Windows Phone - After spending time with Windows 8, Shawn Wildermuth has this post up about features from WP7 that should be brought over to Windows 8, and finishes with features that WP8 (?) could learn from Win8 too.
    WP7Contrib – FindaPad and the fastest list in the west - XAMLNinja discusses the WP7 app FindaPad, which spawned the creation of WP7Contrib, and uses the app to describe some nuances that may not be readily obvious.
    Windows Phone 7: The kind of bug you don't want to discover - Peter Kuhn discusses a problem he came across while programming WP7, interestingly enough only in the emulator, having to do with a UInt64 cast. He does offer a workaround.
    Announcing: Your Last About Dialog (YLAD) - Peter Kuhn also has this post up that's a take-off on a post by Jeff Wilcox about a generic About Dialog. Peter has some great additions... and he's right... it may be your last About Dialog... get it via NuGet, too!

    Read the article

  • Why some links appear in a new tab, others in a new window?

    - by SAMIR BHOGAYTA
    Originally, this script was made to resolve problems with IE8 32-bit on a 32-bit OS. I changed the "%ProgramFiles(x86)%" variable, and now my issue is resolved for IE8 32-bit on my 64-bit Windows 7. Please try it, and tell me if everything works for you.
    For 64-bit Windows 7:
    1 - Create a new Notepad document and paste this text:

    @echo off
    echo.
    echo IEREREG Version 1.07 for IE8 27.03.2009
    echo by Kai Schaetzl http://iefaq.info
    echo installs and registers (if suitable) all DLLs known to be used by IE8.
    echo should only take a few seconds, but please be patient
    echo.
    REM ******************************
    echo registering IE files
    REM IE files (= part of setup)
    regsvr32 /s /i browseui.dll
    REM regsvr32 /s /i browseui.dll,NI (unnecessary)
    regsvr32 /s corpol.dll
    regsvr32 /s dxtmsft.dll
    regsvr32 /s dxtrans.dll
    REM simple HTML Mail API
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\hmmapi.dll"
    REM group policy snap-in
    regsvr32 /s ieaksie.dll
    REM smart screen
    regsvr32 /s ieapfltr.dll
    REM ieak branding
    regsvr32 /s iedkcs32.dll
    REM dev tools
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\iedvtool.dll"
    regsvr32 /s iepeers.dll
    REM Symptom: IE8 closes immediately on launch, missing from IE7
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\ieproxy.dll"
    REM no install point anymore
    REM regsvr32 /s /i iesetup.dll
    REM no reg point anymore
    REM regsvr32 /s imgutil.dll
    regsvr32 /s /i /n inetcpl.cpl
    REM no install point anymore
    REM regsvr32 /s /i inseng.dll
    regsvr32 /s jscript.dll
    REM license manager
    regsvr32 /s licmgr10.dll
    REM regsvr32 /s msapsspc.dll
    REM regsvr32 /s mshta.exe
    REM VS debugger
    regsvr32 /s msdbg2.dll
    REM no install point anymore
    REM regsvr32 /s /i mshtml.dll
    regsvr32 /s mshtmled.dll
    regsvr32 /s msident.dll
    REM no reg point anymore
    REM regsvr32 /s msrating.dll
    REM multimedia timer
    regsvr32 /s mstime.dll
    REM no install point anymore
    REM regsvr32 /s /i occache.dll
    REM process debug manager
    regsvr32 /s "%ProgramFiles(x86)%\internet explorer\pdm.dll"
    REM no reg point anymore
    REM regsvr32 /s pngfilt.dll
    REM regsvr32 /s /i setupwbv.dll (not there anymore!)
    regsvr32 /s tdc.ocx
    regsvr32 /s /i urlmon.dll
    REM regsvr32 /s /i urlmon.dll,NI,HKLM
    regsvr32 /s vbscript.dll
    REM VML renderer
    regsvr32 /s "%CommonProgramFiles%\microsoft shared\vgx\vgx.dll"
    REM no install point anymore
    REM regsvr32 /s /i webcheck.dll
    regsvr32 /s /i /n wininet.dll
    REM ******************************
    echo registering system files
    REM additional system dlls known to be used by IE
    REM added 11.05.2006 Symptom: Add-Ons-Manager menu entry is present but nothing happens
    regsvr32 /s extmgr.dll
    REM added 12.05.2006 Symptom: Javascript links don't work (Robin Walker) .NET hub file
    regsvr32 /s mscoree.dll
    REM added 23.03.2009 Symptom: Find on this page is blank
    regsvr32 /s oleacc.dll
    REM added 24.03.2009 Symptom: Printing problems, open in new window
    regsvr32 /s ole32.dll
    REM mscorier.dll
    REM mscories.dll
    REM Symptom: open in new tab/window not working
    regsvr32 /s actxprxy.dll
    regsvr32 /s asctrls.ocx
    regsvr32 /s cdfview.dll
    regsvr32 /s comcat.dll
    regsvr32 /s /i /n comctl32.dll
    regsvr32 /s cryptdlg.dll
    regsvr32 /s /i /n digest.dll
    regsvr32 /s dispex.dll
    regsvr32 /s hlink.dll
    regsvr32 /s mlang.dll
    regsvr32 /s mobsync.dll
    regsvr32 /s /i msieftp.dll
    REM regsvr32 /s msnsspc.dll #no entry point
    regsvr32 /s msr2c.dll
    regsvr32 /s msxml.dll
    regsvr32 /s oleaut32.dll
    REM regsvr32 /s plugin.ocx #no entry point
    regsvr32 /s proctexe.ocx
    REM plus DllRegisterServerEx ExA ExW ... ?
    regsvr32 /s /i scrobj.dll
    REM shdocvw.dll hasn't been updated for IE7 and IE8, it still registers itself for the Windows Internet Controls
    regsvr32 /s /i shdocvw.dll
    regsvr32 /s sendmail.dll
    REM ******************************
    REM PKI/crypto functionality
    REM initpki can take very long to run and is rarely a problem
    REM if there are problems with crypto, SSL, certificates
    REM remove the three following REMs from the lines
    REM echo We are almost done except one crypto file
    REM echo but this will take very long, be patient!
    REM regsvr32 /s /i:A initpki.dll
    REM ******************************
    REM tabbed browser, do at the end, why originally with /n ?
    regsvr32 /s /i ieframe.dll
    REM ******************************
    echo correcting bugs in the registry
    REM do some corrective work
    REM Symptom: new tabs page cannot display content because it cannot access the controls (added 27.3.2009)
    REM This is a result of a bug in shdocvw.dll (see above), probably only on Windows XP
    reg add "HKCR\TypeLib\{EAB22AC0-30C1-11CF-A7EB-0000C05BAE0B}\1.1\0\win32" /ve /t REG_SZ /d %systemroot%\system32\ieframe.dll /f
    REM ******************************
    echo all tasks have been finished
    echo.
    pause

    2 - Close all your IE windows and processes.
    3 - Save your document, on your Desktop for example, with the .bat extension. Right-click on it, and select "Run as administrator".
    4 - Test whether this tip resolved your issue by opening IE.
    If you use a 32-bit Windows version, please replace %ProgramFiles(x86)% with %ProgramFiles% in your .bat file.

    Read the article

  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman
    Oracle’s SPARC T4 systems are architected to deliver enhanced value for customers via the inclusion of many integrated features. One of the best examples of this approach is the on-chip cryptographic support, which delivers wire-speed encryption capabilities without any impact to application performance.
    The Evolution of SPARC Encryption
    SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems, which featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor. Successive generations have built on this approach by adding support for additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption framework. While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions to eliminate some of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads.
    The Superiority of the SPARC T4 Approach to Crypto
    As companies engage in more and more e-commerce, the need to provide greater degrees of security for these transactions is more critical than ever before. Traditional methods of securing data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach.
    1. Performance degradation – cryptography is highly compute-intensive, so there is a significant cost when using other architectures without embedded crypto functionality. This performance penalty impacts the entire system, slowing down the performance of web servers (SSL), for example, and potentially bogging down the speed of other business applications. The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers without impacting the overall SLAs of their IT environment.
    2. Added cost – one way to avoid performance degradation is to add cryptographic accelerator cards or external offload engines to other systems. While these solutions provide a brute-force mechanism to avoid slower system performance, it usually comes at an added cost. Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the necessary performance without purchasing other hardware or add-on cards.
    3. Higher complexity – adding cryptographic cards, or leveraging load balancers to perform encryption tasks, results in added complexity from a management standpoint. With SPARC T4, the encryption keys and framework built into Solaris 10 and 11 mean that administrators generally don’t need to spend extra cycles determining how to perform cryptographic functions. In fact, many of the instructions are built in and require no user intervention. For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "t4 engine." For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson’s blog.
    Conclusion
    In summary, SPARC T4 systems offer customers much more value for applications than just increased performance. The integration of key virtualization technologies, embedded encryption, and a true Enterprise Operating System, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing. SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that can save IT organizations operating costs while delivering higher levels of performance and meeting compliance objectives.
    For more on the SPARC T4 family of products, go here.
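    As a quick sanity check on a live system, a minimal sketch (assuming Solaris 11 on a SPARC T4; the exact output varies by platform and patch level):

    # list the instruction-set extensions the processor exposes; on a T4 the
    # output should include crypto instructions such as aes, des and sha256
    isainfo -v

    # list the OpenSSL engines available; on Solaris 11 the built-in t4
    # engine mentioned above should appear in this list
    openssl engine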

    Read the article

  • Remove Ubuntu or XP from the Windows 7 Boot Menu

    - by Trevor Bekolay
    If you’ve ever used a dual-boot system and then removed one of the operating systems, it can still show up in Windows 7’s boot menu. We’ll show you how to get rid of old entries and speed up the boot process.
    To edit the boot menu, we will use a program called bcdedit that’s included with Windows 7. There are some third-party graphical applications that will edit the menu, but we prefer to use built-in applications when we can.
    First, we need to open a command prompt with Administrator privileges. Open the start menu and type cmd into the search box. Right-click on the cmd program that shows up, and select Run as administrator. Alternatively, if you’ve disabled the search box, you can find the command prompt in All Programs > Accessories.
    In the command prompt, type in bcdedit and press Enter. A list of the boot menu entries will appear. Find the entry that you would like to delete – in our case, this is the last one, with the description of “Ubuntu”. What we need is the long sequence of characters marked as the identifier. Rather than type it out, we will copy it to be pasted later. Right-click somewhere in the command prompt window and select Mark. By clicking the left mouse button and dragging over the appropriate text, select the identifier for the entry you want to delete, including the left and right curly braces on either end. Press the Enter key. This will copy the text to the clipboard.
    In the command prompt, type in: bcdedit /delete and then right-click somewhere in the command prompt window and select Paste. Press Enter to run the now completed command. The boot menu entry will be deleted. Type in bcdedit again to confirm that the offending entry is gone from the list.
    If you reboot your machine now, you will notice that the boot menu does not even come up, because there is only one entry in the list (unless you had more than two entries to begin with). You’ve shaved a few seconds off of the boot process, not to mention the added effort of pressing the Enter key!
    There’s a lot more that you can do with bcdedit, like change the description of boot menu entries, create new entries, and much more. For a list of what you can do with bcdedit, type the following into the command window: bcdedit /help
    While there are third-party GUI solutions for accomplishing the same thing, using this method will save you time by not having to go through the extra steps of installing an extra program.
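    To recap the sequence above in one place, a minimal sketch of the session (the identifier below is hypothetical; substitute the one shown in your own bcdedit output, braces included):

    :: list the boot entries and note the identifier of the stale one
    bcdedit

    :: delete the stale entry (hypothetical GUID for illustration only)
    bcdedit /delete {466f5a88-0af2-4f76-9038-095b170dc21c}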

    Read the article

  • i don't receive mail notification...Nagios Core 4

    - by alessio
    I have a problem with automatic mail notifications in Nagios Core 4 installed on an Ubuntu 12.04 LTS server... I have tried to send mail as the nagios user and as the root user with the command: echo "test" | mail -s "test mail" [email protected] and I received the mail correctly... but I don't receive any automatic mail notifications... I don't know what to do to resolve this issue! :( These are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log):
    commands.cfg (the path /usr/bin/mail is the right path):

    # 'notify-host-by-email' command definition
    define command{
        command_name    notify-host-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
        }

    # 'notify-service-by-email' command definition
    define command{
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
        }

    # 'process-host-perfdata' command definition
    define command{
        command_name    process-host-perfdata
        command_line    /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out
        }

    # 'process-service-perfdata' command definition
    define command{
        command_name    process-service-perfdata
        command_line    /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out
        }

    contacts.cfg:

    define contact{
        contact_name                    supporto
        alias                           Supporto Clienti DEA
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        email                           [email protected]
        }

    define contactgroup{
        contactgroup_name       admins
        alias                   Nagios Administrators
        members                 supporto
        }

    nagios.log:

    [1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
    [1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago
    [1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago
    [1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
    [1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
    [1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
    [1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
    [1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB)
    [1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK
    [1401872931] Auto-save of retention data completed successfully.
    [1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago
    [1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago

    mail.log (I think the problem is here but I don't know how to resolve it):

    Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name

    I'm at the last step and I want to finish this Nagios Core! :) Any help is appreciated! :)
    Host definition (this host has its disk almost full and is in a hard state, but no notification):

    define host{
        use             generic-host    ; Name of host template to use
        host_name       posta
        alias           Server Posta ESA
        address         10.10.2.102
        parents         xen1, xen2
        icon_image      redhat.png
        statusmap_image redhat.gd2
        }

    Service definition:

    define service{
        use                     generic-service
        host_name               xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi
        service_description     Disk Space
        check_command           ssh_all_disks!10%!5%
        }

    "Notification is allowed for the contact definition you gave, but is it also allowed at the service level?" Sorry, but I don't understand this thing! :(
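    A hedged note on the two likely culprits above (assumptions drawn from the posted configs and logs, not a confirmed diagnosis). The sm-msp-queue lines suggest sendmail cannot qualify the host name "backups", so notifications submitted by the Nagios daemon may sit in the local submission queue even though interactive tests go out. A common fix is to give the host a fully qualified name in /etc/hosts (the domain below is hypothetical):

    127.0.0.1   localhost
    127.0.1.1   backups.example.local   backups

    It is also worth confirming notifications are switched on at the service level, since the service definitions above name no contacts; a sketch using the standard contact_groups and notifications_enabled directives:

    define service{
        use                     generic-service
        host_name               posta
        service_description     Disk Space
        check_command           ssh_all_disks!10%!5%
        contact_groups          admins    ; route alerts to the admins contact group
        notifications_enabled   1         ; in case the generic-service template disables them
        }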

    Read the article

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice and Peter each have a 4 GB USB flash drive. They meet once and save onto both drives two files named alice_to_peter.key (2 GB) and peter_to_alice.key (2 GB), which are randomly generated bits. They never meet again and communicate electronically. Alice maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do:

    encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    where n is the n-th byte of the message. Then alice_pointer is attached at the beginning of the encrypted message, (alice_pointer + encrypted message) is sent to Peter, and alice_pointer is incremented by the length of the message (and for maximum security the used part of the key can be erased). Peter receives the encrypted message, reads the alice_pointer stored at its beginning, and does this:

    message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n]

    And for maximum security, after reading the message he also erases the used part of the key. - EDIT: In fact this step with this simple algorithm (without integrity check and authentication) decreases security, see Paulo Ebermann's post below.
    When Peter needs to send a message to Alice, they do the analogous steps with peter_to_alice.key and peter_pointer.
    With this trivial scheme they can send, each day for the next 50 years, 2 GB / (50 * 365) = circa 115 kB of encrypted data in both directions. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2 TB hard discs (1 TB of keys) it is possible to exchange 60 MB/day for the next 50 years! (That is practically a lot of data; for example, with compression it is more than an hour of high-quality voice communication per day.)
    It seems to me there is no way for an attacker to read the encrypted message without the keys, even with an infinitely fast computer: even brute-forcing with an infinitely fast computer yields every possible message that fits the length of the message, but that is an astronomical number of messages and the attacker doesn't know which of them is the actual message. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application that uses large storage on both communicating sides for keys? I humbly expect that this application has been invented by someone before me :-) )
    Note: If it is absolutely secure, then it is amazing, because with today's low-cost large storage it is a practically much cheaper way of secure communication than expensive quantum cryptography, with equivalent security!
    EDIT: I think it will become more and more practical in the future with ever lower storage costs. It can solve secure communication forever. Today you have no certainty that someone won't successfully attack an existing cipher a year later and make its often expensive implementations insecure. In many cases, before communication starts there is a step where the communicating sides meet personally; that is the time to generate large keys. I think it is perfect for military communication, for example with submarines, which can have a hard drive with large keys installed, while the military center keeps one hard drive for each submarine it owns. It could also be practical in everyday life, for example for controlling your bank account, because when you create your account you meet with the bank, etc.
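    For what it's worth, a minimal sketch of the scheme in Python (the file name is hypothetical; like the scheme above it has no integrity check or authentication, which the EDIT notes a real deployment would need, and it omits erasing the used key bytes):

    import os

    KEY_FILE = "alice_to_peter.key"  # hypothetical path to the shared random key material

    def xor_with_key(data, pointer):
        """XOR `data` against the key bytes starting at offset `pointer`."""
        with open(KEY_FILE, "rb") as f:
            f.seek(pointer)
            key = f.read(len(data))
        if len(key) < len(data):
            raise ValueError("key material exhausted - never reuse key bytes")
        return bytes(d ^ k for d, k in zip(data, key))

    def encrypt(message, pointer):
        # the pointer travels in the clear at the head of the message, exactly
        # as described above; the sender then advances its own pointer by
        # len(message) so no key byte is ever used twice
        return pointer, xor_with_key(message, pointer)

    def decrypt(pointer, ciphertext):
        # XOR is its own inverse, so decryption is the same operation
        return xor_with_key(ciphertext, pointer)

    # demo: a toy 1 kB key standing in for Alice's 2 GB of random bits
    with open(KEY_FILE, "wb") as f:
        f.write(os.urandom(1024))

    ptr, ct = encrypt(b"attack at dawn", 0)
    assert decrypt(ptr, ct) == b"attack at dawn"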

    Read the article

  • Silverlight Cream for December 12, 2010 - 2 -- #1009

    - by Dave Campbell
    In this Issue: Michael Crump, Jesse Liberty, Shawn Wildermuth, Domagoj Pavlešic, Peter Kuhn, James Ashley, Sara Summers, Morten Nielsen, Peter Torr, and Tau Sick.
    Above the Fold:
    Silverlight: "Silverlight 4 – Coded UI Framework Video Tutorial" Michael Crump
    WP7: "Windows Phone From Scratch #12–Custom Behaviors (Part I)" Jesse Liberty
    From SilverlightCream.com:
    Silverlight 4 – Coded UI Framework Video Tutorial Michael Crump posted a video tutorial today on the Coded UI Test Framework that we got with the VS2010 Feature Pack 2. Wanna create automated tests? ... check out Michael's video and save yourself some time.
    Windows Phone From Scratch #12–Custom Behaviors (Part I) Jesse Liberty posted his Windows Phone from Scratch number 12 today... and it's on Custom Behaviors... cool stuff... need to read this and get your head around it... this is part 1, jump on it before he drops part 2 on us!
    The Next Application Platform? All of them... Shawn Wildermuth has a thought-provoking post up ... check it out and see if you're ready to join him on the adventure of building for all the platforms...
    Windows Phone 7 Accelerometer Test App Domagoj Pavlešic has a test app up for the accelerometer on the WP7 ... if you need to use it, and are having problems, a good example always helps me.
    Protocol of developing an animation texture tool Peter Kuhn found a need for a tool to create some animations for a WP7 XNA game... so he challenged himself to write it, and detailed all his steps as he went.
    Re-examining WP7 Launchers and Choosers James Ashley's most recent post is on the Pivot Control ... check this out... add a working horizontally oriented slider to a pivot... plus some external links to help out.
    New Prototyping Sketch Sheets for WP7 This is one of those posts where I had to go to SilverlightCream and make sure I hadn't hit it yet... pretty cool prototype sheets for WP7 by Sara Summers ... we've seen others, they're all good.
    Simulating GPS on Windows Phone 7 Morten Nielsen helps you get around the fact that you're not going to be able to use the emulator for testing your GPS app ... at least not without some assistance... and that doesn't mean hauling your dev system around your neighborhood, either.
    How to correctly handle application deactivation and reactivation We've seen posts on Tombstoning, but probably not from Silverlight team members... check this one out from Peter Torr ... great event sequence information and all the info on how to correctly handle it, plus external links to the documentation... you knew there was documentation, right? :)
    Localizing a Windows Phone 7 Application Tau Sick has a post up discussing Localization and your WP7 apps... coming from someone with an app in the marketplace in 3 languages, it's a pretty good bet he's got it figured out!
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • T-SQL Tuesday #005 : SSRS Parameters and MDX Data Sets

    - by blakmk
    Well, this week's T-SQL Tuesday #005 topic seems quite fitting. Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill-downs. Reporting Services can also be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful.
    One of the things I regularly do is add parameters to the queries. However, doing this causes Reporting Services to generate a hidden dataset and parameter name for you. One of the things I like to do is tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.
    Rule 1 – Never trust the automatically generated hidden datasets
    Or even ANY automatically generated MDX queries, for that matter... I've previously blogged about this here. If you examine the MDX generated in the hidden dataset, you will see that it generates the MDX in the context of the originating query by building a subcube. This means it may NOT be appropriate to use it in a subsequent query which has a different context. Make sure you always understand what is going on.
    Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. It can be that I have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for the last month and a dataset for the last 6 months. Both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets while working with and renaming parameters and datasets. Very often I have come across this error after refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I've found to reliably avoid this is to obey rule 2.
    Rule 2 – Follow this sequence when it comes to working with Parameters and DataSets:
    1. Create lookup and default datasets in advance
    2. Create parameters (set the datasets for available and default values)
    3. Go into the query and tick the parameter check box
    4. On the dataset properties screen, select the parameter defined earlier for the parameter value defined earlier
    Rule 3 – Don't tear your hair out when you have just renamed objects and your report doesn't build
    Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From there you can search for references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy, of course ;-)
    So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.
    @Blakmk
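    P.S. To make Rule 1 concrete, here is the flavour of edit involved (a sketch only: the cube and hierarchy names below are borrowed from the Adventure Works sample, and the query SSRS actually generates for you will look somewhat different):

    WITH
      MEMBER [Measures].[ParameterCaption] AS [Date].[Calendar Year].CURRENTMEMBER.MEMBER_CAPTION
      MEMBER [Measures].[ParameterValue] AS [Date].[Calendar Year].CURRENTMEMBER.UNIQUENAME
    SELECT
      { [Measures].[ParameterCaption], [Measures].[ParameterValue] } ON COLUMNS,
      // .CHILDREN in place of the generated .ALLMEMBERS drops the 'All' level
      [Date].[Calendar Year].CHILDREN ON ROWS
    FROM [Adventure Works]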

    Read the article

  • InfoPath 2010 Form Design and Web Part Deployment

    - by JKenderdine
    In January I had the pleasure of speaking at SharePoint Saturday Virginia Beach. I presented a session on InfoPath 2010 form design which included some of the basics of form design, a description of some of the new options in InfoPath 2010 and SharePoint 2010, and other integration possibilities. Included below is the information presented as well as the solution to create the demo.
    First thing you need to understand is the difference between an InfoPath List form and a Form Library form.
    SharePoint List Forms: Store data directly in a SharePoint list. Each control (e.g. text box) in the form is bound to a column in the list. SharePoint list forms are directly connected to the list, which means that you don't have to worry about setting up the publish and submit locations. You also do not have the option for back-end code.
    Form Library Forms: Store data in XML files in a SharePoint form library. This means they are more flexible and you can do more with them. For example, they can be configured to save drafts and submit to different locations. However, they are more complex to work with and require more decisions to be made during configuration. You do have the option of back-end code with these types of forms.
    Next steps: You need to create your File Architecture Plan.
    Plan the location for the saved template – both Test and Production (this is pretty much a given, but just in case: always make sure to have a test environment)
    Plan for the location of the published template
    Then you need to document your Form Template Design Plan. Some questions to ask to gather your requirements:
    What will the form be designed to do?
    Will it gather user information?
    Will it display data from a data source?
    Do we need to show different views to different users? What do we base this on?
    How will it be implemented for the users?
    Browser or client based form
    Site collection content type – published through Central Admin
    Form Library – published directly to form library
    So what are the requirements for this template? Business Card Request Form Template Design Plan:
    Gather user information and requirements for the card
    Pull in as much user information as possible, using data from the user profile web services as a data source
    Show and hide fields as necessary for the requirements
    Create multiple views – one for those submitting the form and another view for the executive assistants placing the orders
    Browser-based form integrated into a SharePoint team site
    Published directly to the form library
    The form was published through Central Administration and incorporated into the site as a content type. Utilizing the new InfoPath Web Part, the form is integrated into the page and users can complete the form directly from within that page.
    For now, if you are interested in the final form XSN, contact me using the Contact link above. I will post soon with the details on how the form was created and how it integrated the requirements detailed above.

    Read the article

  • JD Edwards Apps in a Box - Update

    - by Hartmut Wiese
    Summary and clarification
    JD Edwards Apps in a Box is a Partner offering to the customer. We at Oracle have a huge interest in getting a successful offering to the market, and we help the Partner build their offering. We provide components like JD Edwards EnterpriseOne and the Hardware. The Business Partner adds the installation services and positions this as a solution to the market for a single price. As you know, JD Edwards EnterpriseOne can run on multiple hardware platforms.
    Linux/x86 version
    As you all know, we have JD Edwards VM Templates available from Oracle for the x86 architecture. Each Partner should be, or already is, able to install JD Edwards EnterpriseOne using these images from our software delivery cloud. We have now built a master bill of material for an X3-2 Hardware configuration. It has been uploaded to the Community Workspace. This is a SUGGESTION and limited to 50 Users MAX. However, I strongly recommend you to do a sizing as usual and verify the configuration for each opportunity individually.
    T4-1/X3-2 version
    Oracle is not providing similar images for the T4-1 SPARC/Solaris architecture. There is an Optimized Solution Team inside Oracle who created an Optimized Solution for JD Edwards some time ago. They created a whitepaper which is still available for download. This whitepaper was used as a starting point; however, we decided to build a new version of it using the latest Software and Hardware available. This has now been finalized and we are happy to provide it to our partners. This image is more of a service we provide for each partner, which they can reuse and extend based on their individual offerings. It is not an officially supported Oracle Product and cannot be used to deploy to customers immediately. You cannot resell "JDE in a box". You can use these images to save time while building your own go-to-market offering. You might want to add functionality like Mobility. It is also not complete, as the Deployment Server needs to be configured individually at the customer site. We will create some documentation about what this image contains (and what it does not), and what final installation activities need to be provided by each VAD/Partner in this process. I will send an email to the community once we are ready to share it. You will then find these assets in the Community Workspace.
    The Business Model with Oracle Hardware
    For those who have not done any Hardware business with Oracle yet: usually a HW reseller orders the hardware through a Value Add Distributor (VAD) and not from Oracle directly. Each Partner needs to have Hardware resell rights to do so. The VAD assembles the boxes according to the needs of each customer. It is easily possible for them to prepare the boxes with the images we/you provide. However, the final configuration is something a reseller/implementer needs to do at the customer site. This process is not the same in the EMEA region. Sometimes a VAD takes the order but does not see the Hardware at all. In those cases a VAD cannot provide any help with the pre-loading of any images, and the reseller/implementer needs to do that. In some countries we do not have VADs at all.

    Read the article

< Previous Page | 495 496 497 498 499 500 501 502 503 504 505 506  | Next Page >