Search Results

Search found 873 results on 35 pages for 'modification'.

Page 23/35

  • XAMPP on Ubuntu serves PHP source for root URL only

    - by mazaryk
    Hey, Okay, so installed xamp on my ubuntu machine, started it up and everything worked. Apache ran my php app just fine (including requests to the root url "/"). However, after the first reboot since installing, when I request "http://localhost/" apache serves up the index php page as a phtml source file. All other urls (like "http://localhost/login") work as expected. Backgound: The only modification I made to xamp was to setup a vhost for my app. The app uses an .htaccess file where I define some rewrite rules (the app is an MVC framework and all urls are rewritten to a single entry point php file). I'm using Xamp because I need php = 5.3.0. I know apache will serve up the source of a php file when it doesn't know to process php files. But the config does indeed have "AddType application/x-httpd-php .php" and as I said, the app works for all urls except the root "/" (and only since I've rebooted). The .htaccess file does contain a DirectoryIndex directive. xamp 1.3.7a Ubuntu 9.10 Any ideas?

    Read the article

  • Same script, different behavior [migrated]

    - by Antoine_935
    I just stumbled upon an interesting bug... Still trying to figure out what is exactly happening. Maybe you can help. First, the context. I'm currently building yet another man to html converter (for some reasons I won't motivate here, but I need it). So, have a look at the screenshot below (see the link), more precisely at the outlined spots. See? On the upper shell, I have &lt ; and &gt ;, that is, escaped html. While on the shell below I have < and directly. But as you can see (or do I seriously need looking glass ?), the command man 2 semget | webmanneris the same on both sides, as is the which webmanner. The two are executed roughly at the same moment, with no modification made to the script between. [Oops, cannot post pictures just yet... Here comes the link] http://aspyct.org/media/webmanner-bug.png But the shell below is older (open about 1 hour ago). Newer shells all print out &lt ;. So my first guess was that it somehow had a cached reference to the old inode of the file, or old blocks or whatever. So I modified parts of the script, at the start and then at the end, to print different messages. And, surprise, the message shown up on both terminals. But still, same difference between &lt ; and <. I'm confused... How to explain that behavior? I'm working on a OSX 10.8 (Mountain Lion) EDIT: OK, there is one big difference: the shell below uses ruby 1.9.3, while above is 1.8.7. Is there any known difference in string handling between the two versions ?

    Read the article

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (just for background it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this be something specific to the Application being used at my workplace, so I compared with CentOS and Debian based computers at home last night. The CentOS box, which runs 24 hours, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, which was a just booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from, and STDOUT to it, and the modification time was unchanged (I'm not sure 100% sure if directing data to /dev/null constitutes "writing to it" in the way it would a normal file). So my question is essentially, could anybody please offer any advice with regards to what circumstances (permissions changes etc.. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.

    Read the article

  • Looking for a smart sync tool that exactly copies the structure of the source to that of the destination

    - by ????????
    I am a Windows 7 user looking for a sync tool that makes the structure of the destination exactly match that of the source. The scenario is as follows: I have a laptop on which I save my daily projects. I always bring this laptop wherever I go to work, either at my school or at my house. At school I have a room in which I keep an external hard drive to back up my laptop data. Upon arriving at school I always sync the external data with my laptop. Before leaving the school, I also do the synchronization if I made some modification to my laptop data. I do the same thing when I arrive at and leave my house. Every month I reformat my laptop because of some trial software. I am using SyncToy 2.1 to sync the external drives with my laptop. The problem is as follows. Before formatting my laptop, I sync all external drives with my laptop. For example, my laptop structure is c:\data\file1.ext and c:\data\folder1\file2.ext, and both external drives' structure is d:\data\file1.ext and d:\data\folder1\file2.ext. After reformatting my laptop, I move file1.ext to folder1, so my laptop structure becomes c:\data\folder1\file1.ext and c:\data\folder1\file2.ext. If I now sync the external drives with my laptop, I get d:\data\file1.ext, d:\data\folder1\file1.ext and d:\data\folder1\file2.ext. I want the tool to also remove d:\data\file1.ext. How can I do this?
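
    For what it's worth, Windows 7 ships with a tool that does exactly this kind of mirroring: robocopy's /MIR switch makes the destination an exact mirror of the source, deleting destination files and folders that no longer exist in the source (so use it with care). Drive letters below follow the question's example:

      robocopy C:\data D:\data /MIR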

    Read the article

  • Specific issue on the Data Pump API in Oracle

    - by Median Hilal
    I have a client/server architecture. Using an Oracle dbms on the database server side. I need to perform a user-triggered (from client side) backup of the database, where the best way to perform that is using a stored procedure on the server side which the client may call, as the client has no oracle tools to perform the backup. I've searched thorough inside available solutions and have found that using a stored procedure is the best way. Well, then I found that using oracle data pump API is the best way to use inside a PL/SQl stored procedure. My specific questions about the API are... I would like to ask about two issues ... ---- The first ----- the detach function to detach the handler, is it necessary to be used at the end of the procedure? and what if I don't use it? I read the Oracle documentation but I didn't get their point, they say it doesn't terminate the job but indicates that the user is not interested in it, an when I use detach at the end of my procedure the exported .dmp file disappears. ---- The second ----- to perform a user (client side) triggered back up as the modification are only to the data, I used TABLE parameter for the export operation. But the version parameter... what should it be? I also read the documentation but couldn't determine what I need (LATEST or COMPATIBLE) ? Thanks

    Read the article

  • Does cloud computing offer this? [closed]

    - by TheBlackBenzKid
    I have some newb questions I want answering please about cloud hosting - we are currently looking at Rackspace and getting a windows box. This is the situation: We have 15 computers in our office. We have 3 printers, some wifi and some network plugged. We have a standard router and the office share things via dropbox. The computers are not on Windows SBS or something similar. We want a cloud hosting solution that will offer User can login on any machine in the office and see the machine software User can login on any machine in the office and open Outlook and their emails and signature will be on exchange automatically A shared company folder on the network All printers automatically installed on the network Users can login remotely to access emails via the web At the moment we have a network company saying we need Xeon server in house with backup and psu and Windows SBS with license for each machine and also we need cabinets and cabling setup and also load balancers and modification of our DNS for emails. My question is this. Can cloud offer this? Can we have a server in the cloud that does this? Is it possible I mean the computers would be wireless connected to this cloud and you turn the machine on and its hosted?

    Read the article

  • Changing the name of a binary packaged application and its invoking command

    - by jerkstore
    I have taken the source code of a large project, App A, and made many modifications to it to produce my version, App B. Both App A and App B compile cleanly on Debian and Red Hat and now I would like to build binary packages for both platforms. The last modification I need to make is ensuring App B can be installed alongside App A without any interference. I should be able to evoke both application-a and application-b in the terminal and have both be listed as separate software in whatever desktop environment is present. The projects have a debian/ folder (containing rules, control, etc.) and an rpm/ folder containing a SPEC file. Currently, building and installing the .rpm and .deb packages works except that App B is recognized as App A and therefore does not meet the aforementioned requirements. ldd shows the programs have the same exact dependencies and I am not able to pursue static linking of libraries. What modifications do I need to make to my project to achieve the desired outcome? Please be specific as I do not have much experience with the packaging process.

    Read the article

  • Start Learning Ruby with IronRuby – Setting up the Environment

    - by kazimanzurrashid
    Recently I decided to learn Ruby, and for the last few days I have been playing with IronRuby. Learning a new thing has always been fun, and when it comes to an adorable language like Ruby it becomes even more entertaining. Like any other language, first we have to create the development environment. In order to run IronRuby we have to download the binaries from the IronRuby CodePlex project. IronRuby supports both .NET 2.0 and .NET 4, but .NET 4 is the recommended version; you can download either the installer or the zip file. If you download the zip file, make sure you add the bin directory to the environment path variable. Once you are done, open up the command prompt and type:

    ir -v

    It should print a message like:

    IronRuby 1.0.0.0 on .NET 4.0.30319.1

    ir is the 32-bit version of IronRuby; if you want to use 64-bit you can try ir64. Next, we have to find an editor where we can write our Ruby code, as there is currently no integration story for IronRuby with Visual Studio like its twin IronPython. Among the free IDEs only SharpDevelop has IronRuby support, but it does not have auto-complete or debugging built in; the only thing it supports is syntax highlighting, so using a text editor with the same feature is no different. To play with IronRuby we will be using Notepad++, which can be downloaded from its SourceForge download page. Notepad++ does have nice syntax highlighting support; I am using Vibrant Ink with some small modifications. The next thing we have to do is configure Notepad++ so that we can run a Ruby script in IronRuby from inside Notepad++. Let's create a batch (.bat) file named riir.bat in the IronRuby bin directory, with the following content:

    @echo off
    cls
    call ir %1
    pause

    This will make sure that the console is paused once we run the script. Now click Run -> Run in Notepad++; it will bring up the run dialog. Put the following command in the textbox:

    riir.bat "$(FULL_CURRENT_PATH)"

    Click Save, which will bring up another dialog. Type Iron Ruby, assign the shortcut Ctrl + F5 (same as Visual Studio's Start without Debugging) and click OK. Once you are done you will find Iron Ruby in the Run menu. Now press Ctrl + F5, and we will see the Ruby script running in IronRuby. There is one last thing I would like to add, which is a poor man's context-sensitive help. First, download the Ruby language help file from the RubyInstaller site and extract it into a directory. Next we have to install the Language Help plug-in of Notepad++: click Plugins -> Plugin Manager -> Show Plugin Manager, scroll down until you find the plug-in in the list, check it and click Install. Once it is installed it will prompt you to restart Notepad++; click Yes. When Notepad++ restarts, click Plugins -> Language Help -> Options -> Add, enter the details of the chm file and click OK (the chm file location can be different depending upon where you extracted it). Now when you put your caret on any Ruby keyword and press Ctrl + F1 it will take you to the help topic for that keyword. For example, when my caret is on the each in the following code and I press Ctrl + F1, it takes me to the each API doc of Array.

    def loop_demo
      (1..10).each { |n| puts n }
    end

    loop_demo

    That's it for today. Happy Ruby coding.

    Read the article

  • MongoDB usage best practices

    - by andresv
    The project I'm working on uses MongoDB for some things, so I'm creating some documents to help developers speed up the learning curve, avoid mistakes, and write clean and reliable code. This is my first version of it, so I'm pretty sure I will be adding more stuff to it, so stay tuned!

    C# Official driver notes
    The 10gen official MongoDB driver should always be referenced in projects by using NuGet. Do not manually download and reference assemblies in any project.
    C# driver quickstart guide: http://www.mongodb.org/display/DOCS/CSharp+Driver+Quickstart

    Reference links
    C# Language Center: http://www.mongodb.org/display/DOCS/CSharp+Language+Center
    MongoDB Server Documentation: http://www.mongodb.org/display/DOCS/Home
    MongoDB Server Downloads: http://www.mongodb.org/downloads
    MongoDB client drivers download: http://www.mongodb.org/display/DOCS/Drivers
    MongoDB Community content: http://www.mongodb.org/display/DOCS/CSharp+Community+Projects

    Tutorials
    Tutorial MongoDB con ASP.NET MVC - Ejemplo Práctico (Spanish): http://geeks.ms/blogs/gperez/archive/2011/12/02/tutorial-mongodb-con-asp-net-mvc-ejemplo-pr-225-ctico.aspx
    MongoDB and C#: http://www.codeproject.com/Articles/87757/MongoDB-and-C
    C# driver LINQ tutorial: http://www.mongodb.org/display/DOCS/CSharp+Driver+LINQ+Tutorial
    C# driver reference: http://www.mongodb.org/display/DOCS/CSharp+Driver+Tutorial

    Safe Mode Connection
    The C# driver supports two connection modes: safe and unsafe. Safe mode only applies to methods that modify data in a database (inserts, deletes and updates). While the current driver defaults to unsafe mode (safeMode == false), it's recommended to always enable safe mode and force unsafe mode only for specific things we know aren't critical. When safe mode is enabled, the driver internally calls the MongoDB "getLastError" function to ensure the last operation has completed before returning control to the caller. For more information on using safe mode and its implications for performance and data reliability, see: http://www.mongodb.org/display/DOCS/getLastError+Command
    If safe mode is not enabled, all data modification calls to the database are executed asynchronously (fire and forget), without waiting for the result of the operation. This mode can be useful for creating or updating non-critical data like performance counters, usage logging and so on. It's important to know that not using safe mode implies that data loss can occur without any notification to the caller. As with any wait operation, enabling safe mode also means dealing with timeouts. For more information about C# driver safe mode configuration, see: http://www.mongodb.org/display/DOCS/CSharp+getLastError+and+SafeMode
    The safe mode configuration can be specified at different levels:
    - Connection string: mongodb://hostname/?safe=true
    - Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method
    - Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method
    - Operation: for example, when executing the collection.Insert(document, safeMode) method
    A useful SafeMode article: http://stackoverflow.com/questions/4604868/mongodb-c-sharp-safemode-official-driver

    Exception Handling
    When safe mode is used, the driver ensures that an exception is thrown if something goes wrong (as said above, when not using safe mode no exception is thrown no matter what the outcome of the operation is). As explained at https://groups.google.com/forum/?fromgroups#!topic/mongodb-user/mS6jIq5FUiM there is no need to check a return value from a driver method that inserts data. With updates the situation is similar to any other relational database: if an update command doesn't affect any records, the call succeeds anyway (no exception is thrown) and you have to manually check for something like "records affected". For MongoDB, an update operation returns an instance of the SafeModeResult class, and you can verify the DocumentsAffected property to ensure the intended document was indeed updated. Note: please remember that an update method might return a null instance instead of a SafeModeResult instance when safe mode is not enabled.

    Useful Community Articles
    Comments about how MongoDB works and how that might affect your application: http://ethangunderson.com/blog/two-reasons-to-not-use-mongodb/
    FourSquare using MongoDB had serious scalability problems: http://mashable.com/2010/10/07/mongodb-foursquare/
    Is MongoDB a replacement for Memcached? http://www.quora.com/Is-MongoDB-a-good-replacement-for-Memcached/answer/Rick-Branson
    MongoDB introduction, shell, when not to use, maintenance, upgrade, backups, memory, sharding, etc.: http://www.markus-gattol.name/ws/mongodb.html
    MongoDB collection-level locking support: https://jira.mongodb.org/browse/SERVER-1240
    MongoDB performance tips: http://www.quora.com/MongoDB/What-are-some-best-practices-for-optimal-performance-of-MongoDB-particularly-for-queries-that-involve-multiple-documents
    Lessons learned migrating from SQL Server to MongoDB: http://www.wireclub.com/development/TqnkQwQ8CxUYTVT90/read
    MongoDB replication performance: http://benshepheard.blogspot.com.ar/2011/01/mongodb-replication-performance.html
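
    To make the safe-mode notes concrete, here is a short sketch against the legacy 1.x 10gen C# driver described above (database, collection and field names are made up for illustration):

      using MongoDB.Bson;
      using MongoDB.Driver;
      using MongoDB.Driver.Builders;

      // Safe mode enabled for the whole connection via the connection string
      var server = MongoServer.Create("mongodb://localhost/?safe=true");
      var users = server.GetDatabase("test").GetCollection<BsonDocument>("users");

      // With safe mode on, Insert blocks on getLastError and throws on failure
      users.Insert(new BsonDocument("name", "alice"), SafeMode.True);

      // An update "succeeds" even if nothing matches; check DocumentsAffected
      SafeModeResult result = users.Update(
          Query.EQ("name", "bob"),
          Update.Set("active", true),
          SafeMode.True);
      if (result != null && result.DocumentsAffected == 0)
      {
          // no document matched the query; handle accordingly
      }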

    Read the article

  • Increase the size of Taskbar Preview Thumbnails in Windows 7

    - by Matthew Guay
    Taskbar thumbnail previews are incredibly useful in Windows 7, but for some users they may be too small. Here's a tool to help you make your taskbar thumbnail previews just the way you want them. A few years ago we featured a tool to increase the size of your thumbnail previews in Windows Vista, but unfortunately that application doesn't work correctly in Windows 7. However, there is a new tool that lets you customize your taskbar thumbnail previews even further in Windows 7. With it, you can change almost anything about your taskbar thumbnail previews. The default taskbar thumbnails are nice, but may be too small for users with vision problems or with very high resolution monitors. Whatever your need, this is a great tool to make the thumbnails look and work just the way you want.

    Let's get started
    Download the Windows 7 Taskbar Thumbnail Customizer (link below), and unzip the files. Run the Windows 7 Taskbar Thumbnail Customizer when you're done. Simply double-click on it; you don't need to run it as administrator. Now you can change the size, spacing, margin, and delay time of your taskbar thumbnails. The Delay Time setting is very handy; to speed things up, we set it to 0 so there's no delay between when you mouse over a taskbar icon and when you see the thumbnail. Simply drag the slider to the size (or time, in the delay settings) you want, and click Apply settings. Windows Explorer will automatically restart, and your new taskbar thumbnails will be ready to use. Here is the default Windows 7 thumbnail preview of a video playing in Media Player, and here's the taskbar thumbnail enlarged to 380px: now you can really watch a video from your taskbar thumbnail. The larger taskbar thumbnails show up a little differently in Internet Explorer: it shows a larger preview of your active tab, and smaller previews of your other tabs. Notice also that Aero Peek shows the tab you're hovering over in Internet Explorer, but the tab name in IE's toolbar doesn't change to the one you're previewing. Here we increased the width between the thumbnails, while keeping the thumbnails at their default size. This could be useful if you have trouble selecting the correct preview, and we can imagine it would be a very useful modification on touch screens. And if you ever take your changes too far and want to revert to your default Windows 7 taskbar thumbnail previews, simply run the Customizer again and select Restore Defaults. Windows Explorer will restart again, and your taskbar thumbnails will be back to their default settings.

    Conclusion
    This tool makes it safe and easy to change the size, spacing, and more of your taskbar thumbnail previews. And since you can always revert to the default settings, you can experiment without fear of messing up your computer. If you'd prefer to change the settings manually without using a dedicated application, there's a list of the registry changes you can make to accomplish this by hand.

    Links
    Download the Windows 7 Taskbar Thumbnail Customizer from The Windows Club
    Vista Users: Increase Size of Windows Vista Taskbar Previews

    Read the article

  • SharePoint 2010 Hosting :: SharePoint 2010 Custom Web Template

    - by mbridge
    SharePoint 2010 offers some changes and additions to the SharePoint 2007 approach. Site definitions and publishing providers remain largely the same, but site templates created from the SharePoint UI or SharePoint Designer are now saved to a .WSP file, the same solution deployment packaging file format used for deploying custom SharePoint solutions. Site templates saved to a .WSP solution file can be imported into Visual Studio for additional customization.

    Introducing the WebTemplate Feature Element
    The WebTemplate element, introduced in SharePoint 2010, allows site templates to be defined and deployed as a Feature as part of a solution package. A WebTemplate element feature can be used to deploy site templates in either a Farm or Sandbox solution, without modification. If deployed as a Farm feature and solution, the site templates will appear on the site collection provisioning page in Central Administration and can be used to provision new site collections, or within a site collection to create sub-sites. If deployed as a Site feature and Sandbox solution, the site templates will appear within the site collection to support creating a root site or sub-sites.

    Creating a new WebTemplate Feature in Visual Studio 2010
    In addition to supporting the ability to save and import site templates created from the SharePoint UI into Visual Studio for customization, Visual Studio can also be used to create new site templates from scratch. In the following sample we will walk through how to create a new WebTemplate solution based on a customized version of the out-of-box Blank Site.
    1. Create a new Empty SharePoint Project in Visual Studio 2010.
    2. Add a new Empty Element to the project. We like to create folders for each type of element in our solution, so in our sample we have created a Web Templates folder and then added the BLANKENT element. NOTE: The Elements folder MUST share the same name as the WebTemplate Name property.
    3. Open the empty Elements.xml and add the <WebTemplate /> element block.
    4. Copy the default.aspx and ONET.XML files from the STS site definition location at 14\TEMPLATES\Site Templates\STS. We will customize the ONET.XML in the next section. Open the properties for each file and set the Deployment Type to ElementFile. This ensures the files are deployed with the element when included in a Feature.
    5. By default a new feature is added to the solution automatically when a new element is added. Rename and edit the feature as appropriate. Select Farm for the scope to deploy the WebTemplate to the entire farm, or Site for a sandboxed solution.

    Customize the ONET.XML
    At this point, you have a working WebTemplate solution that will deploy a site identical to the out-of-box Blank Site; however, the ONET.XML supporting the STS site definition contains 3 configurations (essentially 3 separate site templates) and can be simplified before customizing. In the following sample, we have trimmed the ONET.XML to the essentials for a single site template, and added references to the <SiteFeatures /> and <WebFeatures /> elements to include the SharePoint Standard and Enterprise features. We have left the top-level navigation bar and the default page module intact, but removed all other extraneous markup.
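
    For orientation, a minimal Elements.xml sketch for such a WebTemplate feature (attribute values here are illustrative; as the note above says, the Name must match the element folder):

      <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
        <WebTemplate
            Name="BLANKENT"
            Title="Blank Enterprise Site"
            Description="Customized blank site deployed as a WebTemplate feature"
            BaseTemplateName="STS"
            BaseTemplateID="1"
            BaseConfigurationID="0" />
      </Elements>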

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is reducing costs to improve profitability, and may be undertaken when a company is having financial problems or to prevent them. ROI is the amount of value received relative to the amount of money invested, according to PayPerClickList.com. With the ever-increasing demands on businesses to compete in today's market, companies constantly strive to reduce the time it takes for a concept to become a product sold in the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes, it in fact saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems. In my personal opinion, these drivers do not really differ for a profit-based organization compared to a non-profit organization. Both corporate entities strive to reduce cost and keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit businesses are more interested in increasing their reach within communities, whether to increase annual donations or to invest in the lives of others. Success or failure of a project can be determined by one or more of these drivers, based on the scope of the project and the company's priorities associated with each driver. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, the project might still be considered a success based on how close it came to meeting each of the priorities. Continuous evaluation of a project could lead to a decision to abort it because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that the software development process stages include:
    - Requirements Analysis and Definition
    - System Design
    - Program Design
    - Program Implementation
    - Unit Testing
    - Integration Testing
    - System Delivery
    - Maintenance
    Each evaluation at every stage should consider all the business drivers included in the scope of the project and how close they are expected to come to meeting expectations. In addition, minimum requirements of acceptance should be included in the scope of the project and reevaluated as the project progresses, to ensure that it still makes good economic sense to continue. If the project falls below these benchmarks, it should be put on hold until it makes sense again, or aborted because it does not meet the business driver requirements.

    References
    Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. Retrieved July 19, 2009, from Answers.com: http://www.answers.com/topic/cost-reduction-program
    Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov: http://www.ssa.gov/gix/definitions.html
    PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com: http://www.payperclicklist.com/glossary/termr.html
    Pfleeger, S., & Atlee, J. (2009). Software Engineering: Theory and Practice. Boston: Prentice Hall
    Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy's Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • Changes in licence in a forked project: what are my rights?

    - by Wes
    Hi I'm intrested in using the apparently now defunct app-mdi libray in a flex application for a paying customer. http://sourceforge.net/projects/appmdi/ It appears that the app-mdi project has been forked from flex-mdi and indeed the code has so much in common it would appear almost identical to the origional code. Now in the original source flex-mdi the following licence appears in the source code /* Copyright (c) 2007 FlexMDI Contributors. See: http://code.google.com/p/flexmdi/wiki/ProjectContributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ However in the app-mdi library on the same file the following licence appears. Copyright (c) 2010, TRUEAGILE All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the TRUEAGILE nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ Now I've no problem with the licence except for the line. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. The copyright notice in its entireity makes no sense in binary material. Specifically talking about redistobutions in the binary form. 
Finally the question is what exactly has to be shown on web clients who access softare that utilises this library? Also is changing the licence in this manner actually allowed?

    Read the article

  • Measuring ASP.NET and SharePoint output cache

    - by DigiMortal
    During ASP.NET output caching week on my local blog I wrote about how to measure the ASP.NET output cache. As my posting was based on real work and real-life results, I thought it might be interesting to you too. So here you can read what I did, how I did it, and what the results were.

    Introduction
    Caching is not effective without measuring it. As MVP Henn Sarv said in one of his sessions, you will get what you measure. And right he is. Lately I measured caching on a local Microsoft community portal to make sure that our caching strategy is good enough for the environment this system lives in. In this posting I will show you how to start measuring the cache of your web applications. Although the application measured is built on the SharePoint Server publishing infrastructure, all these counters have the same meaning as similar counters under pure ASP.NET applications.

    Measured counters
    I used Performance Monitor and the following performance counters (their names are similar on ASP.NET and SharePoint WCMS):
    - Total number of objects added – how many objects were added to the output cache.
    - Total object discards – how many objects were deleted from the output cache.
    - Cache hit count – how many times requests were served from the cache.
    - Cache hit ratio – the percentage of requests served from the cache.
    The first three counters are cumulative, while the last one is a coefficient. You can also use other counters to measure the full effect of caching (memory, processor, disk I/O, network load etc. before and after caching).

    Measuring process
    The measuring described here started from a freshly restarted web server. I measured the application over 12 hours, covering the time ranges when users are most active. The time range does not include late evening hours and night because there is nothing to measure during those hours. During measuring we performed no maintenance or administrative tasks on the server. All tasks performed were related to usual daily content management and content monitoring. We also had no advertisement campaigns or other promotions running at the same time.

    The results
    You can see the results in the following graphics: total number of objects added, total object discards, cache hit count, and cache hit ratio. You can see that adds and discards grow at the same tempo. This is good because the cache expires and less popular items are not kept in memory. If some content were much more popular, these lines would show a bigger distance between them. Cache hit count grows faster, which shows that more and more content is served from the cache. In our case it shows that the cache is filled optimally, and we could do even better if we tuned the caches more. The site also contains pages that are discarded when a subsite changes (a page was added/modified/deleted), and one modification may affect about four or five pages. This may also decrease the cache hit count, because during the day the site gets about 5-10 new pages. The cache hit ratio is currently extremely good: the suggested minimum is about 85%, but after some tuning and measuring I achieved 98.7%. This is due to the fact that new pages are requested most often, while after new pages are added the older ones are requested only occasionally, so they get discarded from the cache and only some of them return to the cache from time to time. Although this may also indicate the need for additional SEO work, the result is very good in technical terms.

    Conclusion
    Measuring the ASP.NET output cache is not a complex thing to do, and you can start by measuring the performance of the cache itself. Later you can move on and measure the effect of caching on other counters such as disk I/O, network, processors etc. What you have to achieve is an optimal cache that is not full of items asked for only a couple of times per day (you can avoid this by not using too-long cache durations). After some tuning you should be able to boost the cache hit ratio up to at least 85%.
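
    As a side note (not part of the measurement article itself): in plain ASP.NET Web Forms, page-level output caching is switched on with a directive like the one below, and the counters described above then reflect its behavior.

      <%@ OutputCache Duration="60" VaryByParam="None" %>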

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source Program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full source of the kernel, it is good enough for tracing it, and even for tweaking the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\ And the next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither applies to our case here. So what should I do? Let's dig deeper into the coreos\nk folder: there are a couple of subfolders, CELOG, KDSTUB, KERNEL and so on. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen there. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates! Here are the steps:
    1. Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\
    2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modification you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\
    4. build -c in %_WINCEROOT%\private\winceos\COREOS\nk\kernel
    If everything went well so far, you should get a new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\ Basically, you have just rebuilt your new kernel; the rest is to "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1" then "sysgen -p common nk nkprof" and "makeimg" if you can't wait another minute for "blddemo clean -q". That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up/restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates Sources files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, of course including the kernel. So here is a second way to build your own kernel, using the SYSGEN_CAPTURE tool. Again the steps:
    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Use "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c
    6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro defines and proper include paths to your SOURCES file.
    7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg". Voila!
    Here is an example of the macros you need to add for x86:

    CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE
    # Machine independent defines
    CDEFINES=$(CDEFINES) -DDBGSUPPORT
    _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
    INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc
    !IFDEF DP_SETTINGS
    CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
    !ENDIF
    ASM_SAFESEH=1
    CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE

    Read the article

  • Configuring Multiple Instances of MySQL in Solaris 11

    - by rajeshr
    Recently someone asked me for steps to configure multiple instances of MySQL database in an Operating Platform. Coz of my familiarity with Solaris OE, I prepared some notes on configuring multiple instances of MySQL database on Solaris 11. Maybe it's useful for some: If you want to run Solaris Operating System (or any other OS of your choice) as a virtualized instance in desktop, consider using Virtual Box. To download Solaris Operating System, click here. Once you have your Solaris Operating System (Version 11) up and running and have Internet connectivity to gain access to the Image Packaging System (IPS), please follow the steps as mentioned below to install MySQL and configure multiple instances: 1. Install MySQL Database in Solaris 11 $ sudo pkg install mysql-51 2. Verify if the mysql is installed: $ svcs -a | grep mysql Note: Service FMRI will look similar to the one here: svc:/application/database/mysql:version_51 3. Prepare data file system for MySQL Instance 1 zfs create rpool/mysql zfs create rpool/mysql/data zfs set mountpoint=/mysql/data rpool/mysql/data 4. Prepare data file system for MySQL Instance 2 zfs create rpool/mysql/data2 zfs set mountpoint=/mysql/data rpool/mysql/data2 5. Change the mysql/datadir of the MySQL Service (SMF) to point to /mysql/data $ svcprop mysql:version_51 | grep mysql/data $ svccfg -s mysql:version_51 setprop mysql/data=/mysql/data 6. Create a new instance of MySQL 5.1 (a) Copy the manifest of the default instance to temporary directory: $ sudo cp /lib/svc/manifest/application/database/mysql_51.xml /var/tmp/mysql_51_2.xml (b) Make appropriate modifications on the XML file $ sudo vi /var/tmp/mysql_51_2.xml - Change the "instance name" section to a new value "version_51_2" - Change the value of property name "data" to point to the ZFS file system "/mysql/data2" 7. Import the manifest to the SMF repository: $ sudo svccfg import /var/tmp/mysql_51_2.xml 8. Before starting the service, copy the file /etc/mysql/my.cnf to the data directories /mysql/data & /mysql/data2. $ sudo cp /etc/mysql/my.cnf /mysql/data/ $ sudo cp /etc/mysql/my.cnf /mysql/data2/ 9. Make modifications to the my.cnf in each of the data directories as required: $ sudo vi /mysql/data/my.cnf Under the [client] section port=3306 socket=/tmp/mysql.sock ---- ---- Under the [mysqld] section port=3306 socket=/tmp/mysql.sock datadir=/mysql/data ----- ----- server-id=1 $ sudo vi /mysql/data2/my.cnf Under the [client] section port=3307 socket=/tmp/mysql2.sock ----- ----- Under the [mysqld] section port=3307 socket=/tmp/mysql2.sock datadir=/mysql/data2 ----- ----- server-id=2 10. Make appropriate modification to the startup script of MySQL (managed by SMF) to point to the appropriate my.cnf for each instance: $ sudo vi /lib/svc/method/mysql_51 Note: Search for all occurences of mysqld_safe command and modify it to include the --defaults-file option. An example entry would look as follows: ${MySQLBIN}/mysqld_safe --defaults-file=${MYSQLDATA}/my.cnf --user=mysql --datadir=${MYSQLDATA} --pid=file=${PIDFILE} 11. Start the service: $ sudo svcadm enable mysql:version_51_2 $ sudo svcadm enable mysql:version_51 12. Verify that the two services are running by using: $ svcs mysql 13. Verify the processes: $ ps -ef | grep mysqld 14. 
Connect to each mysqld instance and verify: $ mysql --defaults-file=/mysql/data/my.cnf -u root -p $ mysql --defaults-file=/mysql/data2/my.cnf -u root -p Some references for Solaris 11 newbies Taking your first steps with Solaris 11 Introducing the basics of Image Packaging System Service Management Facility How To Guide For a detailed list of official educational modules available on Solaris 11, please visit here For MySQL courses from Oracle University access this page.

    Read the article

  • Hidden exceptions

    - by user12617285
    Occasionally you may find yourself in a Java application environment where exceptions in your code are being caught by the application framework and either silently swallowed or converted into a generic exception. Either way, the potentially useful details of your original exception are inaccessible. Wouldn't it be nice if there was a VM option that showed the stack trace for every exception thrown, whether or not it's caught? In fact, HotSpot includes such an option: -XX:+TraceExceptions. However, this option is only available in a debug build of HotSpot (search globals.hpp for TraceExceptions). And based on a quick skim of the HotSpot source code, this option only prints the exception class and message. A more useful capability would be to have the complete stack trace printed, as well as the code location catching the exception. This is what the various TraceException* options in Maxine do (and more). That said, there is a way to achieve a limited version of the same thing with a stock standard JVM. It involves the use of the -Xbootclasspath/p non-standard option. The trick is to modify the source of java.lang.Exception by inserting the following:

    private static final boolean logging = System.getProperty("TraceExceptions") != null;

    private void log() {
        if (logging && sun.misc.VM.isBooted()) {
            printStackTrace();
        }
    }

    Then every constructor simply needs to be modified to call log() just before returning:

    public Exception(String message) {
        super(message);
        log();
    }

    public Exception(String message, Throwable cause) {
        super(message, cause);
        log();
    }

    // etc...

    You now need to compile the modified Exception.java source and prepend the resulting class to the boot class path, as well as add -DTraceExceptions to your java command line. Here's a console session showing these steps:

    % mkdir boot
    % javac -d boot Exception.java
    % java -DTraceExceptions -Xbootclasspath/p:boot -cp com.oracle.max.vm/bin test.output.HelloWorld
    java.util.zip.ZipException: error in opening zip file
        at java.util.zip.ZipFile.open(Native Method)
        at java.util.zip.ZipFile.<init>(ZipFile.java:127)
        at java.util.jar.JarFile.<init>(JarFile.java:135)
        at java.util.jar.JarFile.<init>(JarFile.java:72)
        at sun.misc.URLClassPath$JarLoader.getJarFile(URLClassPath.java:646)
        at sun.misc.URLClassPath$JarLoader.access$600(URLClassPath.java:540)
        at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:607)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:599)
        at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:583)
        at sun.misc.URLClassPath$3.run(URLClassPath.java:333)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.misc.URLClassPath.getLoader(URLClassPath.java:322)
        at sun.misc.URLClassPath.getLoader(URLClassPath.java:299)
        at sun.misc.URLClassPath.getResource(URLClassPath.java:168)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at sun.misc.Launcher$ExtClassLoader.findClass(Launcher.java:229)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:295)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    java.security.PrivilegedActionException
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:599)
        at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:583)
        at sun.misc.URLClassPath$3.run(URLClassPath.java:333)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.misc.URLClassPath.getLoader(URLClassPath.java:322)
        ...

    It's worth pointing out that this is not as useful as direct VM support for tracing exceptions. It has (at least) the following limitations:
    - The trace is shown for every exception, whether it is caught or not.
    - It only applies to subclasses of java.lang.Exception, as there appear to be bootstrap issues when the modification is applied to Throwable.java.
    - It does not show you where the exception was caught.
    - It involves overriding a class in rt.jar, something that should never be done in a non-development environment.

    Read the article

  • Part 9: EBS Customizations, how to track

    - by volker.eckardt(at)oracle.com
    In the previous blogs we concentrated on the preparation tasks: we have defined standards, we know which tools and techniques we will start with, and we have defined the modification strategy and how best to handle such topics. Now we are ready to take the requirements! Such requirements come over in spreadsheets, Word files (like GAP documents), or in any other format. As we have to assign some attributes, we start by numbering them all and assigning a short name to each of these requirements (= the CEMLI reference). We may also already have a functional person assigned, we might involve someone from the tech team to estimate, and we may like to assign a status such as 'planned', 'estimated' etc. All this data is usually kept in spreadsheets, but I would put it into a database (yes, I am from Oracle :). If you don't have a good-looking and centralized application already, please give Oracle APEX a try. It should be up and running in a day, and the imported sheets are then manageable concurrently! For one of my clients I created this CEMLI-DB, since enriched with a lot of additional functionality, but initially it was just a simple centralized CEMLI tracking application. Why am I pointing out again the centralized way to manage such data? Well, your data quality will increase dramatically if you let your project members see (and also review and update) "your" data. APEX allows you to filter, sort, print, and also export, and if you can spend some time defining proper value lists, everyone will gain from it. APEX allows you to work in 'agile' mode, meaning you can improve your application step by step. Let's say you'd like to reference a document, or even upload it: you can do that. Or you need to classify the CEMLIs by release: just add a release field, and the same for business area or CEMLI type. One CEMLI record may then look like this (screenshot in the original post). Prepare one or two (online) reports to be ready to present your "workload" to project management. Use such extracts also when you work offline (to prioritize etc.), but as soon as you are connected again, feed the data back into the central application. Note: I have combined this application with an additional issue tracker. Here the most important element is the CEMLI reference, which acts as a link to any other application (if you are not using APEX as the issue tracker too :). Please spend a minute defining such a reference (see blog Part 8: How to name Customizations).
    Summary: Building the bridge from gap analysis to development has to be done in a controlled way. The information is usually provided in different forms, but it is suggested to collect all requirements centrally. Oracle APEX is a great solution for entering and maintaining such information in a structured but flexible way. APEX helped me a lot when working with distributed development teams during the complete development cycle.

    Read the article

  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see just what is going on under the hood of your browser. How Much Memory are the Extensions Using? Here is our test browser with a new tab and the Extensions Page open, five enabled extensions, and one disabled at the moment. You can access Chrome’s Task Manager using the Page Menu, going to Developer, and selecting Task manager… Or by right clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the “keyboard ninjas”. Sitting idle as shown above here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking then you can easily modify the information that is available by right clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory. Using the about:memory Page to View Memory Usage Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds Link in the lower left corner of the Task Manager Window. Focusing on the four distinct areas you can see the exact version of Chrome that is currently installed on your system… View the Memory & Virtual Memory statistics for Chrome… Note: If you have other browsers running at the same time you can view statistics for them here too. See a list of the Processes currently running… And the Memory & Virtual Memory statistics for those processes. The Difference with the Extensions Disabled Just for fun we decided to disable all of the extension in our test browser… The Task Manager Window is looking rather empty now but the memory consumption has definitely seen an improvement. Comparing Memory Usage for Two Extensions with Similar Functions For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you are wanting to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial”(see our review here). The stats for Speed Dial…quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly both were nearly identical in the amount of memory being used. Purging Memory Perhaps you like the idea of being able to “purge” some of that excess memory consumption. With a simple command switch modification to Chrome’s shortcut(s) you can add a Purge Memory Button to the Task Manager Window as shown below.  Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption. Conclusion We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources every little bit definitely helps out. 

    Read the article

  • Securing Flexfield Value Sets in EBS 12.2

    - by Sara Woodhull
    Release 12.2 includes a new feature: flexfield value set security. This new feature gives you additional options for ensuring that different administrators have non-overlapping responsibilities, which in turn provides checks and balances for sensitive activities.

Separation of Duties (SoD) is one of the key concepts of internal controls and is a requirement for many regulations, including:
- the Sarbanes-Oxley (SOX) Act
- the Health Insurance Portability and Accountability Act (HIPAA)
- the European Union Data Protection Directive

Its primary intent is to put barriers in place to prevent fraud or theft by an individual acting alone. Implementing Separation of Duties requires minimizing the possibility that users could modify data across application functions where the users should not normally have access. For flexfields and report parameters in Oracle E-Business Suite, values in value sets can affect functionality such as the rollup of accounting data, the job grades used at a company, and so on. Controlling access to the creation or modification of value set values can therefore be an important piece of implementing Separation of Duties in an organization.

New Flexfield Value Set Security feature

Flexfield value set security allows system administrators to restrict users from viewing, adding, or updating values in specific value sets. Value set security enables role-based separation of duties for key flexfields, descriptive flexfields, and report parameters. For example, you can set up value set security such that certain users can view or insert values for any value set used by the Accounting Flexfield but no other value sets, while other users can view and update values for value sets used for any flexfields in Oracle HRMS. You can also segregate access by Operating Unit as well as by role or responsibility. Value set security uses a combination of data security and role-based access control in Oracle User Management.

Flexfield value set security provides a level of security that is different from the previously-existing and similarly-named features in Oracle E-Business Suite:
- Function security controls whether a user has access to a specific page or form, as well as what operations the user can do in that screen.
- Flexfield value security controls what values a user can enter into a flexfield segment or report parameter (by responsibility) during routine data entry in many transaction screens across Oracle E-Business Suite.
- Flexfield value set security (this feature, new in Release 12.2) controls who can view, insert, or update values for a particular value set (by flexfield, report, or value set) in the Segment Values form (FNDFFMSV).

The effect of flexfield value set security is that a user of the Segment Values form will only be able to view those value sets for which the user has been granted access. Further, the user will be able to insert or update/disable values in that value set only if the user has been granted privileges to do so. Flexfield value set security affects independent, dependent, and certain table-validated value sets for flexfields and report parameters.

Initial State of the Feature upon Upgrade

Because this is a new security feature, it is turned on by default. When you initially install or upgrade to Release 12.2.2, no users are allowed to view, insert, or update any value set values; users may even think that their values are missing or invalid because they cannot see them. You must explicitly set up access for specific users by enabling appropriate grants and roles for those users.

We recommend using flexfield value set security as part of a comprehensive Separation of Duties strategy. However, if you choose not to implement flexfield value set security upon upgrading to or installing Release 12.2, you can enable backwards compatibility (users can access any value sets if they have access to the Values form) after you upgrade. The feature does not affect day-to-day transactions that use flexfields. However, you must either set up specific grants and roles or enable backwards compatibility before users can create new values or update or disable existing values.

For more information, see:
- Release 12.2 Flexfield Value Set Security Documentation Update for Patch 17305947:R12.FND.C (Document 1589204.1)
- R12.2 TOI: Implement and Use Application Object Library (AOL) - Flexfields Security and Separation of Duties for Value Sets (recorded training)

    Read the article

  • IPgallery banks on Solaris SPARC

    - by Frederic Pariente
    IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution that enables operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.

Oracle Solaris Binary Application Guarantee

A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10, and now runs it on Solaris 11 without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.

Oracle Solaris Image Packaging System (IPS)

IPS is a new repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management, such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system in order to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature (sample commands appear at the end of this article). Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.

Extreme Network Performance

IPgallery saw a huge improvement in application performance, in both CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) consumes 40%-50% CPU on T2, while it consumes only 10% of the CPU on T4. The testing environment comprised a Softswitch (call management), TappS (Telecom Application Server) and Billing Server running on the same machine and initiating various services at a capacity of 1000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network-layer processing and, in the end, call-attempt latency. Finally, there was a huge improvement in file system and disk I/O operations; they ran all tests with maximum logging enabled and it did not influence any benchmark values.

"Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware. This means instead of deploying the solution on six T2-based machines, we will deploy on 2 redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization." - Shimon Lichter, VP R&D, IPgallery

In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator, the end-customer, to deliver a high-quality service while maintaining high profitability.
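As a rough sketch of the IPS workflow described above, the standard Oracle Solaris 11 pkg(1) commands for registering a publisher, installing a package, and verifying its integrity look like the following. The repository URL and the Solaris Studio 12.3 package name are shown as examples and may differ for your support repository and entitlement:

    # pkg set-publisher -g https://pkg.oracle.com/solarisstudio/release solarisstudio
    # pkg install developer/solarisstudio-123
    # pkg verify developer/solarisstudio-123

pkg verify is the built-in integrity check mentioned above: it compares the installed files against the package manifest and reports any deviation.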

    Read the article

  • Automated SSRS deployment with the RS utility

    - by Stacy Vicknair
    If you’re familiar with SSRS and development you are probably aware of the SSRS web services. The RS utility is a tool that comes with SSRS that allows scripts to be executed against the SSRS web service without needing to create an application to consume the service. One of the better benefits of using this format rather than writing an application is that the script can be modified by others who might be involved in the creation and addition of scripts or the management of the SSRS environment.

Reporting Services Scripter

Jasper Smith from http://www.sqldbatips.com created Reporting Services Scripter to assist with creating a batch process to deploy an entire SSRS environment. The helper scripts below were created through the modification of his generated scripts. Why not just use this tool? You certainly can. For me, the volume of scripts generated seems less maintainable than just using some common methods extracted from these scripts and creating a deployment in a single script file. I would, however, recommend this product if you do not think that your environment will change drastically or if you do not need a higher level of control over the deployment. If you just need to replicate, this tool works great.

Executing with RS.exe

Executing a script against rs.exe is fairly simple; a sample invocation appears after the script below.

The Script

Half the battle is having a starting point. For the scripting I needed to do, the below is the starter script. A few notes:
- This script assumes integrated security.
- This script assumes your reports have one data source each.

Both of the above are just what made sense for my scenario and are definitely modifiable to accommodate your needs. If you are unsure how to change the scripts to your needs, I recommend Reporting Services Scripter to help you understand the differences. The script has three main methods: CreateFolder, CreateDataSource and CreateReport. Scripting the server deployment is just a process of recreating all of the elements that you need through calls to these methods. If there are additional elements that you need to deploy that aren’t covered by these methods, again I suggest using Reporting Services Scripter to get the code you would need, convert it to a repeatable method and add it to this script!
Public Sub Main()
    CreateFolder("/", "Data Sources")
    CreateFolder("/", "My Reports")
    CreateDataSource("/Data Sources", "myDataSource", _
        "Data Source=server\instance;Initial Catalog=myDatabase")
    CreateReport("/My Reports", _
        "MyReport", _
        "C:\myreport.rdl", _
        True, _
        "/Data Sources", _
        "myDataSource")
End Sub

Public Sub CreateFolder(parent As String, name As String)
    Dim fullpath As String = GetFullPath(parent, name)
    Try
        RS.CreateFolder(name, parent, GetCommonProperties())
        Console.WriteLine("Folder created: {0}", name)
    Catch e As SoapException
        If e.Detail.Item("ErrorCode").InnerText = "rsItemAlreadyExists" Then
            Console.WriteLine("Folder {0} already exists and cannot be overwritten", fullpath)
        Else
            Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
        End If
    End Try
End Sub

Public Sub CreateDataSource(parent As String, name As String, connectionString As String)
    Try
        RS.CreateDataSource(name, parent, False, GetDataSourceDefinition(connectionString), GetCommonProperties())
        Console.WriteLine("DataSource {0} created successfully", name)
    Catch e As SoapException
        Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
    End Try
End Sub

Public Sub CreateReport(parent As String, name As String, location As String, overwrite As Boolean, dataSourcePath As String, dataSourceName As String)
    Dim reportContents As Byte() = Nothing
    Dim warnings As Warning() = Nothing
    Dim fullpath As String = GetFullPath(parent, name)

    Try
        ' Read the RDL definition from disk
        Dim stream As FileStream = File.OpenRead(location)
        reportContents = New [Byte](stream.Length - 1) {}
        stream.Read(reportContents, 0, CInt(stream.Length))
        stream.Close()

        warnings = RS.CreateReport(name, parent, overwrite, reportContents, GetCommonProperties())

        If Not (warnings Is Nothing) Then
            Dim warning As Warning
            For Each warning In warnings
                Console.WriteLine(warning.Message)
            Next warning
        Else
            Console.WriteLine("Report: {0} published successfully with no warnings", name)
        End If

        ' Point the report at the shared data source created earlier
        Dim dataSources(0) As DataSource
        Dim dsr0 As New DataSourceReference
        dsr0.Reference = dataSourcePath
        Dim ds0 As New DataSource
        ds0.Item = CType(dsr0, DataSourceDefinitionOrReference)
        ds0.Name = dataSourceName
        dataSources(0) = ds0

        RS.SetItemDataSources(fullpath, dataSources)

        Console.WriteLine("Report DataSources set successfully")

    Catch e As IOException
        Console.WriteLine(e.Message)
    Catch e As SoapException
        Console.WriteLine("Error : " + e.Detail.Item("ErrorCode").InnerText + " (" + e.Detail.Item("Message").InnerText + ")")
    End Try
End Sub

Public Function GetCommonProperties() As [Property]()
    ' Common CatalogItem properties applied to every item the script creates
    Dim descprop As New [Property]
    descprop.Name = "Description"
    descprop.Value = ""
    Dim hiddenprop As New [Property]
    hiddenprop.Name = "Hidden"
    hiddenprop.Value = "False"

    Dim props(1) As [Property]
    props(0) = descprop
    props(1) = hiddenprop
    Return props
End Function

Public Function GetDataSourceDefinition(connectionString As String) As DataSourceDefinition
    ' Integrated security, per the assumption noted above
    Dim definition As New DataSourceDefinition
    definition.CredentialRetrieval = CredentialRetrievalEnum.Integrated
    definition.ConnectString = connectionString
    definition.Enabled = True
    definition.EnabledSpecified = True
    definition.Extension = "SQL"
    definition.ImpersonateUser = False
    definition.ImpersonateUserSpecified = True
    definition.Prompt = "Enter a user name and password to access the data source:"
    definition.WindowsCredentials = False
    definition.OriginalConnectStringExpressionBased = False
    definition.UseOriginalConnectString = False
    Return definition
End Function

Private Function GetFullPath(parent As String, name As String) As String
    If parent = "/" Then
        Return parent + name
    Else
        Return parent + "/" + name
    End If
End Function
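Once the script is saved (rs.exe expects an .rss file), running it is a single command against the target report server; the file name and server URL below are placeholders for your own:

    rs.exe -i DeployReports.rss -s http://myserver/reportserver

If the web service requires explicit credentials instead of your Windows login, rs.exe also accepts -u and -p switches for the user name and password.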

    Read the article

  • Enterprise Integration: Can Companies Afford It?

    - by Ralph Wheaton
    Each year, my company holds a global sales conference where employees and partners from around the world come together to collaborate, share knowledge and ideas, and learn about future plans.  As a member of the professional services division, several of us had been asked to make a presentation: an elevator pitch in 3 minutes or less that relates to a success we have worked on or directly relates to our tag (that is, our primary technology focus).  Mine happens to be Enterprise Integration as it relates to Business Intelligence.  I found it rather difficult to present that pitch in a short amount of time and had to pare it down.  At any rate, in just a little over 3 minutes, this is the presentation I submitted.  Here is a link to the full presentation video in WMV format.

Many companies today subscribe to a buy-versus-build mentality in an attempt to drive down costs and improve time to implementation. Sometimes this makes sense, especially as it relates to specialized software or software that performs a small number of tasks extremely well. However, if not carefully considered or planned out, this oftentimes leads to multiple disparate systems with silos of data or multiple versions of the same data. For instance, client data (contact information, addresses, phone numbers, opportunities, sales) stored in your CRM system may not play well with Accounts Receivables. Employee data may be stored across multiple systems such as HR, Time Entry and Payroll. Other data (such as member data) may not originate internally, but be provided by multiple outside sources in multiple formats. And to top it all off, some data may have to be manually entered into multiple systems to keep it all synchronized. When left to grow out of control like this, overall performance is lacking, stability is questionable and maintenance is frequent and costly. Worse yet, in many cases, this topology, this hodgepodge of data, creates a reporting nightmare. Decision makers are forced to try to put together pieces of the puzzle, attempting to find the information they need, wading through multiple systems to find what they think is the single version of the truth. More often than not, they find they are missing pieces, pieces that may be crucial to growing the business rather than closing the business.

An enterprise integration solution addresses this by establishing consistent, controlled movement of data across applications. Master data owners are defined to establish single sources of data (for example, the CRM system owns client data). Other systems subscribe to the master data, and changes are replicated to subscribers as they are made. This can be one-way (no changes are allowed on the subscriber systems) or bi-directional. But at all times, the master data owner is current, or up to date. And all data, whether internal or external, uses the same processes and methods to move from one place to another, leveraging the same validations, lookups and transformations enterprise-wide, eliminating inconsistencies and siloed data.

Once implemented, an enterprise integration solution improves performance and stability by reducing the number of moving parts and eliminating inconsistent data. Overall maintenance costs are mitigated by reducing touch points, or the number of places that require modification when a business rule is changed or another data element is added. Most importantly, however, decision makers can now easily extract and piece together the information they need to grow their business, improve customer satisfaction and so on.

So, in implementing an enterprise integration solution, companies can position themselves for the future, allowing for an easy transition to data marts, data warehousing and, ultimately, business intelligence. Along this path, companies can achieve growth in size, intelligence and complexity. Truly, the question is not whether companies can afford to implement an enterprise integration solution, but whether they can afford not to.

Ralph Wheaton
Microsoft Certified Technology Specialist
Microsoft Certified Professional Developer
Microsoft VTS-P BizTalk, .Net

    Read the article

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing, and I am so happy that I have so far authored 6 different video courses with Pluralsight. Here is the list of the courses; I have listed all of my video courses over here. Note: If you click on a course and it does not open, you need to log in to Pluralsight with a valid username and password or sign up for a FREE trial.

Please leave a comment with your favorite course in the comment section. Random 10 winners will get a surprise gift via email. Bonus points if you also list your favorite module from the course site.

SQL Server Performance: Introduction to Query Tuning
SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course

SQL Server Performance: Indexing Basics
Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. This course is for every DBA and developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course

SQL Server Questions and Answers
This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course

MySQL Fundamentals
MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as writing basic data retrieval and data modification queries (a tiny example appears at the end of this post). Click to View Course

Building a Successful Blog
Expressing yourself is the most common behavior of humans, and blogging has made it easy to express yourself. Just like a letter or a book, a blog has a structure and formula. In this introductory course on blogging we go over the basics of blogging and show the way to get started with blogging immediately. If you already have a blog, this course will be even more relevant, as it discusses many of the common questions and issues you face in your blogging routine. Click to View Course

Introduction to ColdFusion
ColdFusion is a rapid web application development platform. In this course you will learn the basics of how to use the ColdFusion platform and rapidly develop web sites. The course begins with the basics of the ColdFusion Markup Language and moves to common development language practices. From there we move to frequent database operations and advanced concepts of Forms, Sessions and Cookies. The last module sums up all the concepts covered in the course with a sample application. Click to View Course

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
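As a quick taste of the "basic data retrieval and data modification" queries mentioned in the MySQL Fundamentals description above, here is the general shape of both; the table and column names are made up for illustration:

    SELECT first_name, last_name FROM customers WHERE country = 'India';
    UPDATE customers SET email = 'new.address@example.com' WHERE customer_id = 42;

The first statement reads the rows that match a filter; the second changes existing rows in place.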

    Read the article

  • How To Customize Wallpaper in Windows 7 Starter Edition

    - by Asian Angel
    If you have the Starter Edition of Windows 7 installed on your netbook, you may be sick of looking at the default wallpaper. With Starter Background Changer you can access other customization options with ease.

Before

There is not a lot that you can say about the singular default wallpaper included with the Starter Edition…it just kind of sits there, all boring like.

Installing Starter Background Changer

Since the installer part of the program is in French, we have the entire set of install windows shown here with the appropriate buttons highlighted to get you through the whole process without any problems.

Using Starter Background Changer

Once the installation process has finished you will simply see a quiet screen with no desktop icons or Start Menu entries visible. Now if you are wondering at this point “Did the program finish installing, or did it install at all?”, the answer is yes. Right click on your desktop and you will notice a new entry on the Context Menu…the same one that is included in the other editions but not in Starter. Time to have some fun…

The Personalization Window will open maximized, but we have reduced it here for our screenshots. You have four regular categories to choose from in the lower part of the window: Wallpaper, Colors, Sounds, & Screensavers.

The first category that we chose for our example was Wallpaper. As you can see here, the main display area (My Collection) has no wallpapers showing at the moment. You can use the drop-down menu to access your My Pictures Folder or browse for a different location. Notice that you can choose how the image fills the screen and set up a timed wallpaper slideshow at the bottom. Any picture (or pictures) selected will be added to the My Collection display for easy access the next time you open the window. Once you choose a picture, click on Validate the modification to set the wallpaper for your desktop and return to the main window.

When you return to the main window you will see a preview of your selection. At this point you can simply close the window or make further adjustments in the other categories. Starter Background Changer provides easy one-stop access to other customization areas. We started off with Colors… Followed by Sounds… And finally Screensavers.

Before you close the main window you can take a quick look at the Options if desired. We did set Optimization of the images to High on our system. Quick and easy wallpaper satisfaction. We did pin the Program Window to our Taskbar…nice if you prefer this method as opposed to the Desktop Context Menu.

Conclusion

If you have been longing for a way to change the wallpaper in Windows 7 Starter Edition then you will definitely want to give this program a try. Goodbye, boring default wallpaper! For more wonderful ways to customize your Windows 7 Starter Edition be sure to read our article here.

Links

Download Starter Background Changer

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >