Search Results

Search found 3750 results on 150 pages for 'sales pipeline visibility'.


  • How to use multiple database adapters in a query involving tables from different databases?

    - by understack
    I have two databases, which are set up as mentioned here. How can I write an SQL query which involves database_1.table_1 and database_2.table_1? E.g., consider this query:

    ```php
    $sql = "SELECT DISTINCT database_1.users.id, database_1.users.name
            FROM database_1.users, database_2.sales
            WHERE database_2.sales.user_id = database_1.users.id";
    ```

    How could this query be written using multiple DB adapters?

    Edit: I'm using two databases because this way I can change the actual database names in application.ini. Is there any other way I can change database names without changing the SQL queries?
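    Since a single SQL statement always runs on one connection, a common approach is one adapter plus schema-qualified table names read from configuration. A minimal sketch, assuming both schemas live on the same MySQL server and a multidb-style application.ini (the resource keys here are assumptions, adjust to your setup):

    ```php
    // Read the schema names from application.ini so queries never hardcode them.
    $config = new Zend_Config_Ini(APPLICATION_PATH . '/configs/application.ini', APPLICATION_ENV);
    $db1 = $config->resources->multidb->db1->dbname;   // e.g. "database_1"
    $db2 = $config->resources->multidb->db2->dbname;   // e.g. "database_2"

    // One adapter, two schemas: qualify the table names in the SQL itself.
    $adapter = Zend_Db_Table::getDefaultAdapter();
    $sql = "SELECT DISTINCT u.id, u.name
            FROM {$db1}.users u
            JOIN {$db2}.sales s ON s.user_id = u.id";
    $rows = $adapter->fetchAll($sql);
    ```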

    Read the article

  • How to calculate a centered moving average of a set of data in Hadoop MapReduce?

    - by 100gods
    I want to calculate the centered moving average of a set of data. Example input format:

        quarter | sales
        Q1'11   | 9
        Q2'11   | 8
        Q3'11   | 9
        Q4'11   | 12
        Q1'12   | 9
        Q2'12   | 12
        Q3'12   | 9
        Q4'12   | 10

    Mathematical representation of the data, with the moving average and then the centered moving average:

        Period   Value   MA      Centered
        1        9
        1.5
        2        8
        2.5              9.5
        3        9               9.5
        3.5              9.5
        4        12              10.0
        4.5              10.5
        5        9               10.750
        5.5              11.0
        6        12
        6.5
        7        9

    I am stuck with the implementation of a RecordReader which will provide the mapper the sales values of one year, i.e. of four quarters (see the RecordReader problem question thread). Thanks
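    Setting the Hadoop plumbing aside for a moment, here is a minimal sketch of the arithmetic itself (plain Java, names mine): for an even window size N, the centered value at a whole period is the mean of the two N-term moving averages straddling it.

    ```java
    // Centered moving average for an even window size n.
    static double[] centeredMovingAverage(double[] values, int n) {
        double[] ma = new double[values.length - n + 1];   // plain n-term averages
        for (int i = 0; i < ma.length; i++) {
            double sum = 0;
            for (int j = 0; j < n; j++) sum += values[i + j];
            ma[i] = sum / n;
        }
        double[] centered = new double[ma.length - 1];     // average adjacent pairs
        for (int i = 0; i < centered.length; i++)
            centered[i] = (ma[i] + ma[i + 1]) / 2.0;
        return centered;   // centered[i] lines up with period i + n/2 + 1
    }
    ```

    For the sales above (9, 8, 9, 12, 9, 12, 9, 10) with n = 4, this yields 9.5, 10.0, 10.5 and 10.25 for periods 3 through 6.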

    Read the article

  • Composing adaptors in Boost::Range

    - by bruno nery
    I started playing with Boost::Range in order to have a pipeline of lazy transforms in C++. My problem now is how to split a pipeline into smaller parts. Suppose I have:

    ```cpp
    int main(){
        auto map = boost::adaptors::transformed; // shorten the name
        auto sink = generate(1) | map([](int x){ return 2*x; })
                                | map([](int x){ return x+1; })
                                | map([](int x){ return 3*x; });
        for(auto i : sink)
            std::cout << i << "\n";
    }
    ```

    And I want to replace the first two maps with a magic_transform, i.e.:

    ```cpp
    int main(){
        auto map = boost::adaptors::transformed; // shorten the name
        auto sink = generate(1) | magic_transform() | map([](int x){ return 3*x; });
        for(auto i : sink)
            std::cout << i << "\n";
    }
    ```

    How would one write magic_transform? I looked up Boost::Range's documentation, but I can't get a good grasp of it.
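    One way that works is to note that map(f) | map(g) is equivalent to a single map of the composed function, so magic_transform() can simply return one transformed adaptor holding the composition. A minimal C++11 sketch (the unshown generate(1) is replaced with a plain vector here):

    ```cpp
    #include <boost/range/adaptor/transformed.hpp>
    #include <functional>
    #include <iostream>
    #include <vector>

    // Returns one pipeable adaptor equivalent to
    // map([](int x){ return 2*x; }) | map([](int x){ return x+1; }).
    auto magic_transform()
        -> decltype(boost::adaptors::transformed(std::function<int(int)>()))
    {
        std::function<int(int)> composed = [](int x) { return 2 * x + 1; };
        return boost::adaptors::transformed(composed);
    }

    int main() {
        std::vector<int> xs = {1, 2, 3};
        for (int i : xs | magic_transform()
                        | boost::adaptors::transformed([](int x) { return 3 * x; }))
            std::cout << i << "\n";   // prints 9, 15, 21
    }
    ```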

    Read the article

  • DOS batch file - Copy files based on filename elements

    - by user1848356
    I need to sort a lot of files based on their filenames. I would like to use a batch file to do it. I know what I want, but I am not sure of the correct syntax. Example of the filenames I work with (they are all in the same directory originally):

        2012_W34_Sales_Store001.pdf
        2012_W34_Sales_Store002.pdf
        2012_W34_Sales_Store003.pdf
        2012_November_Sales_Store001.pdf
        2012_November_Sales_Store002.pdf
        2012_November_Sales_Store003.pdf

    I would like to extract the pieces of information located between the "_" signs and put each into a different variable. The length of the information contained between the _ signs will be different every time. Example:

        var1="2012"
        var2="W34" (or November)
        var3="Sales"
        var4="001"

    If I am able to do this, I could then copy the files to the appropriate directory using:

        move %var1%_%var2%_%var3%_%var4%.pdf z:\%var3%\%var4%\%var1%\%var2%

    It would need to loop, because I have Store001 to Store050. Also, there are not only Sales reports; many other types are available. I hope I am clear. Please help me realize this batch file!
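    A minimal sketch of the tokenizing (target layout as described, untested against your share): for /f splits each filename on the underscores. Note this keeps "Store001" whole; stripping the "Store" prefix to get just "001" would need an extra substring step with delayed expansion.

    ```bat
    @echo off
    for %%F in (*_*_*_*.pdf) do (
        rem tokens split on "_" and "."; %%a=2012  %%b=W34 or November  %%c=Sales  %%d=Store001
        for /f "tokens=1-4 delims=_." %%a in ("%%F") do (
            if not exist "z:\%%c\%%d\%%a\%%b" md "z:\%%c\%%d\%%a\%%b"
            move "%%F" "z:\%%c\%%d\%%a\%%b\"
        )
    )
    ```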

    Read the article

  • Is it possible to submit an update for an app on the App Store without losing reviews?

    - by Frank Krueger
    Whenever I submit an update for my app, the number of reviews visible for it drops to 0. If the customer bothers to click through, they can see the previous versions' reviews... but the damage has been done. Since the app has 0 stars, I see a significant drop in sales. It takes the app a good week of earning new reviews to restore sales. I usually version the app as 1.1, 1.2, 1.3, etc. Is there something I can do during the update process to prevent losing reviews like this?

    Read the article

  • Alternative to the SQL LAG function

    - by mahen
    I have a table with data like this:

        Month   Book_Type   Sold_in_Dollars
        Jan     A           100
        Jan     B           120
        Feb     A           50
        Mar     A           60
        Mar     B           30

    and so on. I have to calculate the expected sales for each month and book type based on the last two months' sales. So for March and type A it would be (100+50)/2 = 75. For March and type B it is 120/1, since there is no data for Feb. I was trying to use the LAG function, but it wouldn't work since data is missing in a few rows. Any ideas on this?
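    If the database supports analytic functions (the LAG mention suggests Oracle), a windowed average with a RANGE frame handles the missing rows, because the frame is defined by month values rather than row positions. A minimal sketch, assuming the month is stored as a sortable number month_no and the table/column names are placeholders:

    ```sql
    SELECT month_no,
           book_type,
           AVG(sold_in_dollars) OVER (
               PARTITION BY book_type
               ORDER BY month_no
               RANGE BETWEEN 2 PRECEDING AND 1 PRECEDING
           ) AS expected_sales
    FROM book_sales;
    ```

    For March/type A this averages Jan and Feb ((100+50)/2 = 75); for March/type B only Jan falls in the frame, so the average is 120. With a DATE column the frame becomes RANGE BETWEEN INTERVAL '2' MONTH PRECEDING AND INTERVAL '1' MONTH PRECEDING.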

    Read the article

  • file_get_contents not displaying any data

    - by Lance
    I'm doing:

    ```php
    $info = file_get_contents('http://USERNAME:PASSWORD@api.appfigures.com/v2/sales?client_key=CLIENT_KEY');
    ```

    $info returns nothing. However, when I put the URL into my browser, JSON appears. I tried using cURL without any luck:

    ```php
    $data = array('client_key' => 'CLIENT_KEY');
    $link = "http://api.appfigures.com/v2/sales";
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $link);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, count($data));
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data));
    curl_setopt($ch, CURLOPT_USERPWD, "username:password");
    curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    $jsonData = curl_exec($ch);
    curl_close($ch);
    print_r($jsonData);
    ```

    Thanks
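    If the embedded credentials contain characters that need URL-encoding, the inline user:pass form can fail silently; sending an explicit Basic auth header via a stream context avoids that ambiguity. A minimal sketch (same placeholder credentials); note also that the cURL attempt POSTs while the browser does a GET:

    ```php
    $context = stream_context_create(array(
        'http' => array(
            'method' => 'GET',
            'header' => 'Authorization: Basic ' . base64_encode('USERNAME:PASSWORD'),
        ),
    ));
    $info = file_get_contents(
        'http://api.appfigures.com/v2/sales?client_key=CLIENT_KEY',
        false,
        $context
    );
    var_dump($info);   // false on failure; inspect $http_response_header for the status line
    ```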

    Read the article

  • How to use composition with another class to set a name?

    - by user1874549
    I have some of the code here. I am trying to use one class to reference another so I can set the person's first name. I want setFirstName in the main class to work, but the IDE says the variable isn't found. Will replacing 'first' with 'firstName' work?

    Main class:

    ```java
    public BasePlusCommissionEmployee( String first, String last, String ssn,
        double sales, double rate, double salary )
    {
        cE = new CommissionEmployee( first, last, ssn, sales, rate );
        setBaseSalary( salary );
    }

    public void setFirstName(String firstName)
    {
        // Trying to get this to work...
        cE.setFirstName(first);
    }
    ```

    SubClass:

    ```java
    private String firstName;

    public void setFirstName( String first )
    {
        firstName = first;
    }
    ```
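    To the closing question: yes. Inside setFirstName(String firstName) the parameter in scope is firstName, not first, so the delegation should read:

    ```java
    public void setFirstName(String firstName)
    {
        cE.setFirstName(firstName);   // pass the parameter that is actually in scope
    }
    ```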

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

    Core Engine Features

    Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

    Clean web.config Files Are Back!

    If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this:

    ```xml
    <?xml version="1.0"?>
    <configuration>
      <system.web>
        <compilation debug="true" targetFramework="4.0" />
      </system.web>
    </configuration>
    ```

    Need I say more?

    Configuration Transformation Files to Manage Configurations and Application Packaging

    ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file.
    To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

    ```xml
    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <connectionStrings>
        <add name="MyDB"
             connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True"
             xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
      </connectionStrings>
      <system.web>
        <compilation xdt:Transform="RemoveAttributes(debug)" />
        <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace">
          <error statusCode="500" redirect="InternalError.htm"/>
        </customErrors>
      </system.web>
    </configuration>
    ```

    You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated; all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system, so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu, which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje.

    Improved Routing

    Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then integrated it into the core ASP.NET engine, with a basic implementation, via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route requests to any ASP.NET Page without first having to write an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but Page-derived handlers you will still have to implement a custom IRouteHandler. ASP.NET Pages now include a RouteData collection that contains route information. Retrieving route data is now a lot easier: simply use this.RouteData.Values["routeKey"], where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
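    Putting those two pieces together, here is a minimal sketch (route and page names hypothetical) of registering a page route in Global.asax and reading its value in the target page:

    ```csharp
    // Global.asax: no IRouteHandler needed for Page targets in ASP.NET 4.0.
    void Application_Start(object sender, EventArgs e)
    {
        System.Web.Routing.RouteTable.Routes.MapPageRoute(
            "users",               // route name
            "users/{userId}",      // URL template
            "~/UserDetail.aspx");  // physical page that serves the route
    }

    // In UserDetail.aspx.cs:
    protected void Page_Load(object sender, EventArgs e)
    {
        string userId = (string)this.RouteData.Values["userId"];
    }
    ```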
    The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL:

        <%= this.GetRouteUrl("users", new { userId = "ricks" }) %>

    You can also use the new expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

        <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

    Finally, the Response object also includes a new RedirectToRoute() method to build a route URL for redirection without hardcoding the URL:

        Response.RedirectToRoute("users", new { userId = "ricks" });

    All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog, which has a couple of nice entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

    Session State Improvements

    Session state is an often used and abused feature in ASP.NET, and version 4.0 introduces a few enhancements geared towards making session state more efficient and minimizing at least some of the ill effects of overuse. The first improvement affects out-of-process session state, which is typically used in web farm environments or for sites that store application-sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session state bag into a blob that gets carried over the network and stored either in the State Server or in SQL Server via the session provider. Version 4.0 improves this serialization of the session data by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.

    Output Cache Provider

    Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module, so it becomes possible to plug in custom storage strategies for cached pages.
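    To give a feel for the shape of such a provider, here is a minimal skeleton (storage logic deliberately elided) of the class you would derive:

    ```csharp
    using System;
    using System.Web.Caching;

    // Skeleton only: the four overrides are where a real provider would
    // talk to its backing store (disk, distributed cache, database, ...).
    public class MyOutputCacheProvider : OutputCacheProvider
    {
        public override object Get(string key)
        {
            return null; // return the cached entry for key, or null if absent
        }

        public override object Add(string key, object entry, DateTime utcExpiry)
        {
            // add only if not already present; return the stored entry either way
            return entry;
        }

        public override void Set(string key, object entry, DateTime utcExpiry)
        {
            // insert or overwrite the entry for key
        }

        public override void Remove(string key)
        {
            // evict the entry for key
        }
    }
    ```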
    One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET into a generic Windows AppFabric framework in the future, so that various mechanisms like OutputCache, the non-Page-specific ASP.NET cache and possibly even session state can eventually use the same caching engine for storage of persisted data, both in-memory and in out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.

    Response.RedirectPermanent

    ASP.NET 4.0 includes features to issue a permanent redirect that sends an HTTP 301 Moved Permanently response rather than the standard 302 redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties; Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent(), which provides permanent redirection of route URLs.

    Preloading of Applications

    ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle, it can appear very slow to serve data to clients while it is warming up and loading initial resources. So rather than serve these startup requests slowly, ASP.NET 4.0 lets you force the application to initialize itself before it even accepts requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting serviceAutoStartEnabled on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config:

    ```xml
    <sites>
      <site name="WebApplication2" id="1">
        <application path="/"
                     serviceAutoStartEnabled="true"
                     serviceAutoStartProvider="PreWarmup" />
      </site>
    </sites>
    <serviceAutoStartProviders>
      <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" />
    </serviceAutoStartProviders>
    ```

    Hooking up a warm-up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like:

    ```csharp
    public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient
    {
        public void Preload(string[] parameters)
        {
            // initialization for app
        }
    }
    ```

    While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes, the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at an optimal performance level, without lag.
    Runtime Performance Improvements

    According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use fewer resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0.

    Summary

    The core feature set changes are minimal, which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform, and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0, with its full .NET runtime rev and its assembly and configuration consolidation, will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.

    Read the article

  • Events not getting fired properly

    - by prince23
    Hi, this is my XAML code: a DataGrid within another DataGrid.

    ```xml
    <sdk:DataGrid x:Name="dgLevel1" AutoGenerateColumns="False" VerticalAlignment="Top"
                  IsReadOnly="True" Margin="12,12,0,0"
                  RowDetailsVisibilityChanged="dgLevel1_RowDetailsVisibilityChanged"
                  SelectionMode="Extended" RowDetailsVisibilityMode="VisibleWhenSelected"
                  Height="412" HorizontalAlignment="Left" Width="816">
      <sdk:DataGrid.Columns>
        <sdk:DataGridTemplateColumn>
          <sdk:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
              <Button x:Name="myButton" Width="24" Height="24" Click="ExpandLevel1_Click">
                <Image x:Name="imgLevel1" Source="Images/detail.JPG" Stretch="None"/>
              </Button>
            </DataTemplate>
          </sdk:DataGridTemplateColumn.CellTemplate>
        </sdk:DataGridTemplateColumn>
        <sdk:DataGridTemplateColumn Header="Actual" Visibility="Collapsed">
          <sdk:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
              <sdk:Label Content="{Binding UniqueName}" />
            </DataTemplate>
          </sdk:DataGridTemplateColumn.CellTemplate>
        </sdk:DataGridTemplateColumn>
        <sdk:DataGridTextColumn Binding="{Binding Name}" Header="Name" Width="550" />
        <!--<sdk:DataGridTextColumn Binding="{Binding UniqueName}" Visibility="Collapsed"/>-->
        <sdk:DataGridTemplateColumn Header="Actual" Width="80">
          <sdk:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
              <sdk:Label Content="{Binding Age}" />
            </DataTemplate>
          </sdk:DataGridTemplateColumn.CellTemplate>
        </sdk:DataGridTemplateColumn>
      </sdk:DataGrid.Columns>
      <sdk:DataGrid.RowDetailsTemplate>
        <DataTemplate>
          <StackPanel Width="805">
            <sdk:DataGrid x:Name="dgLevel2" Width="797" Margin="17,0,0,0" HeadersVisibility="None"
                          AutoGenerateColumns="False" HorizontalAlignment="Center" IsReadOnly="True"
                          RowDetailsVisibilityChanged="dgLevel2_RowDetailsVisibilityChanged"
                          SelectionMode="Extended" RowDetailsVisibilityMode="VisibleWhenSelected">
              <sdk:DataGrid.Columns>
                <sdk:DataGridTemplateColumn>
                  <sdk:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                      <Button x:Name="myButton" Width="24" Height="30" Click="ExpandLevel2_Click">
                        <Image x:Name="imgLevel2" Source="Images/detail.JPG" Stretch="None"/>
                      </Button>
                    </DataTemplate>
                  </sdk:DataGridTemplateColumn.CellTemplate>
                </sdk:DataGridTemplateColumn>
                <sdk:DataGridTextColumn Binding="{Binding School}" Width="528" />
                <sdk:DataGridTextColumn Binding="{Binding College}" Visibility="Collapsed" />
                <sdk:DataGridTemplateColumn Header="Actual" Width="80">
                  <sdk:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                      <sdk:Label Content="{Binding DOB}" />
                    </DataTemplate>
                  </sdk:DataGridTemplateColumn.CellTemplate>
                </sdk:DataGridTemplateColumn>
              </sdk:DataGrid.Columns>
            </sdk:DataGrid>
          </StackPanel>
        </DataTemplate>
      </sdk:DataGrid.RowDetailsTemplate>
    </sdk:DataGrid>
    ```

    I have two DataGrids, each with an image button. But the ExpandLevel1_Click and ExpandLevel2_Click events are not getting fired properly: sometimes they fire, sometimes not. When I click the button, ExpandLevel1_Click fires first and then dgLevel1_RowDetailsVisibilityChanged. The same thing happens for the second DataGrid: ExpandLevel2_Click, then dgLevel2_RowDetailsVisibilityChanged. There are scenarios where the DataGrid event fires first and then the button click event. Why is this happening? Is there any solution for this? Looking forward to a solution; thanks in advance. prince

    Read the article

  • Prevent ListBox from focusing but leave ListBoxItem(s) focusable (WPF)

    - by modosansreves
    Here is what happens: I have a ListBox with items. The ListBox has focus. Some item (say, the 5th) is selected (has a blue background), but has no 'border'. When I press the 'Down' key, the focus moves from the ListBox to the first ListBoxItem. (What I want is to make the 6th item selected, regardless of the 'border'.) When I navigate using 'Tab', the ListBox never receives the focus again. But when the collection is emptied and filled again, the ListBox itself gets focus, and pressing 'Down' moves the focus to the item. How do I prevent the ListBox from gaining focus? P.S. listBox1.SelectedItem is my own class; I don't know how to make a ListBoxItem out of it to .Focus() it.

    EDIT: the code. XAML:

    ```xml
    <UserControl.Resources>
        <me:BooleanToVisibilityConverter x:Key="visibilityConverter"/>
        <me:BooleanToItalicsConverter x:Key="italicsConverter"/>
    </UserControl.Resources>
    <ListBox x:Name="lbItems">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <Grid>
                    <ProgressBar HorizontalAlignment="Stretch" VerticalAlignment="Bottom"
                                 Visibility="{Binding Path=ShowProgress, Converter={StaticResource visibilityConverter}}"
                                 Maximum="1" Margin="4,0,0,0" Value="{Binding Progress}" />
                    <TextBlock Text="{Binding Path=VisualName}"
                               FontStyle="{Binding Path=IsFinished, Converter={StaticResource italicsConverter}}"
                               Margin="4" />
                </Grid>
            </DataTemplate>
        </ListBox.ItemTemplate>
        <me:OuterItem Name="Regular Folder" IsFinished="True" Exists="True" IsFolder="True"/>
        <me:OuterItem Name="Regular Item" IsFinished="True" Exists="True"/>
        <me:OuterItem Name="Yet to be created" IsFinished="False" Exists="False"/>
        <me:OuterItem Name="Just created" IsFinished="False" Exists="True"/>
        <me:OuterItem Name="In progress" IsFinished="False" Exists="True" Progress="0.7"/>
    </ListBox>
    ```

    where OuterItem is:

    ```csharp
    public class OuterItem : IOuterItem
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public bool IsFolder { get; set; }
        public bool IsFinished { get; set; }
        public bool Exists { get; set; }
        public double Progress { get; set; }

        /// Code below is of lesser importance, but anyway
        #region Visualization helper properties
        public bool ShowProgress { get { return !IsFinished && Exists; } }
        public string VisualName { get { return IsFolder ? "[ " + Name + " ]" : Name; } }
        #endregion

        public override string ToString()
        {
            if (IsFinished) return Name;
            if (!Exists) return " ??? " + Name;
            return Progress.ToString("0.000 ") + Name;
        }

        public static OuterItem Get(IOuterItem item)
        {
            return new OuterItem() {
                Id = item.Id, Name = item.Name, IsFolder = item.IsFolder,
                IsFinished = item.IsFinished, Exists = item.Exists, Progress = item.Progress
            };
        }
    }
    ```

    Converters are (of lesser importance too, but useful if you copy-paste to get this working):

    ```csharp
    public class BooleanToItalicsConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            bool normal = (bool)value;
            return normal ? FontStyles.Normal : FontStyles.Italic;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }

    public class BooleanToVisibilityConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            bool exists = (bool)value;
            return exists ? Visibility.Visible : Visibility.Collapsed;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
    ```

    But most important, UserControl.Loaded() has:

        lbItems.Items.Clear();
        lbItems.ItemsSource = fsItems;

    where fsItems is an ObservableCollection<OuterItem>. The usability problem I describe takes place when I Clear() that collection (fsItems) and fill it with new items.
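    On the P.S.: the ListBoxItem wrapping a data item comes from the ItemContainerGenerator, which gives you something to call Focus() on. A minimal sketch, using the lbItems name from the XAML above:

    ```csharp
    // Fetch the generated container for the selected data item and focus it.
    var container = (ListBoxItem)lbItems.ItemContainerGenerator
                                        .ContainerFromItem(lbItems.SelectedItem);
    if (container != null)
        container.Focus();
    ```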

    Read the article

  • jQuery mouse events masked by img in IE8 - all other platforms/browsers work fine

    - by Bruce
    Using the jQuery mouse events mouseenter, mouseleave or hover works swimmingly on all Mac or Windows browsers except for IE8/Windows. I am using mouseenter and mouseleave to test for hot rectangles (absolutely positioned and dimensioned divs that have no content) over an image, used as hotspots to make the navigation buttons visible when different regions of the main enclosing rectangle (the image) are touched by the cursor. Under Windows/IE, jQuery never sends the notifications (mouseenter or mouseleave) as the cursor enters or exits one of the target divs. If I turn off the visibility of the image, everything works as expected (like it does in every other browser), so the image is effectively blocking all messages (the intention was for the image to be a background, with all the divs floating above it where they can be clicked on). I understand that there's a z-index gotcha (so explicitly specifying z-index for each absolutely positioned div does not work), but I am unclear as to how to nest or order multiple divs to allow a single set of jQuery rules to support all browsers. The image tag seems to trump all other divs and always appears in front of them... BTW: I could not use img as a tag in this text, so it is spelled image in the HTML below (so the input editor does not think that I am trying to pull a fast one on Stack Overflow).

    How is it used? "mainview" is the background image; "zoneleft" and "zoneright" are the active areas; when the cursor enters them, the nav buttons "leftarrow" and "rightarrow" are supposed to appear.

    JavaScript:

    ```js
    $("div#zoneleft").bind("mouseenter", function () { // enters left zone, show left arrow
        arrowVisibility("left");
        document.getElementById("leftarrow").style.display = "block";
    }).bind("mouseleave", function () {
        document.getElementById("leftarrow").style.visibility = "hidden";
        document.getElementById("rightarrow").style.visibility = "hidden";
    });
    ```

    HTML (img tags renamed so they could be included here):

    ```html
    <div id="zoneleft" style="position:absolute; top:44px; left:0px; width:355px; height:372px; z-index:40;">
      <div id="leftarrow" style="position:absolute; top:158px; left:0px; z-index:50;">
        <image src="images/newleft.png" width="59" height="56"/>
      </div>
    </div>
    <div id="zoneright" style="position:absolute; top:44px; left:355px; width:355px; height:372px; z-index:40;">
      <div id="rightarrow" style="position:absolute; top:158px; left:296px; z-index:50;">
        <image src="images/newright.png" width="59" height="56" />
      </div>
    </div>
    </div><!-- navbuttons -->
    <image id="mainview" style="z-index:-1;" src="images/projectPhotos/photo1.jpg" width:710px; height:372px; />
    </div><!--photo-->
    ```
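    One commonly cited IE workaround, offered here as an unverified sketch: older IE versions only hit-test an element above an image if the element paints a background, so giving each empty hotspot div a transparent background image can let it receive mouse events without hiding anything inside it.

    ```css
    /* hypothetical: a 1x1 transparent gif; a painted background makes IE8
       treat the otherwise-empty divs as mouse targets above the picture */
    #zoneleft, #zoneright {
        background-image: url(images/blank.gif);
    }
    ```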

    Read the article

  • jQuery drag-and-drop script and a problem reading a JSON array

    - by Mac Taylor
    I made a script, exactly like the WordPress widgets page, where you can drag and drop objects. This is my jQuery script:

    ```js
    <script type="text/javascript">
    $(function(){
        $('.widget')
            .each(function(){
                $(this).hover(function(){
                    $(this).find('h4').addClass('collapse');
                }, function(){
                    $(this).find('h4').removeClass('collapse');
                })
                .find('h4').hover(function(){
                    $(this).find('.in-widget-title').css('visibility', 'visible');
                }, function(){
                    $(this).find('.in-widget-title').css('visibility', 'hidden');
                })
                .click(function(){
                    $(this).siblings('.widget-inside').toggle();
                    // Save state on change of collapse state of panel
                    updateWidgetData();
                })
                .end()
                .find('.in-widget-title').css('visibility', 'hidden');
            });

        $('.column').sortable({
            connectWith: '.column',
            handle: 'h4',
            cursor: 'move',
            placeholder: 'placeholder',
            forcePlaceholderSize: true,
            opacity: 0.4,
            start: function(event, ui){
                // Firefox, Safari/Chrome fire the click event after drag is complete; fix for that
                if($.browser.mozilla || $.browser.safari)
                    $(ui.item).find('.widget-inside').toggle();
            },
            stop: function(event, ui){
                ui.item.css({'top':'0','left':'0'});
                // Opera fix
                if(!$.browser.mozilla && !$.browser.safari)
                    updateWidgetData();
            }
        })
        .disableSelection();
    });

    function updateWidgetData(){
        var items=[];
        $('.column').each(function(){
            var columnId=$(this).attr('id');
            $('.widget', this).each(function(i){
                var collapsed=0;
                if($(this).find('.widget-inside').css('display')=="none")
                    collapsed=1;
                // Create Item object for current panel
                var item={
                    id: $(this).attr('id'),
                    collapsed: collapsed,
                    order: i,
                    column: columnId
                };
                // Push item object into items array
                items.push(item);
            });
        });
        // Assign items array to sortorder JSON variable
        var sortorder={ items: items };
        // Pass sortorder variable to server using ajax to save state
        $.post("blocks.php"+"&order="+$.toJSON(sortorder), function(data){
            $('#console').html(data).fadeIn("slow");
        });
    }
    </script>
    ```

    The main part is saving the object order in a table, and this is my PHP part:

    ```php
    function stripslashes_deep($value)
    {
        $value = is_array($value) ? array_map('stripslashes_deep', $value) : stripslashes($value);
        return $value;
    }

    $order = $_GET['order'];
    $order = sql_quote($order);
    if(empty($order)){
        echo "Invalid data";
        exit;
    }

    global $db, $prefix;

    if (get_magic_quotes_gpc()) {
        $_POST    = array_map('stripslashes_deep', $_POST);
        $_GET     = array_map('stripslashes_deep', $_GET);
        $_COOKIE  = array_map('stripslashes_deep', $_COOKIE);
        $_REQUEST = array_map('stripslashes_deep', $_REQUEST);
    }

    $data = json_decode($order);
    foreach($newdata->items as $item)
    {
        // Extract column number for panel
        $col_id = preg_replace('/[^\d\s]/', '', $item->column);
        // Extract id of the panel
        $widget_id = preg_replace('/[^\d\s]/', '', $item->id);
        $sql = "UPDATE blocks_tbl SET bposition='$col_id', weight='".$item->order."' WHERE id='".$widget_id."'";
        mysql_query($sql) or die('Error updating widget DB');
    }
    print_r($order);
    ```

    Now, for example, the output is this:

        items\":[{\"id\":\"item26\",\"collapsed\":1,\"order\":0,\"column\":\"c\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":1,\"column\":\"i\"},{\"id\":\"item1\",\"collapsed\":1,\"order\":2,\"column\":\"i\"},{\"id\":\"item3\",\"collapsed\":1,\"order\":3,\"column\":\"i\"},{\"id\":\"item16\",\"collapsed\":1,\"order\":4,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":5,\"column\":\"i\"},{\"id\":\"item6\",\"collapsed\":1,\"order\":6,\"column\":\"i\"},{\"id\":\"item17\",\"collapsed\":1,\"order\":7,\"column\":\"i\"},{\"id\":\"item19\",\"collapsed\":1,\"order\":8,\"column\":\"i\"},{\"id\":\"item10\",\"collapsed\":1,\"order\":9,\"column\":\"i\"},{\"id\":\"item11\",\"collapsed\":1,\"order\":10,\"column\":\"i\"},{\"id\":\"item0\",\"collapsed\":1,\"order\":0,\"column\":\"l\"},{\"id\":\"item5\",\"collapsed\":1,\"order\":1,\"column\":\"l\"},{\"id\":\"item8\",\"collapsed\":1,\"order\":2,\"column\":\"l\"},{\"id\":\"item13\",\"collapsed\":1,\"order\":3,\"column\":\"l\"},{\"id\":\"item21\",\"collapsed\":1,\"order\":4,\"column\":\"l\"},{\"id\":\"item28\",\"collapsed\":1,\"order\":5,\"column\":\"l\"},{\"id\":\"item7\",\"collapsed\":1,\"order\":0,\"column\":\"r\"},{\"id\":\"item20\",\"collapsed\":1,\"order\":1,\"column\":\"r\"},{\"id\":\"item15\",\"collapsed\":1,\"order\":2,\"column\":\"r\"},{\"id\":\"item18\",\"collapsed\":1,\"order\":3,\"column\":\"r\"},{\"id\":\"item14\",\"collapsed\":1,\"order\":4,\"column\":\"r\"}]}

    The question is: how can I find out column_id or order? I'm a little bit confused.
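    For the closing question, a minimal sketch of reading the decoded structure. Two things in the posted code are worth noticing: json_decode's result is stored in $data but the loop iterates $newdata, and the column ids are letters ("c", "i", "l", "r"), so the digits-only preg_replace on them yields an empty string.

    ```php
    $data = json_decode($order);          // keep one variable name throughout
    foreach ($data->items as $item) {
        $column_id = $item->column;       // "c", "i", "l" or "r": use it as-is
        $weight    = (int)$item->order;
        $widget_id = preg_replace('/[^\d]/', '', $item->id);   // "item26" -> "26"
        // ... UPDATE blocks_tbl SET bposition='$column_id', weight='$weight' ...
    }
    ```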

    Read the article

  • IUSR vs. Application Pool credentials

    - by jlew
    I have an IIS7/ASP.NET application running with the following configuration:

    - Anonymous authentication (IUSR).
    - Application pool running as a domain account.

    If IUSR is denied the "log on locally" right, then it appears that ASPX pages will still render their HTML, but static content such as images will not be delivered. I'm wondering what the technical reason for this is. If IUSR is "broken", why will a request for an ASPX page be passed down the pipeline and executed, while IIS refuses to serve an image in the same directory?

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by rbeier
    Hi,

    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool, never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code: multiple sites point at the same physical directory, and the only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels: as the root of the site, and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
    - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard

    Read the article

  • Any new Sony RAID laptops coming after March 2010?

    - by Simon
    My current Sony laptop has RAID 0 (don't worry, I back it up 100% every day). I have 2 × 320 GB drives giving me 640 GB. I want my next laptop to be RAID too, but with an SSD + 500 GB drive. Unfortunately Sony's current line seems to have abandoned two-disk configurations, and I really don't want to go to a slower system than I have now. Anybody know if Sony has any RAID laptops (16" or larger) in the pipeline, or do I have to switch to Dell or something else?

    Read the article

  • Apache mod_proxy vs mod_rewrite

    - by Scott
    What is the difference between using mod_proxy and mod_rewrite? I have a requirement to send certain URL patterns through to Tomcat, which runs on the same host but under port 8080. I know this is something for mod_proxy, but I'm wondering why I can't just use mod_rewrite, or what the difference is. It probably has to do with reverse proxying, and also with where in the pipeline the request gets handled? Thanks.
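    A minimal sketch of the two approaches (paths hypothetical). The practical difference: ProxyPass is a purpose-built reverse proxy directive (with ProxyPassReverse to fix up the backend's redirect headers), while a RewriteRule with the [P] flag simply hands the rewritten URL to mod_proxy anyway, so mod_proxy must be loaded in either case.

    ```apache
    # mod_proxy: forward a URL prefix to Tomcat on the same host
    ProxyPass        /app/ http://localhost:8080/app/
    ProxyPassReverse /app/ http://localhost:8080/app/

    # mod_rewrite: same effect, useful when the URL needs pattern rewriting first
    RewriteEngine On
    RewriteRule ^/app/(.*)$ http://localhost:8080/app/$1 [P]
    ```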

    Read the article

  • Is it important to reboot Linux after a kernel update?

    - by lfaraone
    I have a few production Fedora and Debian webservers that host our sites as well as user shell accounts (used for git VCS work, some screen+irssi sessions, etc.). Occasionally a new kernel update will come down the pipeline in yum/apt-get, and I was wondering if most of the fixes are severe enough to warrant a reboot, or if I can apply the fixes sans reboot. Our main development server currently has 213 days of uptime, and I wasn't sure if it is insecure to run such an old kernel.

    Read the article

  • What is the best IIS tracing tool you have used?

    - by Vivek
    I have spent the majority of my career using and troubleshooting the IIS web server. In my opinion, the best thing that has happened to a web admin is FRT (Failed Request Tracing) in IIS 7.0. I have used Event Tracing for Windows as well, and FRT is just as helpful. Is there any other tracing tool that can give such a good, in-depth understanding of request flow through the pipeline?

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
What would be the advantages of such a complex classification model?

- You can satisfy very granular rights requirements: for example, only an authorized engineering group 1 researcher can create a top secret report for internal access, and their role will be reviewed on a very frequent basis.
- Your business may have very complex rights requirements, and mapping these directly to IRM may seem an obvious exercise.

The disadvantages of such a classification model are significant:

- Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
- From an end user's perspective, life will be very confusing. Imagine a user with rights in just 6 of these contexts: they may be able to print content from one but not another, and edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
- Increased synchronization complexity. Imagine a user who, after 3 years in the company, ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all their offline rights.
- Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two: an optimum number of contexts. Too many contexts are unmanageable; too few do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand the business requirements for securing access to confidential information. Some customers I have worked with can tell me exactly which documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or who should have access. Therefore you need to know how to ask the business the right questions, because the answers are what help you define a context. First ask these questions about a set of documents:

- What is the topic?
- Who are the legitimate contributors on this topic?
- Who is the authorized readership?

If the answer to any one of these is significantly different, then the documents probably merit a separate context; the sketch below captures this rule of thumb.
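As a rough illustration only, the rule of thumb could be written down as a simple check. Everything here (the function name, the way document sets are described) is hypothetical and is not part of Oracle IRM:

def needs_separate_context(doc_set_a, doc_set_b):
    """Return True if two sets of documents probably merit separate contexts.

    Each document set is described by answers to the three questions:
    'topic', 'contributors' and 'readers'. Real answers are rarely a clean
    match or mismatch; treat this as a starting point for the discussion.
    """
    for question in ("topic", "contributors", "readers"):
        if doc_set_a[question] != doc_set_b[question]:
            return True   # a significantly different answer merits a new context
    return False          # same answers: prefer one broader context

# Example: research reports and their marketing summaries share a topic but
# differ in contributors and readership, so they merit separate contexts.
research = {"topic": "fuel cells", "contributors": {"researchers"}, "readers": {"engineering"}}
summary  = {"topic": "fuel cells", "contributors": {"marketing"},   "readers": {"all staff"}}
print(needs_separate_context(research, summary))   # True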
Remember that sealed documents are inherently secure and as such cannot leak to your competitors, so a document sealed to a broad context is better than one not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on: you can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it.

It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling the technology out wider across the business.

Deciding on the use of roles in the context

Once you have decided on that initial use case and the context to create, let's look at the details you need to decide upon. For each context, identify the following.

Administrative roles:

- Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase; they are usually the person with the most at risk when sensitive information is lost.
- Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
- Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document-related roles:

- Contributors: the people who create and edit documents in this context.
- Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
- Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM, and the sketch below shows one way to capture a context design on paper before creating it on the server.
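A minimal planning sketch in plain Python: the class, its fields and the feature rights are all hypothetical stand-ins for the worksheet you might fill in before configuring Oracle IRM, not the Oracle IRM API itself. The feature rights per role anticipate the next section:

from dataclasses import dataclass, field

@dataclass
class ContextPlan:
    name: str
    business_owner: str          # decides who may or may not see content
    point_of_contact: str        # handles requests for access
    administrator: str           # enacts the business owner's decisions
    contributors: set = field(default_factory=set)  # create and edit documents
    reviewers: set = field(default_factory=set)     # review, but cannot reseal
    readers: set = field(default_factory=set)       # consume documents
    # Illustrative feature rights per role: start permissive, tighten later.
    features: dict = field(default_factory=lambda: {
        "contributors": {"open", "edit", "print", "seal"},
        "reviewers":    {"open", "edit"},
        "readers":      {"open", "print"},
    })

plan = ContextPlan(
    name="Top Secret - Engineering Group 1 - Research",
    business_owner="VP Engineering",
    point_of_contact="Engineering admin team",
    administrator="IT security",
    contributors={"research engineers"},
    readers={"engineering management"},
)
print(sorted(plan.features["readers"]))   # ['open', 'print']

Walking a worksheet like this past the business owner and point of contact before touching the server is a cheap way to expose disagreements about who should hold each role.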
Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle for our first context is the permissions people will have over sealed documents. First, think about why you are protecting the documents in the first place: to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to content, but add too many restrictions and authorized users will often find ways to circumvent the technology and end up distributing unprotected originals.

Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security later. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity.

Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for granting appropriate access to people with legitimate needs will be rewarded with acceptance from the user community. These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to grant print rights:

- Oracle IRM sealed documents can contain watermarks that expose information about the user, the time and location of access, and the classification of the document. This information resides in the printed copy, making it easier to trace who printed it.
- Printed documents are slower to distribute than their digital counterparts, so time-sensitive information in printed format may present a lower risk.
- Print activity is audited, so you can monitor and react to users abusing their print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases; the trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers the important questions you need to answer to get your IRM service deployed and successfully protecting your most sensitive information.

