Search Results

Search found 91599 results on 3664 pages for 'user manual'.


  • best way to avoid sql injection

    - by aauser
    I have a domain model like this: 1) User. Every user has many cities: @OneToMany(targetEntity=adv.domain.City.class...) 2) City. Every city has many districts: @OneToMany(targetEntity=adv.domain.Distinct.class) 3) Distinct. My goal is to delete a district when the user presses the delete button in the browser. The controller gets the id of the district and passes it to the business layer, where the method DistinctService.deleteDistinct(Long distinctId) delegates the deletion to the DAO layer. So my question is where to put the security restrictions and what the best way is to accomplish that. I want to be sure that I only delete a district belonging to the real user, i.e. the user who owns the city that owns the district, so that nobody except the owner can delete a district using a simple URL like localhost/deleteDistinct/5. I can get the user from the HttpSession in my controller and pass it to the business layer. After that I can load all cities of this user and iterate over them to check that city.id == distinct.city_id before deleting the district, but that seems rather clumsy to me. Alternatively I can write a SQL query like this: delete from t_distinct where t_distinct.city_id in (select t_city.id from t_city left join t_user on t_user.id = t_city.owner_id where t_user.id = ?) and t_distinct.id = ? So what is the best practice for adding restrictions like this? I'm using Hibernate, Spring and Spring MVC, by the way. Thank you
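
    A common way to express that ownership check is to fold it into the query itself, so the DAO only ever deletes a district that belongs to the calling user. Below is a minimal sketch of such a DAO method, assuming Hibernate entities named Distinct, City and User wired up as described above, with City having an "owner" many-to-one; the class, method and exception names are illustrative, not taken from the original code (swap AccessDeniedException for your own exception type if you are not using Spring Security):

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.springframework.security.access.AccessDeniedException;

    public class DistinctDaoImpl {

        private final SessionFactory sessionFactory;

        public DistinctDaoImpl(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public void deleteDistinct(Long distinctId, Long ownerId) {
            Session session = sessionFactory.getCurrentSession();

            // Load the district only if its city is owned by the given user.
            Object district = session.createQuery(
                    "from Distinct d where d.id = :distinctId and d.city.owner.id = :ownerId")
                .setParameter("distinctId", distinctId)
                .setParameter("ownerId", ownerId)
                .uniqueResult();

            if (district == null) {
                // Either the district does not exist or it belongs to someone else.
                throw new AccessDeniedException("District " + distinctId + " is not owned by user " + ownerId);
            }
            session.delete(district);
        }
    }

    The controller still pulls the logged-in user's id from the HttpSession; the difference is that the ownership rule lives in a single query instead of being re-checked by iterating over the user's cities.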

    Read the article

  • C++: Help with cin difference between Linux and Windows

    - by Krashman5k
    I have a Win32 console program that I wrote and it works fine. The program takes input from the user and performs some calculations and displays the output - standard stuff. For fun, I am trying to get the program to work on my Fedora box but I am running into an issue with clearing cin when the user inputs something that does not match my variable type. Here is the code in question: void CParameter::setPrincipal() { double principal = 0.0; cout << endl << "Please enter the loan principal: "; cin >> principal; while(principal <= 0) { if (cin.fail()) { cin.clear(); cin.ignore(INT_MAX, '\n'); } else { cout << endl << "Plese enter a number greater than zero. Please try again." << endl; cin >> principal; } } m_Parameter = principal; } This code works in Windows. For example, if the user tries to enter a char data type (versus double) then the program informs the user of the error, resets cin, and allows the user another opportunity to enter a valid value. When I move this code to Fedora, it compiles fine. When I run the program and enter an invalid data type, the while loop never breaks to allow the user to change the input. My questions are; how do I clear cin when invalid data is inputted in the Fedora environment? Also, how should I write this code so it will work in both environments (Windows & Linux)? Thanks in advance for your help!
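
    For what it's worth, the usual portable pattern is to test the extraction itself in the loop condition and to flush the bad line with std::numeric_limits<std::streamsize>::max() rather than INT_MAX. A minimal sketch (a free function rather than the original member function, so the names here are illustrative):

    #include <iostream>
    #include <limits>

    // Keep prompting until the user enters a number greater than zero.
    double readPositiveDouble(const char* prompt) {
        double value = 0.0;
        std::cout << prompt;
        while (!(std::cin >> value) || value <= 0.0) {
            std::cin.clear();                                                    // reset failbit after a bad extraction
            std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');  // discard the rest of the line
            std::cout << "Please enter a number greater than zero: ";
        }
        return value;
    }

    int main() {
        double principal = readPositiveDouble("Please enter the loan principal: ");
        std::cout << "Principal: " << principal << '\n';
        return 0;
    }

    Checking the extraction in the loop condition also avoids the original structure, where the re-prompt and re-read only happen on the else branch of the fail check.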

    Read the article

  • Skip subdirectory in python import

    - by jstaab
    Ok, so I'm trying to change this: app/ - lib.py - models.py - blah.py Into this: app/ - __init__.py - lib.py - models/ - __init__.py - user.py - account.py - banana.py - blah.py And still be able to import my models using from app.models import User rather than having to change it to from app.models.user import User all over the place. Basically, I want everything to treat the package as a single module, but be able to navigate the code in separate files for development ease. The reason I can't do something like add for file in __all__: from file import * into init.py is I have circular references between the model files. A fix I don't want is to import those models from within the functions that use them. But that's super ugly. Let me give you an example: user.py ... from app.models import Banana ... banana.py ... from app.models import User ... I wrote a quick pre-processing script that grabs all the files, re-writes them to put imports at the top, and puts it into models.py, but that's hardly an improvement, since now my stack traces don't show the line number I actually need to change. Any ideas? I always though init was probably magical but now that I dig into it, I can't find anything that lets me provide myself this really simple convenience.
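
    One pattern that often comes up for this layout is to do the re-exports explicitly in models/__init__.py and to break the cycle by importing sibling modules rather than names from the package. A rough sketch using the file names from the question; this works as long as the classes only need each other inside methods, not at class-definition time:

    # app/models/__init__.py -- re-export the classes so `from app.models import User` keeps working
    from app.models.user import User
    from app.models.account import Account
    from app.models.banana import Banana

    __all__ = ["User", "Account", "Banana"]

    # app/models/banana.py -- a plain `import` of the sibling module sidesteps the circular import,
    # because only the top-level `app` name has to exist at import time
    import app.models.user

    class Banana(object):
        def owner(self):
            # the attribute chain is resolved at call time, after both modules have finished loading
            return app.models.user.User()

    user.py mirrors banana.py with `import app.models.banana`. Stack traces keep pointing at the real files, which was the problem with the pre-processing script.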

    Read the article

  • What is a good solution to log the deletion of a row in MySQL?

    - by hobodave
    Background I am currently logging deletion of rows from my tickets table at the application level. When a user deletes a ticket the following SQL is executed: INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message) VALUES (9, 4, 'WARN', NOW(), "TICKET: David A. deleted ticket #6 from Foo"); Please do not offer schema suggestions for the alert_log table. Fields: user_id - User id of the logged in user performing the deletion priority - Always 4 priorityName - Always 'WARN' timestamp - Always NOW() message - Format: "[NAMESPACE]: [FullName] deleted ticket #[TicketId] from [CompanyName]" NAMESPACE - Always TICKET FullName - Full name of user identified by user_id above TicketId - Primary key ID of the ticket being deleted CompanyName - Ticket has a Company via tickets.company_id Situation/Questions Obviously this solution does not work if a ticket is deleted manually from the mysql command line client. However, now I need to. The issues I'm having are as follows: Should I use a PROCEDURE, FUNCTION, or TRIGGER? -- Analysis: TRIGGER - I don't think this will work because I can't pass parameters to it, and it would trigger when my application deleted the row too. PROCEDURE or FUNCTION - Not sure. Should I return the number of deleted rows? If so, that would require a FUNCTION right? How should I account for the absence of a logged in user? -- Possibilities: Using either a PROC or FUNC, require the invoker to pass in a valid user_id Require the user to pass in a string with the name Use the CURRENT_USER - meh Hard code the FullName to just be "Database Administrator" Could the name be an optional parameter? I'm rather green when it comes to sprocs. Assuming I went with the PROC/FUNC approach, is it possible to outright restrict regular DELETE calls to this table, yet still allow users to call this PROC/FUNC to do the deletion for them? Ideally the solution is usable by my application as well, so that my code is DRY.
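
    On the "return the number of deleted rows" point, a PROCEDURE with an OUT parameter works just as well as a FUNCTION. Here is a hedged sketch of that route; the alert_log columns come from the question, but the tickets/companies table and column names are assumptions, and the full name falls back to 'Database Administrator' when the caller passes NULL:

    DELIMITER //

    CREATE PROCEDURE delete_ticket(
        IN  p_user_id   INT,
        IN  p_full_name VARCHAR(255),   -- pass NULL to log as 'Database Administrator'
        IN  p_ticket_id INT,
        OUT p_deleted   INT
    )
    BEGIN
        DECLARE v_company VARCHAR(255);

        -- Assumed schema: tickets(id, company_id, ...) and companies(id, name)
        SELECT c.name INTO v_company
          FROM tickets t
          JOIN companies c ON c.id = t.company_id
         WHERE t.id = p_ticket_id;

        DELETE FROM tickets WHERE id = p_ticket_id;
        SET p_deleted = ROW_COUNT();

        IF p_deleted > 0 THEN
            INSERT INTO alert_log (user_id, priority, priorityName, timestamp, message)
            VALUES (p_user_id, 4, 'WARN', NOW(),
                    CONCAT('TICKET: ', COALESCE(p_full_name, 'Database Administrator'),
                           ' deleted ticket #', p_ticket_id, ' from ', v_company));
        END IF;
    END //

    DELIMITER ;

    -- From the mysql client:  CALL delete_ticket(9, NULL, 6, @n); SELECT @n;

    To "outright restrict" ad-hoc deletes, the usual approach is to create the procedure with SQL SECURITY DEFINER and revoke the DELETE privilege on the tickets table from regular accounts, so the procedure becomes the only path that can remove rows; the application can call the same procedure to stay DRY.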

    Read the article

  • How to use the values from session variables in jsp pages that got saved using @Scope("session") in the mvc controllers

    - by droidsites
    Doing a web site using spring mvc. I added a SignupController to handle all the sign up related requests. Once user signup I am adding that to a session using @Scope("session"). Below is the SignupController code, SignupController.java @Controller @Scope("session") public class SignupController { @Autowired SignupServiceInter signUpService; private static final Logger logger = Logger.getLogger(SignupController.class); private String sessionUser; @RequestMapping("/SignupService") public ModelAndView signUp(@RequestParam("userid") String userId, @RequestParam("password") String password,@RequestParam("mailid") String emailId){ logger.debug(" userId:"+userId+"::Password::"+password+"::"); String signupResult; try { signUpService.registerUser(userId, password,emailId); sessionUser = userId; //adding the sign up user to the session return new ModelAndView("userHomePage","loginResult","Success"); //Navigate to user Home page if everything goes right } catch (UserExistsException e) { signupResult = e.toString(); return new ModelAndView("signUp","loginResult", signupResult); //Navigate to signUp page back if user does not exist } } } I am using "sessionUser" variable to store the signed up User Id. My understanding is that when I use @Scope("session") for the controller all the instance variables will added to HttpSession. So by that understanding I tried to access this "SessionUser" in userHomePage.jsp as, userHomepage.jsp Welcome to <%=session.getAttribute("sessionUser")%> But it throws null. So my question is how to use the values from session variables in jsp pages that got saved using @Scope("session") in the mvc controllers. Note: My work around is that pass that signed User Id to jsp page through ModelAndView, but it seems passing the value like these among the pages takes me back to managing state among pages using QueryStrings days.
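
    Note that @Scope("session") stores the controller bean itself in the session under a generated attribute name; it does not publish the bean's fields as individual session attributes, which is why session.getAttribute("sessionUser") comes back null. A minimal sketch of the usual fix is to set the attribute on the HttpSession explicitly (Spring injects it as a handler-method parameter; add `import javax.servlet.http.HttpSession`):

    @RequestMapping("/SignupService")
    public ModelAndView signUp(@RequestParam("userid") String userId,
                               @RequestParam("password") String password,
                               @RequestParam("mailid") String emailId,
                               HttpSession session) {
        try {
            signUpService.registerUser(userId, password, emailId);
            session.setAttribute("sessionUser", userId);   // now <%= session.getAttribute("sessionUser") %>
                                                           // (or ${sessionUser} with EL) works in userHomePage.jsp
            return new ModelAndView("userHomePage", "loginResult", "Success");
        } catch (UserExistsException e) {
            return new ModelAndView("signUp", "loginResult", e.toString());
        }
    }

    With this in place the @Scope("session") annotation and the sessionUser field on the controller are no longer needed for the JSP to see who is logged in.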

    Read the article

  • rails route question

    - by badnaam
    I am trying to build a search feature which at a high level works like this. 1 - I have a Search model, a controller with a search_set action, and search views/partials to render the search. 2 - On the home page a search form is loaded with either an empty search object or a search object initialized from session[:search] (which contains the user's search preferences: zip code, proximity, sort order, per page, etc.). This form posts (:put) to search_set. 3 - When a registered user performs a search, the params of the search form are collected and a search record is saved against that user. If an unregistered user performs a search, the search_set action simply stores the params in session[:search]. In either case the search is executed with the given params and the results are displayed. At this point the URL in the location bar is something like http://localhost:3000/searches/search_set?stype=1. If the user now simply hits Enter in the location bar, I get an error that says "No action responded to show". I am guessing this is because the URL contains search_set, which is routed as a PUT, so the plain GET falls through to show, and my search_show (:get) action (which simply re-runs the search from the session or from the saved record) never gets called. How can I route a user who hits Enter in the location bar to a GET action? If this does not explain the problem, please let me know and I can share more details/code. Thanks!
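
    One way to handle this, assuming the classic map.resources router of that Rails generation, is to catch plain GETs for /searches/search_set and send them to the action that re-runs the stored search, so hitting Enter in the location bar no longer falls through to the non-existent show action. A rough sketch (controller and action names taken from the question):

    # config/routes.rb
    # Keep this ABOVE map.resources so it wins over the default GET /searches/:id => show route.
    map.connect 'searches/search_set',
                :controller => 'searches', :action => 'search_show',
                :conditions => { :method => :get }

    map.resources :searches, :collection => { :search_set  => :put,
                                              :search_show => :get }

    The form keeps posting (:put) to search_set; anything that arrives at the same URL as a GET is quietly rerouted to search_show, which rebuilds the search from session[:search] or from the user's saved record.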

    Read the article

  • Why are some classes created on the fly and others aren't in CakePHP 1.2.7?

    - by JoseMarmolejos
    I have the following model classes: class User extends AppModel { var $name= 'User'; var $belongsTo=array('SellerType' => array('className' => 'SellerType'), 'State' => array('className' => 'State'), 'Country' => array('className' => 'Country'), 'AdvertMethod' => array('className' => 'AdvertMethod'), 'UserType' => array('className' => 'UserType')); var $hasMany = array('UserQuery' => array('className' => 'UserQuery'));} And: class UserQuery extends AppModel { var $name = 'UserQuery'; var $belongsTo = array('User', 'ResidenceType', 'HomeType');} Everything works fine with the user class and all its associations, but the UserQuery class is being completely ignored by the orm (table name user_queries and the generated queries do cast it as UserQuery. Another weird thing is that if I delete the code inside the User class I get an error, but if I do the same for the UserQuery class I get no errors. So my question is why does cakephp generate a class on the fly for the UserQuery and ignores my class, and why doesn't it generate a class on the fly for the User as well ?
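
    For what it's worth, one common cause of this symptom in CakePHP 1.2 is a file-naming mismatch: if Cake cannot find a model file that matches its conventions, it silently builds a stand-in AppModel for that alias at runtime, which is exactly the "created on the fly" behaviour described (and also why deleting the class body produces no error). Worth double-checking that the file looks like this sketch:

    <?php
    // app/models/user_query.php  -- underscored file name matching the UserQuery class
    class UserQuery extends AppModel {
        var $name = 'UserQuery';
        var $useTable = 'user_queries';   // optional: Cake infers this from the class name
        var $belongsTo = array('User', 'ResidenceType', 'HomeType');
    }
    ?>

    If the file is named, say, userquery.php or UserQuery.php, Cake 1.2 ignores it and falls back to the generic model, while the hand-written User class in user.php is still picked up, which would explain why the two behave differently.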

    Read the article

  • How to map this class in NHibernate (not FluentNHibernate)?

    - by JMSA
    Suppose I have a database like this: This is set up to give role-wise menu permissions. Please note that, User-table has no direct relationship with Permission-table. Then how should I map this class against the database-tables? class User { public int ID { get; set; } public string Name { get; set; } public string Username { get; set; } public string Password { get; set; } public bool? IsActive { get; set; } public IList<Role> RoleItems { get; set; } public IList<Permission> PermissionItems { get; set; } public IList<string> MenuItemKeys { get; set; } } This means, (1) Every user has some Roles. (2) Every user has some Permissions (depending on to Roles). (3) Every user has some permitted MenuItemKeys (according to Permissions). How should my User.hbm.xml look like?
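
    Since the question is only about the shape of the mapping, here is a rough User.hbm.xml sketch. The table, column and assembly names are assumptions (the diagram isn't reproduced here), and PermissionItems / MenuItemKeys are left unmapped because they are reachable only through roles, so they are usually easier to fill with an HQL query or in the repository than to map as direct collections on User:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Sketch only: table, column and assembly names are assumed. -->
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                       assembly="MyApp.Domain" namespace="MyApp.Domain">

      <class name="User" table="Users">
        <id name="ID" column="UserId">
          <generator class="native" />
        </id>
        <property name="Name" />
        <property name="Username" />
        <property name="Password" />
        <property name="IsActive" />

        <!-- User *-* Role through a UserRoles join table -->
        <bag name="RoleItems" table="UserRoles" lazy="true">
          <key column="UserId" />
          <many-to-many class="Role" column="RoleId" />
        </bag>
      </class>

    </hibernate-mapping>

    Role then gets its own mapping with a Role-to-Permission bag, and Permission one with the permitted menu item keys; User.PermissionItems and User.MenuItemKeys can be populated by a query that walks User to Role to Permission.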

    Read the article

  • Rails active record association problem

    - by Harm de Wit
    Hello, I'm new at active record association in rails so i don't know how to solve the following problem: I have a tables called 'meetings' and 'users'. I have correctly associated these two together by making a table 'participants' and set the following association statements: class Meeting < ActiveRecord::Base has_many :participants, :dependent => :destroy has_many :users, :through => :participants and class Participant < ActiveRecord::Base belongs_to :meeting belongs_to :user and the last model class User < ActiveRecord::Base has_many :participants, :dependent => :destroy At this point all is going well and i can access the user values of attending participants of a specific meeting by calling @meeting.users in the normal meetingshow.html.erb view. Now i want to make connections between these participants. Therefore i made a model called 'connections' and created the columns of 'meeting_id', 'user_id' and 'connected_user_id'. So these connections are kinda like friendships within a certain meeting. My question is: How can i set the model associations so i can easily control these connections? I would like to see a solution where i could use @meeting.users.each do |user| user.connections.each do |c| <do something> end end I tried this by changing the model of meetings to this: class Meeting < ActiveRecord::Base has_many :participants, :dependent => :destroy has_many :users, :through => :participants has_many :connections, :dependent => :destroy has_many :participating_user_connections, :through => :connections, :source => :user Please, does anyone have a solution/tip how to solve this the rails way?
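
    One way to model the connections, sticking with the column names given (meeting_id, user_id, connected_user_id), is a Connection model with two belongs_to associations pointing at User, plus has_many :connections on both User and Meeting. A sketch in the same old-style syntax as the question:

    class Connection < ActiveRecord::Base
      belongs_to :meeting
      belongs_to :user                                           # the participant who "owns" the connection
      belongs_to :connected_user, :class_name  => 'User',
                                  :foreign_key => 'connected_user_id'
    end

    class User < ActiveRecord::Base
      has_many :participants, :dependent => :destroy
      has_many :connections,  :dependent => :destroy
    end

    class Meeting < ActiveRecord::Base
      has_many :participants, :dependent => :destroy
      has_many :users,       :through   => :participants
      has_many :connections, :dependent => :destroy
    end

    # In the view, scope each user's connections to the meeting at hand:
    # @meeting.users.each do |user|
    #   user.connections.select { |c| c.meeting_id == @meeting.id }.each do |c|
    #     # ... c.connected_user is the other participant ...
    #   end
    # end

    The has_many :through on Meeting mirrors what the question already has; the only addition is treating connected_user_id as a second, explicitly named association onto the users table.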

    Read the article

  • How to convert object to string list?

    - by user1381501
    I want to get two values using a LINQ select query and then convert the result to a list of strings; in other words I am trying to convert a List<User> into a List<string>. The code is below. I get an error on the last line, where I cast the object list to a string list (return returnvalue = (List<string>)userlist;):

    public List<string> GetUserList(string username)
    {
        List<User> UserList = new List<User>();
        List<string> returnvalue = new List<string>();
        try
        {
            string returnstring = string.Empty;
            DataTable dt = Library.Helper.FindUser(username, 200);
            foreach (DataRow dr in dt.Rows)
            {
                User user = new User();
                user.id = dr["ID"].ToString();
                user.name = dr["Name"].ToString();
                UserList.Add(user);
            }
        }
        catch (Exception ex) { }

        List<User> userlist = UserList.Select(a => new User { name = (string)a.name, id = (string)a.id }).ToList();
        return returnvalue = (List<string>)userlist;   // <-- error here
    }
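
    A List<User> can never be cast to a List<string>, which is what that last line is complaining about; if the goal really is a list of strings, one option is to project the rows straight into strings instead of building a List<User> first. A minimal sketch that keeps the question's Library.Helper.FindUser call (the "id|name" format is just an example of combining the two values):

    using System.Collections.Generic;
    using System.Data;
    using System.Linq;

    public static class UserLookup
    {
        public static List<string> GetUserList(string username)
        {
            DataTable dt = Library.Helper.FindUser(username, 200);

            // Project each row directly to the string the caller expects.
            return dt.Rows.Cast<DataRow>()
                     .Select(dr => dr["ID"] + "|" + dr["Name"])
                     .ToList();
        }
    }

    If both values need to stay separate, return the List<User> itself (or a small DTO list) rather than trying to squeeze two fields into one string list.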

    Read the article

  • How to terminate a request in JSP (not the "return;")

    - by Genom
    I am programming a website with JSP. There are pages where the user must be logged in to see them; if they are not logged in, they should see a login form instead. I have seen in PHP that you can write a single include file which checks whether the user is logged in or not: if not, it shows the login form; if the user is logged in, it does nothing. So I use the same structure in my JSPs: header, menus, then an include of checklogin.jsp, then the normal body and footer that a logged-in user should see. This structure is very easy to apply to all pages: I don't have to add the checking logic to every page, I just include checklogin.jsp and the page is secure. My problem is that if the user is not logged in, only the login form and the footer should be shown, i.e. the code should skip the body of the parent page. So I structured checklogin.jsp like this: if the user is not logged in, show the login form and footer and terminate the request. The problem is that I don't know how to terminate the request. If I use "return;" then only checklogin.jsp stops, but the server continues to process the parent page, so the page ends up with two footers (one from the parent page and one from checklogin.jsp). How can I avoid this? (PHP has exit(); for this, by the way.) Thanks for any suggestions!
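
    If moving the check out of the JSPs is an option, a servlet Filter does exactly this job: it either forwards to the login form and stops, or lets the request continue, so there is never a half-rendered parent page or a second footer. A minimal sketch (the class name, session attribute and login path are assumptions):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class LoginFilter implements Filter {

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpSession session = request.getSession(false);

            if (session == null || session.getAttribute("user") == null) {
                // Not logged in: render only the login page (which can include the footer itself).
                request.getRequestDispatcher("/login.jsp").forward(req, res);
                return;   // the protected JSP is never processed, so nothing is rendered twice
            }
            chain.doFilter(req, res);   // logged in: continue to the requested page
        }

        public void init(FilterConfig config) throws ServletException { }

        public void destroy() { }
    }

    // Map it in web.xml with <filter> and <filter-mapping> entries covering the protected URLs.

    This also keeps the login rule in one place (web.xml) instead of inside every page.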

    Read the article

  • WIN32 API question - Looking for answer asap

    - by Lalit_M
    We have developed a ASP.NET web application and has implemented a custom authentication solution using active directory as the credentials store. Our front end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user’s credentials. When we are calling the LogonUser method, we are using the LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. The folder seems to be created when a new user who has never logged on before is logging in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders getting created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable this behavior? I have tried LOGON32_LOGON_BATCH but it was giving an error 1385 in authentication user. I need either of the solution 1) Is there any way to stop the folder generation. 2) What parameter I need to pass this to work? Thanks

    Read the article

  • login script in php using session variables

    - by kracekumar
    ? username: '; } else {
        $user = $_POST['user'];
        $query = "select name from login where name='$user'";
        $result = mysql_query($query) or die(mysql_error());
        $rows = mysql_num_rows($result) or die(mysql_error());
        session_start();
        //echo "results";
        $username = mysql_fetch_array($result) or die(mysql_error());
        $fields = mysql_num_fields($result) or die(mysql_error());
        echo "username:$username[0]";
        $_SESSION['username'] = $username[0];
        $sesion_username = $_SESSION['username'];
        //echo "rows:$rows"." "."fields:$fields";
        echo "" . "username:$sesion_username";
    } ?

    I wanted to create a site where users can log in and upload their details, with the details stored in a database per user name. The problem I am facing is that when two people 'a' and 'b' log in one after the other, the session variable $_SESSION['username'] ends up holding 'b's username, so I can't display 'a's details any more. What I want to achieve is: when a user logs in, store his name and other details in session variables, navigate him accordingly, and display his details.
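
    Each visitor gets their own session, so 'a' and 'b' will not normally overwrite each other; the usual culprits are writing to $_SESSION before session_start(), or looking the user up by something shared instead of the posted credentials. A minimal sketch of a session-based login using mysqli prepared statements instead of the old mysql_* calls (the connection details, a hashed password column and a details.php page are assumptions; get_result() needs the mysqlnd driver):

    <?php
    session_start();                                    // before anything touches $_SESSION

    $mysqli = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');   // assumed credentials

    if (isset($_POST['user'], $_POST['password'])) {
        $stmt = $mysqli->prepare('SELECT name, password FROM login WHERE name = ?');
        $stmt->bind_param('s', $_POST['user']);
        $stmt->execute();
        $row = $stmt->get_result()->fetch_assoc();

        if ($row && password_verify($_POST['password'], $row['password'])) {
            $_SESSION['username'] = $row['name'];       // private to this visitor's session
            header('Location: details.php');            // details.php reads $_SESSION['username']
            exit;                                       // and shows only that user's rows
        }
        echo 'Login failed';
    }

    Every later page then calls session_start() and filters its queries by $_SESSION['username'] (or better, a stored user id), so each logged-in visitor only ever sees their own details.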

    Read the article

  • Access database: need to prevent approving overlapping OT (second try with modified request; not a programmer) [on hold]

    - by user2512764
    Employees sign up on the company website for advance overtime lines. The Access table already holds these overtime signups, so the user does not have to enter the times; approving a line only means filling in the Location field. The table has the fields Employee Name, Date, Start Time, End Time and Location, and every field except Location already has data. I have created a form based on this table; since the table already has most of the information, the user only has to enter a location in the form to approve an overtime line. For example: the user approves overtime for employee 'John' on 7/1/2013 from 0400-0800 by entering a location. Later the user tries to approve another line for John on 7/1/2013 from 0600-0900 (again, only the location is being entered; the date and times are already in the table). Because this overlaps a previously approved line, the program needs to check the employee name, date and times of already-approved (location filled in) lines, and if there is a conflict, the location on the current record should be cleared and the form should move to the next record. I hope I have explained it in an understandable way. Thank you.
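
    Whatever the exact table design, the heart of the check is the standard interval-overlap test: an already-approved line for the same employee and date conflicts with the one being approved when its start time is before the new end time and its end time is after the new start time. A hedged Access-SQL sketch that a form event (for example Before Update) could run; every table, field and form name here is a guess, since the real schema isn't shown:

    SELECT COUNT(*) AS Conflicts
    FROM OvertimeSignups AS o
    WHERE o.EmployeeName = [Forms]![frmApprove]![EmployeeName]
      AND o.WorkDate     = [Forms]![frmApprove]![WorkDate]
      AND o.Location Is Not Null                          -- already approved (location filled in)
      AND o.StartTime < [Forms]![frmApprove]![EndTime]
      AND o.EndTime   > [Forms]![frmApprove]![StartTime]
      AND o.ID        <> [Forms]![frmApprove]![ID];       -- don't compare the record with itself

    If Conflicts comes back greater than zero, the event code clears the Location control, cancels the update and moves to the next record, which matches the behaviour described above.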

    Read the article

  • jQuery validation plugin: valid() does not work with remote validation ?

    - by a-dilla
    I got started by following this awesome tutorial, but wanted to do the validation on on keyup and place my errors somewhere else. The remote validation shows its own error message at the appropriate times, making me think I had it working. But if I ask specifically if a field with remote validation is valid, it says no, actually, its not. In application.js I have this... $("#new_user").validate({ rules: { "user[login]": {required: true, minlength: 3, remote: "/live_validations/check_login"}, }, messages: { "user[login]": {required: " ", minlength: " ", remote: " "}, } }); $("#user_login").keyup(function(){ if($(this).valid()){ $(this).siblings(".feedback").html("0"); }else{ $(this).siblings(".feedback").html("1"); } }) And then this in the rails app... def check_login @user = User.find_by_login(params[:user][:login]) respond_to do |format| format.json { render :json => @user ? "false" : "true" } end end I think that my problem might have everything to do with this ticket over at jQuery, and tried to implement that code, but, being new to jQuery, it's all a bit over my head. When I say bit, I mean way way. Any ideas to fix it, or a new way to look at it, would be a big help.
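
    Part of the problem is that remote rules are asynchronous: valid() called from your own keyup handler reports the field's state before the AJAX response has arrived, which is the behaviour discussed in that ticket. One workaround is to let the plugin drive the feedback through its own callbacks instead of polling valid(). A rough sketch reusing the selectors from the question (callback wiring may need adjusting for your plugin version):

    $("#new_user").validate({
      rules: {
        "user[login]": { required: true, minlength: 3, remote: "/live_validations/check_login" }
      },
      messages: {
        "user[login]": { required: " ", minlength: " ", remote: " " }
      },
      // Runs once a field has passed every rule, including the remote one.
      success: function(label) {
        $("#user_login").siblings(".feedback").html("0");
      },
      // Runs whenever errors are (re)displayed, i.e. some rule failed or is still pending.
      showErrors: function(errorMap, errorList) {
        if (errorList.length) {
          $("#user_login").siblings(".feedback").html("1");
        }
        this.defaultShowErrors();
      }
    });

    The Rails side can stay exactly as it is; the only change is that the 0/1 feedback is written when the validator actually knows the answer, not on every keystroke.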

    Read the article

  • Invalid SSH key error in juju when using it with MAAS

    - by Captain T
    This is the output of juju from a clean install with 2 nodes all running 12.04 juju bootstrap - finishes with no errors and allocates the machine to the user but still no joy after juju environment-destroy and rebuild with different users and different nodes. root@cloudcontrol:/storage# juju -v status 2012-06-07 11:19:47,602 DEBUG Initializing juju status runtime 2012-06-07 11:19:47,621 INFO Connecting to environment... 2012-06-07 11:19:47,905 DEBUG Connecting to environment using node-386077143930... 2012-06-07 11:19:47,906 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="node-386077143930" remote_port="2181" local_port="57004". The authenticity of host 'node-386077143930 (10.5.5.113)' can't be established. ECDSA key fingerprint is 31:94:89:62:69:83:24:23:5f:02:70:53:93:54:b1:c5. Are you sure you want to continue connecting (yes/no)? yes 2012-06-07 11:19:52,102 ERROR Invalid SSH key 2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@662: Client environment:host.name=cloudcontrol 2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-23-generic 2012-06-07 11:19:52,426:18541(0x7feb13b58700):ZOO_INFO@log_env@671: Client environment:os.version=#36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@679: Client environment:user.name=sysadmin 2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@687: Client environment:user.home=/root 2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@log_env@699: Client environment:user.dir=/storage 2012-06-07 11:19:52,428:18541(0x7feb13b58700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:57004 sessionTimeout=10000 watcher=0x7feb11afc6b0 sessionId=0 sessionPasswd=<null> context=0x2dc7d20 flags=0 2012-06-07 11:19:52,429:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client 2012-06-07 11:19:55,765:18541(0x7feb0e856700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:57004] zk retcode=-4, errno=111(Connection refused): server refused to accept the client I have tried numerous ways of creating the keys with ssh-keygen -t rsa -b 2048, ssh-keygen -t rsa, ssh-keygen, and i have tried adding those to MAAS web config page, but always get the same result. I have added the appropriate public key afterwards to the ~/.ssh/authorized_keys I can also ssh to the node, but as I have not been asked to give it a user name or password or set up any sort of account, I cannot manually ssh into the node. The setup of the node is all handled by maas server. It seems like a simple error of looking at the wrong key or looking in the wrong places, only other suggestions I can find are to destroy the environment and rebuild (but that didn't work umpteen times now) or leave it to build the instance once the node has powered up, but I have left for a few hours, and left overnight to build with no luck.

    Read the article

  • SharePoint 2010 Hosting :: Hiding SharePoint 2010 Ribbon From Anonymous Users

    - by mbridge
    The user interface improvements in SharePoint 2010 as a whole are truly amazing. Microsoft has brought this already impressive product leaps and bounds in terms of accessibility, standards, and usability. One thing you might be aware of is the new and quite useful “ribbon” control that appears by default at the top of every SharePoint 2010 master page. Here’s a sneak peek: You’ll see this ribbon not only in the 2010 web interface, but also throughout the entire family of Office products coming out this year. Even SharePoint Designer 2010 makes use of the ribbon in a very flexible and useful way. Hiding The Ribbon In SharePoint 2010, the ribbon is used almost exclusively for content creation and site administration. It doesn’t make much sense to show the ribbon on a public-facing internet site (in fact, it can really retract from your site’s design when it appears), so you’ll probably want to hide the ribbon when users aren’t logged in. Here’s how it works: <SharePoint:SPSecurityTrimmedControl PermissionsString="ManagePermissions" runat="server">     <div id="s4-ribbonrow" class="s4-pr s4-ribbonrowhidetitle">         <!-- Ribbon code appears here... -->     </div> </SharePoint:SPSecurityTrimmedControl> In your master page, find the SharePoint ribbon by looking for the line of code that begins with <div id=”s4-ribbonrow”>. Place the SPSecurityTrimmedControl code around your ribbon to conditionally hide it based on user permissions. In our example, we’ve hidden the ribbon from any user who doesn’t have the ManagePermissions ability, which is going to be almost any user short of a site administrator. Other Permission Levels You can specify different permission levels for the SPSecurityTrimmedControl, allowing you to configure exactly who can see the SharePoint 2010 ribbon. Basically, this control will hide anything inside of it when users don’t have the specified PermissionString. The available options include: 1. List Permissions - ManageLists - CancelCheckout - AddListItems - EditListItems - DeleteListItems - ViewListItems - ApproveItems - OpenItems - ViewVersionsDeleteVersions - CreateAlerts - ViewFormPages 2. Site Permissions - ManagePermissions - ViewUsageData - ManageSubwebs - ManageWeb - AddAndCustomizePages - ApplyThemeAndBorder - ApplyStyleSheets - CreateGroups - BrowseDirectories - CreateSSCSite - ViewPages - EnumeratePermissions - BrowseUserInfo - ManageAlerts - UseRemoteAPIs - UseClientIntegration - Open - EditMyUserInfo 3. Personal Permissions - ManagePersonalViews - AddDelPrivateWebParts - UpdatePersonalWebParts You can use this control to hide anything in your master page or on related page layouts, so be sure to keep it in mind when you’re trying to hide/show things conditionally based on user permission. The One Catch You may notice that the login control (or welcome control) is actually inside the ribbon by default in SharePoint 2010. You’ll probably want to pull this control out of the ribbon and place it elsewhere on your page. Just look for the line of code that looks like this: <wssuc:Welcome id="IdWelcome" runat="server" EnableViewState=”false”/> Move this code out of the ribbon and into another location within your master page. Save your changes, check in and approve all files, and anonymous users will never know your site is built on SharePoint 2010!

    Read the article

  • Computer Visionaries 2014 Kinect Hackathon

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/08/computer-visionaries-2014-kinect-hackathon.aspxA big thank you to Computer Vision Dallas and Microsoft for putting together the Computer Visionaries 2014 Kinect Hackathon that took place July 18th and 19th 2014.  Our team had a great time and learned a lot from the Kinect MVP's and Microsoft team.  The Dallas Entrepreneur Center was a fantastic venue. In total, 114 people showed up to form 15 teams. Burger ITS & Friends team members with Ben Lower:  Shawn Weisfeld, Teresa Burger, Robert Burger, Harold Pulcher, Taylor Woolley, Cori Drew (not pictured), and Katlyn Drew (not pictured) We arrived Friday after a long day of work/driving.  Originally, our idea was to make a learning game for kids.  It was intended to be multi-simultaneous players dragging and dropping tiles into a canvas area for kids around 5 years old. We quickly learned that we were limited to two simultaneous players. After working on the game for the rest of the evening and into the next morning we decided that a fast multi-player game with hand gestures was not going to happen without going beyond what was provided with the API. If we were going to have something to show, it was time to switch gears. The next idea on the table was the Photo Anywhere Kiosk. The user can use voice and hand gestures to pick a place they would like to be.  After the user says a place (or anything they want) and then the word "search", the app uses Bing to display a bunch of images for him/her to choose from. With the use of hand gesture (grab and slide to move back and forth and push/pull to select an image) the user can get the perfect image to pose with. I couldn't get a snippet with the hand but when a the app is in use, a hand shows up to cue the user to use their hand to control it's movement. Once they chose an image, we use the Kinect background removal feature to super impose the user on that image. When they are in the perfect position, they say "save" to save the image. Currently, the image is saved in the images folder on the users account but there are many possibilities such as emailing it, posting to social media, etc.. The competition was great and we were honored to be recognized for third place. Other related posts: http://jasongfox.com/computer-visionaries-2014-incredible-success/ A couple of us are continuing to work on the kid's game and are going to make it a Windows 8 multi-player game without Kinect functionality. Stay tuned for more updates.

    Read the article

  • Single Instance of Child Forms in MDI Applications

    - by Akshay Deep Lamba
    In MDI application we can have multiple forms and can work with multiple forms i.e. MDI childs at a time but while developing applications we don't pay attention to the minute details of memory management. Take this as an example, when we develop application say preferably an MDI application, we have multiple child forms inside one parent form. On MDI parent form we would like to have menu strip and tab strip which in turn calls other forms which build the other parts of the application. This also makes our application looks pretty and eye-catching (not much actually). Now on a first go when a user clicks a menu item or a button on a tab strip an application initialize a new instance of a form and shows it to the user inside the MDI parent, if a user again clicks the same button the application creates another new instance for the form and presents it to the user, this will result in the un-necessary usage of the memory. Therefore, if you wish to have your application to prevent generating new instances of the forms then use the below method which will first check if the the form is visible among the list of all the child forms and then compare their types, if the form types matches with the form we are trying to initialize then the form will get activated or we can say it will be bring to front else it will be initialize and set visible to the user in the MDI parent window. The method we are using: private bool CheckForDuplicateForm(Form newForm) { bool bValue = false; foreach (Form frm in this.MdiChildren) { if (frm.GetType() == newForm.GetType()) { frm.Activate(); bValue = true; } } return bValue; } Usage: First we need to initialize the form using the NEW keyword ReportForm ReportForm = new ReportForm(); We can now check if there is another form present in the MDI parent. Here, we will use the above method to check the presence of the form and set the result in a bool variable as our function return bool value. bool frmPresent = CheckForDuplicateForm(Reportfrm); Once the above check is done then depending on the value received from the method we can set our form. if (frmPresent) return; else if (!frmPresent) { Reportfrm.MdiParent = this; Reportfrm.Show(); } In the end this is the code you will have at you menu item or tab strip click: ReportForm Reportfrm = new ReportForm(); bool frmPresent = CheckForDuplicateForm(Reportfrm); if (frmPresent) return; else if (!frmPresent) { Reportfrm.MdiParent = this; Reportfrm.Show(); }

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices
    STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out, that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team Environments The Deployment Process Rollback and Recovery Development Practices You and Your Team It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: -2008 to present: overall development costs reduced by 40% -Number of programs under development increased by 140% -Development costs per program down 78% -Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40% But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well. 
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans, Get your boss involved too and make sure he/she is bought in to the investment. Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are: What we want What we’ve got Each developer with their own dedicated database environment A single shared “development” environment, used by everyone at once An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine In fact if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question… Separate QA environment used explicitly for manual testing prior to release “We just test on the dev environments, or maybe pre-production” A proper pre-production (or “staging”) box that matches production as closely as possible Hopefully a pre-production box of some sort. But does it match production closely!? A production environment reproducible from source control A production box which has drifted significantly from anything in source control The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application. 
There’s a world if difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have: Dedicated development databases, An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action], QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing, Pre-production. The environment you use to test the production release process, Production. * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single-click from a user. Actions: Get your environments set up and ready, Set access permissions appropriately, Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?” Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring in to pre-prod, Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script, User (generally, the DBA), reviews the script. This often involves manually checking updates against a spreadsheet or similar, Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped), If all working, run the script on production.* * this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. One he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommended sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move in to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like: Immediately restore from backup, Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!) Have fallback environments – for example, using a blue-green deployment pattern. Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups, is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so good to introduce these ideas and the rationale behind them early.   Summary The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning our your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Custom session: Window does not capture full screen area by default. 12.04

    - by juzerali
    I am trying to create a custom session by creating a custom.desktop file in /usr/share/xesessions folder. Remember this is not a gnome or some other session. I have created my own application for this session, which are simple. Case 1 Chrome Browser Contents of custom.desktop file [Desktop Entry] Name=Internet Kiosk Comment=This is an internet kiosk Exec=google-chrome --kiosk TryExec= Icon= Type=Application Issue Chrome browser starts in kiosk mode but does not capture complete screen area. Some area is left at the bottom and right side of the screen. Case 2 Custom pyGTK app (Quickly) Contents of custom.desktop file [Desktop Entry] Name=Custom Kiosk Comment=This is a custom kiosk Exec=~/MyCustomPyGTKApp TryExec= Icon= Type=Application Issue My custom pyGTK app has window.fullScreen() in the code. That means it should open in full screen without the window chrome (and it does under the normal session). But that too, leaves lots of space around it. Need Help Can anyone tell me whats going on here. I think its some issue with borders as pointed out at http://www.instructables.com/id/Setting-Up-Ubuntu-as-a-Kiosk-Web-Appliance/?ALLSTEPS in Step 8 If by chance, Google Chromium is not stretched to the edges with the --kiosk switch enabled there is a simple fix. To stretch Chromium simply log in as your regular user and edit chromeKiosk.sh to not have the --kiosk switch. Then log in as the restricted user, click the wrench and choose options. Then on the Personal Stuff tab select Hide system title bar and use compact borders. Close the options screen and stretch Chromium to fit the monitor. Then go back into the options window and set it to Use system title bar and borders. After this is done, log out of your restricted user (might need to just reboot) and log into your regular user. Edit chromeKiosk.sh back to include the --kiosk switch again and Chromium should be full screen next time you log into the restricted user. If I were to use a custom pyGTK or a gtkmm app, how should I get around this issue. window.fullScreen() should occupy the complete screen area. This has to be done programmatically or in some other way that can scale. I have to deploy this on large number of machines located at different geographical areas. Doing it manually on every machine is not possible.
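
    One thing worth checking in a bare custom session is whether any window manager is running at all: with no WM, fullscreen() and --kiosk are only hints with nobody to honour them, so either launch a lightweight WM from the session or size the window to the screen yourself. A hedged PyGTK sketch of the second approach (GTK2/pygtk, matching the question's app):

    import gtk

    win = gtk.Window(gtk.WINDOW_TOPLEVEL)
    win.set_decorated(False)                  # no title bar or borders
    screen = win.get_screen()
    win.move(0, 0)                            # pin to the top-left corner
    win.set_default_size(screen.get_width(), screen.get_height())
    win.fullscreen()                          # still ask, in case a WM is present
    win.connect("destroy", gtk.main_quit)

    win.add(gtk.Label("kiosk content goes here"))
    win.show_all()
    gtk.main()

    For the Chrome case the equivalent workaround is to start a minimal WM (after which --kiosk behaves as expected) or to pass explicit --window-size/--window-position flags; the manual stretch-and-save trick quoted above is working around the same missing-WM geometry problem.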

    Read the article

  • Unable to add users to Microsoft Dynamics CRM 4.0 after database restore

    - by Wes Weeks
    Working with a client in our Multi-tenant CRM environment who was doing a database migration into CRM and as part of the process, a backup of their Organization_MSCRM database was taken just prior to starting the migration in case it needed to be restored and run a second time. In this case it did, so I restored the database and let the client know he should be good to go.  A few hours later I received a call that they were unable to add some new users, they would appear as available when using the add multiple user wizard, but anyone added would not be added to CRM.  It was also disucussed that these users had been added to CRM initally AFTER the database backup had been taken. I turned on tracing and tried to add the users through both the single user form and multiple user interface and was unable to do so.  The error message in the logs wasn't much help: Unexpected error adding user [email protected]: Microsoft.Crm.CrmException: INVALID_WRPC_TOKEN: Validate WRPC Token: WRPCTokenState=Invalid, TOKEN_EXPIRY=4320, IGNORE_TOKEN=False Searching on Google or bing didn't offer any assitance.  Apparently not a very common problem, or no one has been able to resolve. I did some searching in the MSCRM_CONFIG database and found that their are several user tables there and after getting my head around the structure found that there were enties here for users that were not part of the restored DB.  It seems that new users are added to both the Orgnaization_MSCRM and MSCRM_CONFIG and after the restore these were out of sync. I needed to remove the extra entries in order to address.  Restoring the MSCRM_CONFIG database was not an option as other clients could have been adding users at this point and to restore would risk breaking their instances of CRM.  Long story short, I was finally able to generate a script to remove the bad entries and when I tried to add users again, I was succesful.  In case someone else out there finds themselves in a similar situation, here is the script I used to delete the bad entries. DECLARE @UsersToDelete TABLE (   UserId uniqueidentifier )   Insert Into @UsersToDelete(UserId) Select UserId from [MSCRM_CONFIG].[dbo].[SystemUserOrganizations] Where CrmuserId Not in (select systemuserid from Organization_MSCRM.dbo.SystemUserBase) And OrganizationId = '00000000-643F-E011-0000-0050568572A1' --Id From the Organization table for this instance   Delete From [MSCRM_CONFIG].[dbo].[SystemUserAuthentication]   Where UserId in (Select UserId From @UsersToDelete)   Delete From [MSCRM_CONFIG].[dbo].[SystemUserOrganizations] Where UserId in (Select UserId From @UsersToDelete)   Delete From [MSCRM_CONFIG].[dbo].[SystemUser] Where Id in (Select UserId From @UsersToDelete)

    Read the article

  • JWT Token Security with Fusion Sales Cloud

    - by asantaga
    When integrating Sales Cloud with a 3rd party application, you often need to pass the user's identity to the 3rd party application so that:

    - the 3rd party application knows who the user is, and
    - the 3rd party application can make WebService callbacks to Sales Cloud as that user.

    Until recently, without using SAML this wasn't easily possible, and one workaround was to pass the username, potentially even the password, from Sales Cloud to the 3rd party application using URL parameters. With Oracle Fusion R8 we now have a proper solution, called "JWT Token support". It is based on the industry-standard JSON Web Token (JWT) format.

    JWT works by allowing the user to generate a short-lived token for a specific application. This token is then passed to the 3rd party application as a GET parameter. The 3rd party application can then call into Sales Cloud and use this token for all webservice calls; the calls will be executed as the user who generated the token in the first place. Alternatively, it can call a special HR WebService (UserService-findSelfUserDetails()) with the token, and Fusion will respond with the user's details.

    Some more details

    The following walks through the scenario where you want to embed a 3rd party application within a WebContent frame (iFrame) on the opportunity screen.

    1. Define your application using the topology manager in Setup and Maintenance (see the topology manager documentation).

    2. From within the groovy script which defines the iFrame you wish to embed, write some code which looks like this:

    def thirdpartyapplicationurl = oracle.topologyManager.client.deployedInfo.DeployedInfoProvider.getEndPoint("My3rdPartyApplication")
    def crmkey = (new oracle.apps.fnd.applcore.common.SecuredTokenBean().getTrustToken())
    def url = thirdpartyapplicationurl + "param1=" + OptyId + "&jwt=" + crmkey
    return (url)

    This snippet generates a URL which contains:

    - the hostname/endpoint of the 3rd party application, and
    - two parameters: the opportunity id, stored in parameter "param1", and the JWT token, stored in parameter "jwt".

    3. From your 3rd party application you now have two options:

    - Execute a webservice call after first setting the header parameter "Authentication" to the JWT token. The webservice call will be executed against Fusion Applications as the user who generated the token.
    - To find out "who you are", set the same "Authentication" header and call the special webservice operation findSelfUserDetails() in the UserDetailsService.

    For more information:

    - Oracle Sales Cloud documentation, specifically the chapter on the JWT Token
    - OTN samples, specifically the Rich UI With JWT Token sample
    - Oracle Fusion Applications general documentation
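    As an illustration of step 3, here is a minimal sketch of the 3rd-party side, assuming a Python application built with Flask and requests. The callback URL is a placeholder, the plain HTTP GET stands in for the actual Sales Cloud webservice call, and the header name "Authentication" simply follows the article text; verify the exact header and endpoint for your release.

    # Hedged sketch of a 3rd-party app embedded in the opportunity iFrame.
    from flask import Flask, request
    import requests

    app = Flask(__name__)

    # Placeholder endpoint for the Sales Cloud service you intend to call back.
    SALES_CLOUD_ENDPOINT = "https://salescloud.example.com/path/to/UserDetailsService"

    @app.route("/embedded")
    def embedded():
        opty_id = request.args.get("param1")   # opportunity id from the groovy snippet
        jwt_token = request.args.get("jwt")    # short-lived JWT generated for this user

        # Call back to Sales Cloud "as" that user by passing the token in the
        # header named in the article; a real call would build the proper
        # SOAP/REST request for the target service.
        headers = {"Authentication": jwt_token}
        response = requests.get(SALES_CLOUD_ENDPOINT, headers=headers, timeout=30)

        return "Opportunity %s, callback status %s" % (opty_id, response.status_code)

    if __name__ == "__main__":
        app.run(port=8080)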

    Read the article

  • Extensible Metadata in Oracle IRM 11g

    - by martin.abrahams
    Another significant change in Oracle IRM 11g is that we now use XML to create the tamperproof header for each sealed document. This article explains what this means, and what benefit it offers.

    Every sealed file has a metadata header that contains information about the document - its classification, its format, the user who sealed it, the name and URL of the IRM Server, and much more. The IRM Desktop and other IRM applications use this information to formulate the request for rights, as well as to enhance the user experience by exposing some of the metadata in the user interface. For example, in Windows Explorer you can see some metadata exposed as properties of a sealed file and in the mouse-over tooltip. The following image shows 10g and 11g metadata side by side. As you can see, the 11g metadata is written as XML, as opposed to the simple delimited text format used in 10g.

    So why does this matter? The key benefit of using XML is that it creates the opportunity for sealing applications to use custom metadata. This in turn creates the opportunity for custom classification models to be defined and enforced. Out of the box, the solution uses the context classification model, in which two particular pieces of metadata form the basis of rights evaluation - the context name and the document's item code. But a custom sealing application could use some other model entirely, enabling rights decisions to be evaluated on some other basis.

    The integration with Oracle Beehive is a great example of this. When a user adds a document to a Beehive workspace, that document can be automatically sealed with metadata that represents the Beehive security model rather than the context model. As a consequence, IRM can enforce the Beehive security model precisely, and all rights configuration can actually be managed through the Beehive UI rather than the IRM UI. In this scenario, IRM simply supports the Beehive application, seamlessly extending Beehive security to all copies of workspace documents without any additional administration.

    Finally, I mentioned that the metadata header is tamperproof. This is to stop a rogue user modifying the metadata with a view to gaining unauthorised access - reclassifying a board document to a less sensitive classification, for example. To prevent this, the header is digitally signed and can only be manipulated by a suitably authorised sealing application.

    Read the article

  • UPK Breakfast Event: "Getting It Done Right" - Independence, Ohio - November 8th

    - by Karen Rihs
    Join us for a UPK "Getting It Done Right" Breakfast Briefing. Come for breakfast, leave full of knowledge. Join Oracle and Synaptis for a breakfast briefing event before you begin your day, and leave with knowledge on how to reduce risk and increase user productivity. Oracle's User Productivity Kit (UPK) can provide your organization with a single tool for delivering learning and best practices for each area of the business and help ensure you're "Getting It Done Right."

    Learn from Deb Brown, Senior Solutions Consultant, Oracle, as she shows how the UPK tool can save project teams thousands of hours through automation and provide greater visibility into application rollouts and business processes. Also hear from a UPK customer as they share their company's success with Oracle UPK. Learn how UPK ensures rapid user adoption; significantly lowers development, system testing, and user enablement time and costs; and mitigates project risk. Finally, Pat Tierney, Oracle Practice Director - Synaptis, and Jordan Collard, VP Sales - Synaptis, will conclude with an outline of their success as a UPK implementation partner.

    Register Now

    Thursday, November 8, 2012
    7:30 a.m. - 10:00 a.m.
    Embassy Suites Cleveland - Rockside
    5800 Rockside Woods Boulevard
    Independence, OH 44131
    Directions

    Agenda

    7:30 a.m. - Event arrival / registration. Breakfast served.
    8:00 a.m. - Deb Brown, Senior Solutions Consultant, Oracle: Oracle UPK - A Closer Look at Getting It Done, Right. Ensure End User Adoption.
    8:40 a.m. - UPK customer success story
    9:30 a.m. - Pat Tierney, Oracle Practice Director - Synaptis, and Jordan Collard, VP Sales - Synaptis (implementation partner): Using Oracle UPK Before, During and After Application Rollouts
    9:50 a.m. - Wrap up

    Don't miss this special Breakfast Briefing and get a jump start on Oracle UPK technology. Please call 1.800.820.5592 ext. 11030 or click here to RSVP for this exclusive event!

    Sponsored by Synaptis. Synaptis is an Oracle Gold Partner, providing UPK training, implementation, content creation and post go-live support for organizations since 1999.

    If you are an employee or official of a government organization, please click here for important ethics information regarding this event.

    Read the article
