Search Results

Search found 4643 results on 186 pages for 'matlab deployment'.

  • ASP.NET "Object reference not set..." error

    - by Roman
    Hi, I have a website written using ASP.NET. We have a development machine and a deployment server. The site works great on the development machine, but when it is transferred (using a simple FTP upload) it exhibits strange behavior: it starts working just fine, but after a while it stops working and throws the exception "Object reference not set to an instance of an object." The thing is that the absolute path of the website on the development machine is different than on the deployment server (and why should they be similar?), and the exact error is:

    Exception: Object reference not set to an instance of an object.
    at SOMEPROJECT_Objects.Player..ctor(Int32 PlayerID) in C:\inetpub\wwwroot\SOMEPROJECTSolution\ALLPROJECT\SOMEPROJECT_Objects\Player.cs:line 123
    at SOMEPROJECT_GameLayer.M_Game.PlayerActiveGame(Int32 PlayerID) in C:\inetpub\wwwroot\SOMEPROJECTSolution\ALLPROJECT\SOMEPROJECT_GameLayer\M_Game.cs:line 85
    at Web.getsms.Page_Load(Object sender, EventArgs e) in C:\inetpub\wwwroot\SOMEPROJECTSolution\ALLPROJECT\SOMEPROJECT-sms\Web\getsms.aspx.cs:line 59

    The path that it is looking for is the path on the DEVELOPMENT machine, whereas the site now resides on the deployment server. Any ideas why this happens would be appreciated. Thanks, Roman

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild and TeamCity, so the whole build process is more or less custom. Here lies the start of my problems, as the solution I am deploying only contains Report Server projects (rds); no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (i.e. AfterCompile)? The core steps that I'd like to achieve are:
    - Bundle the RDL and datasource files
    - Connect to the host server to register/deploy the reports
    - Re-apply any subscriptions that previously existed
    - Run tests to verify the deployment succeeded and is returning results as expected
    I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

  • getting service from wsdd via xpath not working

    - by subes
    Hi, I am trying to get the XPath "/deployment/service". Tested on this site: http://www.xmlme.com/XpathTool.aspx

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
      <service name="kontowebservice" provider="java:RPC" style="rpc" use="literal">
        <parameter name="wsdlTargetNamespace" value="http://strategies.spine"/>
        <parameter name="wsdlServiceElement" value="ExposerService"/>
        <parameter name="wsdlServicePort" value="kontowebservice"/>
        <parameter name="className" value="dmd4biz.container.webservice.konto.internal.KontoWebServiceImpl_WS"/>
        <parameter name="wsdlPortType" value="Exposer"/>
        <parameter name="typeMappingVersion" value="1.2"/>
        <operation xmlns:operNS="http://strategies.spine" xmlns:rtns="http://www.w3.org/2001/XMLSchema" name="expose" qname="operNS:expose" returnQName="exposeReturn" returnType="rtns:anyType" soapAction="">
          <parameter xmlns:tns="http://www.w3.org/2001/XMLSchema" qname="in0" type="tns:anyType"/>
        </operation>
        <parameter name="allowedMethods" value="expose"/>
        <parameter name="scope" value="Request"/>
      </service>
    </deployment>

    I absolutely can't find out why it always tells me that my XPath does not match... This may be stupid, but am I missing something?
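
    For reference, the likely culprit is the default namespace: the root element declares xmlns="http://xml.apache.org/axis/wsdd/", so the unprefixed path /deployment/service matches nothing, because every element in the document lives in that namespace. A minimal namespace-aware sketch using Python's ElementTree, assuming the XML above is saved as server-config.wsdd (a hypothetical file name):

    import xml.etree.ElementTree as ET

    # Bind a prefix to the wsdd default namespace so path steps can use it.
    ns = {"wsdd": "http://xml.apache.org/axis/wsdd/"}

    root = ET.parse("server-config.wsdd").getroot()  # root is <deployment>

    # Equivalent of /deployment/service, with each step qualified.
    for service in root.findall("wsdd:service", ns):
        print(service.get("name"))  # -> kontowebservice

    In XPath testers that offer no way to bind a prefix, the usual workaround is /*[local-name()='deployment']/*[local-name()='service'] instead.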

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that is analyzing the CDC information and then returning data. How is that a problem, you ask? Well, the order of operations is as follows:
    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script
    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that: Error 102 TSD03070: This statement is not recognized in this context. C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo
    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?! Now, I have a workaround:
    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy
    Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!

  • Book/topic recommendations for a programmer returning to programming.

    - by Jason Tan
    I used to be a developer in Java, PHP, Perl and C/C++ (the C++ bit badly - the others not too badly, I hope). This was back in the Java 1.3/1.4 days. We used raw JDBC, Swing, servlets, JSP and Ant (sometimes even make). Eclipse was new. Then I joined a deployment team and became a deployment engineer, and after the deployment engineer work became a full-time sysadmin. You get the idea - my experience is a generation or two old in programming terms - maybe older. I'm interested in getting back into Java and perhaps Ruby development, but feel I will be waaaaay behind the technological 8-ball. Can you folks suggest some books (or sites) that would be worth reading to catch up with the last 5-10 years of the development world? I.e. what should I read to try and catch up with where development is now? I see lots of stuff on the web, but what are people in the fabled "real world" using? (Are lots of people building SOA-based apps? Are they using the XP methodology?) The sorts of things I'm interested in finding out about/catching up on are:
    - Methodologies
    - Design patterns
    - APIs/Frameworks/Technologies
    - Other stuff you deem current/interesting/relevant.
    So if you have any thoughts or can recommend any books (especially new classics - you know, the modern equivalent of K&R C or "The Mythical Man-Month"), please share. Thanks for any thoughts you might offer.

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses nHibernate for all inserts / updates / selects, etc. We are currently using .NET 2.0, and nHibernate 1.2 (I know, we need to upgrade) This deployment is on Windows 2008 Server x64, IIS 7.5 - what I have seen so far is that the application runs, but is unable to insert or update records in the DB - reads seem fine so far, but writes are a problem. SOME writes actually work, inserts into some small tables, but most never even make it to the DB. Using SQL Profiler, the insert / updates never make it to the server, and turning log4net up to DEBUG, and show_sql true - the select statements appear, but the insert / update statements never make it into the log at all, and never show up at the server. What's even more odd is that the application seems to be oblivious to this - the commandandclose runs without exception (open session in view with an httpmodule), the domain objects come back with uuid's generated, etc. but never get persisted. Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?

  • setup Qt and PyQt on Mac OS X so my app can also be deployed on Windows

    - by hk_programmer
    Hi, I've been coding with Python and C++ and now need to work on building a GUI for data visualization purposes. I work on Mac Snow Leopard (Intel), Python 3.1, using gcc 4.2.1 (from Xcode 3.1). I wanted to first install Qt and then PyQt. My goals are to be able to:
    - quickly prototype the GUI and the accompanying logic that drives it using PyQt and Python
    - if I decide I need the speed, or if it's fairly easy to translate my GUI into C++ using the Qt tools, have the option to translate my app into C++
    - be able to deploy my application onto Windows (both the Python and C++ versions of my app)
    Given the goals above, what are the correct steps I should take and what issues should I be aware of when setting up Qt and PyQt? Which other deployment tools do I need? From my readings so far, here's what I have:
    1. download the Qt source for Mac and configure it with -platform macx-g++42 -arch x86_64 -no-framework (I've read somewhere that building as a framework causes some trouble in deployment and/or debugging; can't find the article anymore)
    2. download the latest SIP source and build
    3. download the latest PyQt and build from source (any special options I should pay attention to?)
    For deployment, I've read that I would need to use py2exe/cx_freeze for Windows and py2app for Mac: http://arstechnica.com/open-source/guides/2009/03/how-to-deploying-pyqt-applications-on-windows-and-mac-os-x.ars but it seems like what the article describes is deploying an app you build on Windows on the Windows platform and vice versa. How do you deploy to Windows (is it even possible?) if you are writing your Qt app on a Mac? Really appreciate the help
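
    For reference on the last question: neither py2app nor py2exe can cross-build, so the Windows executable has to be produced on Windows (a VM works), while the same codebase can share one packaging script. A minimal sketch, assuming a hypothetical entry point main.py:

    # One setup.py for both packagers; run it on the platform you target,
    # since neither tool can build binaries for the other OS.
    import sys
    from setuptools import setup

    APP = ["main.py"]  # hypothetical entry point

    if sys.platform == "darwin":
        # on the mac: python setup.py py2app
        setup(app=APP, setup_requires=["py2app"])
    elif sys.platform == "win32":
        # on windows: python setup.py py2exe
        import py2exe  # importing registers the py2exe command
        setup(windows=APP)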

  • RPC command to initiate a software install

    - by ericmayo
    I was recently working with a product from Symantec called Norton EndPoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products. The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or NT Domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose which workstations to deploy the endpoint software, which is Symantec's virus & spyware protection suite. Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software. I really like how their product works but am not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, etc., and wanted to check here to see if anyone is familiar with what I'm talking about and if you know how it's accomplished or have ideas how it could be accomplished. My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install. What's interesting is that the workstations do this without any of the logged-in users knowing what's going on until the very end, where a reboot is necessary. At which point the user gets a pop-up asking to reboot now or later, etc. My hunch is that the setup.exe program is popping this message. To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to do some action or run some program. My programming language is C/C++. Any thoughts/suggestions appreciated.
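
    For reference, the copy-then-execute pattern described above is commonly implemented with the admin share plus WMI's Win32_Process.Create call (reachable from C/C++ via COM). A rough sketch using the third-party Python wmi package; the host name, credentials, c:\temp staging path and the /quiet switch are placeholders, and admin rights on the target are assumed:

    import shutil
    import wmi  # third-party package wrapping the WMI COM interfaces

    host = "WORKSTATION01"

    # Stage the installer over the administrative share (c:\temp must exist).
    shutil.copy("setup.exe", r"\\%s\c$\temp\setup.exe" % host)

    # Ask the remote machine to run it; returns the new pid and a status code.
    conn = wmi.WMI(host, user=r"DOMAIN\admin", password="secret")
    pid, result = conn.Win32_Process.Create(CommandLine=r"c:\temp\setup.exe /quiet")
    print("started pid %s, return code %s" % (pid, result))  # 0 means success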

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All, Background: I have a customer who has some build scripts for their datacenter based on Python that I've inherited. I did not work on the original design, so I'm sort of limited on what I can and can't change. That said, my customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately they have other applications that also use these values, so I cannot change them to make it easier for me. What I want to do is make the scripts more dynamic to distribute more hosts, so that I don't have to keep updating the scripts in the future and can just add more hosts to the property file. Unfortunately I can't change the current property file and have to work with it. The property file looks something like this:
    projectName.ClusterNameServer1.sslport=443
    projectName.ClusterNameServer1.port=80
    projectName.ClusterNameServer1.host=myHostA
    projectName.ClusterNameServer2.sslport=443
    projectName.ClusterNameServer2.port=80
    projectName.ClusterNameServer2.host=myHostB
    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks, where X is some number of entries defined, and then do something, e.g.:
    if projectName.ClusterNameServer1.host != "" do X
    if projectName.ClusterNameServer2.host != "" do X
    if projectName.ClusterNameServer3.host != "" do X
    Then when they add another host (say Server4) they've added another if statement. Question: What I would like to do is make the scripts more dynamic, parse the properties file, and put what I need into some data structure to pass to the deployment scripts, then just iterate over the structure and do my deployment that way, so I don't have to constantly add a bunch of "if some host# do something" checks. I'm just curious to get some suggestions as to how others would parse the file, what sort of data structure they would use, and how they would group things together by ClusterNameServer# or something else. Thanks
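
    For what it's worth, a minimal parsing sketch along those lines, assuming the file is saved as cluster.properties (a hypothetical name) and the keys follow the projectName.<Name><N>.<key>=<value> pattern shown above; grouping on the (name, number) pair means adding ClusterNameServer4 to the file requires no script change:

    import re
    from collections import defaultdict

    servers = defaultdict(dict)  # (cluster name, number) -> {key: value}
    pattern = re.compile(r"^[^.]+\.(\w+?)(\d+)\.(\w+)=(.*)$")

    with open("cluster.properties") as fh:
        for line in fh:
            m = pattern.match(line.strip())
            if m:
                name, num, key, value = m.groups()
                servers[(name, int(num))][key] = value

    # The deployment code can now iterate instead of hard-coding an if per host.
    for (name, num), props in sorted(servers.items()):
        if props.get("host"):
            print("deploying to %s (port %s)" % (props["host"], props["port"]))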

  • MySQL connection attempt works fine in 5.2.9 but not in 5.3.0 - Help?

    - by Rich
    Hi, I'm having trouble making a secondary MySQL connection (to a separate, external DB) in my code. It works fine in PHP 5.2.9 but fails to connect in PHP 5.3.0. I'm aware of at least some of the changes needed to make successful MySQL connections in the newer version of PHP, and have succeeded before, so I'm not sure why it isn't working this time. I already have a connection open to a local database. The function below is then used to make an additional connection to a separate, remote database. The included config file simply contains the external database details (host, user, pass and name). I have checked and it is being included correctly.
    function connectDP() {
        global $dpConnection;
        include("secondary_db_config.php");
        $dpConnection = mysql_connect($dp_dbHost, $dp_dbUser, $dp_dbPass, true)
            or DIE("ERROR: Unable to connect to Deployment Platform");
        mysql_select_db($dp_dbName, $dpConnection)
            or DIE("ERROR 006: Unable to select Deployment Platform Database");
    }
    I then attempt to make this new connection simply by calling this function externally: connectDP(); But when loading the page (in 5.3.0), I get the message: ERROR: Unable to connect to Deployment Platform. I'm using the optional new_link boolean flag as the fourth argument in the mysql_connect() function and it's still not working. I've been wracking my brain this morning trying to figure out why this connection doesn't work (while I've done something very similar elsewhere to a separate second database that does work). Any help would be appreciated. Thanks! Rich

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents
    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
    - One single classification across the entire business
    - A context for each and every possible granular use case
    - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to actually have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:
    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but that Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:
    - A group of related documents
    - The people that use the documents
    - The roles that these people perform
    - The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of them. If you have followed all the previous articles in this guide, you will be ready to start defining contexts with which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them. A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:
    - Document security classification decisions are simple. You only have one context to choose from!
    - User provisioning is simple: just make sure everyone has a role in the only context in the business.
    - Administration is very low: if you assign rights to groups from the business user repository, you probably never have to touch IRM administration again.

    There are however some obvious downsides to this model:
    - All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
    - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    - Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Restricted External - Engineering Group 1 - Research
    - Q1FY2010 Restricted External - Engineering Group 1 - Design
    - Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    - Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Confidential External - Engineering Group 1 - Research
    - Q1FY2010 Confidential External - Engineering Group 1 - Design
    - Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    - Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    - Q1FY2010 Top Secret External - Engineering Group 1 - Research
    - Q1FY2010 Top Secret External - Engineering Group 1 - Design
    - Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    Now multiply the above 18 contexts by 6, one set for each engineering group. You are then creating/reviewing another 18 every 3 months, so after a year you've got 72 contexts for each group. What would be the advantages of such a complex classification model?
    - You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements, and mapping this directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant:
    - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts. They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering, and as part of an internal security audit you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:
    - What is the topic?
    - Who are the legitimate contributors on this topic?
    - Who are the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon. For each context, identify:
    - Administrative roles:
      - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
      - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
      - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.
    - Document related roles:
      - Contributors: the people who create and edit documents in this context.
      - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
      - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and we understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think why you are protecting the documents in the first place: it is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent authorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity.

    Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain; it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:
    - Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
    - Print activity is audited, therefore you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

  • vSphere Client vCenter Template Customization Specification Using Windows Sysprep Unattended Answer XML File

    - by Brian
    I'm trying to set up a vSphere Client vCenter v5.0.0 Build 455964 Template Customization Specification using a Windows Sysprep unattended answer XML file for Win2008R2. However, I didn't know how Sysprep worked before attempting this, so it was a time-consuming nightmare (even after reviewing VMware vSphere ESXi 5's documentation)! I think I've figured out what I'm supposed to be doing, but it's still not working. The biggest problem at this point is that vSphere Client vCenter Customization Specification IP address information is not sticking when I load a Sysprep XML file with just 1 basic setting! This can only be a bug. Here is the process I'm using:

    Process for Windows - vSphere Client
    1. Install the Windows OS
    2. install VM Tools
    3. customize Windows (GPOs can be used to do this after deployment)
    4. install Applications (GPOs can be used to do this after deployment too)
    5. shut down the VM
    6. convert the VM to a template
    7. create a custom Windows Sysprep XML answer file with the desired customizations
    8. View > Management > Customization Specifications Manager
    9. create a "New" Specification for "Target Virtual Machine OS": select Windows, check "Use Custom Sysprep Answer File" (ADDS: Custom Sysprep File. KEEPS: Network (IP), Operating System Options (SID, Sysprep /generalize). REPLACES: Registration Information of Owner Name & Organization, Computer Name, Windows License (Key), Administrator Password, Time Zone, Run Once, Workgroup or Domain)
    10. name it as "VMwareCS-OS####R#x32/64w/Sysprep-TEST" (CS = Customization Specification), set Description as "Created YYYY/MM/DD by FLast", NEXT
    11. import the Sysprep answer file from a secure location, NEXT
    12. Custom settings, NEXT
    13. click the "..." box to the right of "Use DHCP", set "Use the following IP settings:"; for "IP Address" fill out the first 2 octets, set appropriate values for the other 2-3 fields, set DNS server addresses, OK, NEXT
    14. check "Generate New Security ID (SID)" - ALWAYS, as the template is likely a domain-member computer so it can be updated occasionally, NEXT
    15. Finish
    16. View > Inventory > VMs and Templates
    17. right-click the previously completed template, Deploy Virtual Machine from this Template
    18. provide the new OS name (max 15 chars), select the inventory location, NEXT
    19. select Host/Cluster (wait for validation to succeed), NEXT
    20. select Resource Pool (wait for validation to succeed), NEXT
    21. select the Storage location, NEXT
    22. check "Power on this virtual machine after creation", select "Customize using an existing customization specification", select the desired specification, select "Use the Customization Wizard to temporarily adjust the specification before deployment", NEXT, NEXT
    23. Custom settings? NEXT
    24. check "Generate New Security ID (SID)", NEXT
    25. Finish, Finish.

    I know a community member named "brian" (http://serverfault.com/users/25904/brian) has worked with this scenario before, but I couldn't figure out how to contact him directly, so Brian, if you see this message could you provide some information to help? Thanks, Brian

  • Can an installation of SSRS be used for other reports if the SCOM Reporting Role is installed?

    - by Pete Davis
    I'm currently in the process of planning a SCOM 2007 R2 deployment and would like to deploy the OperationsManagerDW and Reporting Server to a shared SQL 2008 cluster which is used for reporting across multiple solutions. However, in the deployment guide for SCOM 2007 R2 it says: "Due to changes that the Operations Manager 2007 Reporting component makes to SQL Server Reporting Services security, no other applications that make use of SQL Server Reporting Services can be installed on this server." This concerns me that it may interfere with existing or future (non-SCOM) reports in some way, even if deployed as a separate SSRS instance. Later in the same guide it states: "Installing Operations Manager 2007 Reporting Services integrates the security of the instance of SQL Reporting Services with the Operations Manager role-based security. Do not install any other Reporting Services applications in this same instance of SQL Server." Does this mean that I can install a new SSRS instance and use this on the shared cluster for SCOM reporting, or that I'd also need to create a whole new SQL Server instance as well as an SSRS instance, or would I need a whole separate server for the SCOM OperationsManagerDW and Reporting Server?

  • matplotlib.pyplot/pylab not updating figure while isinteractive(), using ipython -pylab

    - by NumberOverZero
    There are a lot of questions about matplotlib, pylab, pyplot, ipython, so I'm sorry if you're sick of seeing this asked. I'll try to be as specific as I can, because I've been looking through people's questions and looking at documentation for pyplot and pylab, and I still am not sure what I'm doing wrong. On with the code:

    Goal: plot a figure every .5 seconds, and update the figure as soon as the plot command is called. My attempt at coding this follows (running on ipython -pylab):

    import time
    ion()
    x = linspace(-1, 1, 51)
    plot(sin(x))
    for i in range(10):
        plot([sin(i+j) for j in x])  # see **
        print i
        time.sleep(1)
    print 'Done'

    It correctly plots each line, but not until it has exited the for loop. I have tried forcing a redraw by putting draw() where ** is, but that doesn't seem to work either. Ideally, I'd like to have it simply add each line, instead of doing a full redraw. If redrawing is required however, that's fine. Additional attempts at solving: just after ion(), I tried adding hold(True) to no avail; for kicks I tried show() at **. The closest answer I've found to what I'm trying to do was at http://stackoverflow.com/questions/2310851/plotting-lines-without-blocking-execution, but show() isn't doing anything. I apologize if this is a straightforward request and I'm looking past something obvious. For what it's worth, this came up while I was trying to convert MATLAB code from class to some Python for my own use. The original MATLAB (initializations removed) which I have been trying to convert follows:

    for i=1:time
        plot(u)
        hold on
        pause(.01)
        for j=2:n-1
            v(j)=u(j)-2*u(j-1)
        end
        v(1)= pi
        u=v
    end

    Any help, even if it's just "look up this_method", would be excellent, so I can at least narrow my efforts to figuring out how to use that method. If there's any more information that would be useful, let me know.
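
    For reference, the missing piece in the loop above is giving the GUI event loop a chance to repaint; draw() only requests a redraw. A minimal sketch using plain pyplot names rather than the -pylab namespace (pause() needs matplotlib 1.0 or newer, so treat that as an assumption for older installs):

    import numpy as np
    import matplotlib.pyplot as plt

    plt.ion()                    # interactive mode: plot() does not block
    x = np.linspace(-1, 1, 51)
    plt.plot(np.sin(x))

    for i in range(10):
        plt.plot([np.sin(i + j) for j in x])  # add a line without clearing
        plt.draw()               # request a repaint of the current figure
        plt.pause(0.5)           # run the event loop so it actually happens

    print('Done')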

  • Guacamole on KVM

    - by Siem Hermans
    I currently run a deployment where I provide virtual machines to circa 25 users through ESXi over the VMware WSX web portal. This works, no doubt about it; it is fast, stable and reliable enough for the end users. However, I stumbled across the Guacamole project (Link: http://guac-dev.org/) and the KVM project (Link: http://www.linux-kvm.org/). I must say I am no genius when it comes to Linux, but I am interested in replacing the ESXi and WSX combination with a Guacamole and KVM deployment. I have seen people across the internet use ESXi in combination with Guacamole (mostly prior to the release of WSX), but I have yet to see someone use it in conjunction with KVM. Considering my amount of knowledge on Linux in general, I would like to ask: Is it possible at all to combine the two?

  • DOS batch file to enter commands in proprietary java app and receive feedback?

    - by Justine
    Hello, I'm working on a project in which I'd like to be able to turn lights on and off in the Duke Smart Home via a high frequency chirp. The lighting system is called Clipsal Square-D and the program that gives a user access to the lighting controls is called CGate. I was planning on doing some signal processing in Matlab, then creating a batch file from Matlab to interact with CGate. CGate is a proprietary Java app that, if run from a DOS command line, opens up another window that looks like the command prompt. I have a batch file that can check to see if CGate is running and, if not, open it. But what I can't figure out how to do is actually run commands in the CGate program from the batch file and, likewise, take the response from CGate. An example of such a command is "noop," which should return "200 OK." Any help would be much appreciated! Thank you very much in advance :) (here's my existing batch file by the way)

    @ECHO off
    goto checkIfOpen

    :checkIfOpen
    REM pv finds all open processes and puts it in result.txt
    %SystemRoot%\pv\pv.exe > result.txt
    REM if result has the word notepad in it then notepad is running
    REM if not then it opens notepad
    FIND "notepad.exe" result.txt
    IF ERRORLEVEL 1 START %SystemRoot%\system32\Clipsal\C-Gate2\cgate.exe
    goto end

    :end
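
    For reference, one way to script an interactive console program without batch-file gymnastics is to attach to its standard input/output. A rough Python sketch, assuming CGate accepts commands on stdin when started from a console (the install path is taken from the batch file above); if CGate only talks through its own window, this will capture nothing, and connecting a socket to a C-Gate command port, where one is documented, would be the alternative:

    import subprocess

    # Start CGate with its stdin/stdout attached to pipes we control.
    proc = subprocess.Popen(
        [r"C:\Windows\system32\Clipsal\C-Gate2\cgate.exe"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        universal_newlines=True,
    )

    proc.stdin.write("noop\n")     # the command from the question
    proc.stdin.flush()
    print(proc.stdout.readline())  # expected reply: 200 OK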

  • Not able to save Global navigation - SharePoint 2007

    - by Ryan
    I have migrated my site collection (migsitecollection) to a different farm using a content deployment job: http://vsmoss/sites/migsitecollection I used the collaboration portal to create it. It's working fine from where I migrated it, but after running the content deployment jobs my newly migrated site's global navigation settings are not getting saved when I try to change them by going into Settings - Navigation, and in the logs I can see this error: The SPNavigation store is likely corrupt. I saw on the net that the solution for this problem is changing onet.xml and running a script on the SQL database for the site. I am eager for a better answer than that, but if it's the same, I have a few doubts about it: First, as my site template is not customised (it's the collaboration portal), I am not sure where exactly to change the onet.xml. Second, I am using the same database as my web application; running that script would not affect anything else on my main site, would it?

  • I want to install an MSI twice

    - by don.vince
    I have a peculiar wish to install an MSI twice on a machine. The purpose of the double install is to first install under the pre-production folder and run the deployment in a safe environment prior to deploying in the production folder. We typically use separate machines to represent these different environments; however, in this case I need to use the same box. The two scenarios I get are as follows:
    1. I've installed pre-production, I'm happy, I want to install production; I run the MSI and it asks whether I want to repair or remove the installation.
    2. I've got production installed and I want to install the new version of the MSI; it tells me I already have a version of the product installed and I must first uninstall the current version.
    The first scenario isn't too bad, as we can at that point sensibly uninstall and reinstall under the production folder, but the second scenario is a pain, as we don't want to uninstall the live production deployment. Is there a setting I can give to msiexec that will allow this? Is there a more suitable, different approach I could use?

  • Run Rails 3 app on a Rails 2 server/machine?

    - by chucknelson
    I'm trying to run a Rails 3 (3.0.10) app on a shared Joyent SmartMachine server (I don't have root access) which has Rails 2 (2.3.11) installed, and I'm not sure what to do after I freeze my Rails 3 app with bundle install --deployment. It seems like, with the Rails 3 and Bundler gems not being installed on the server locally, my app isn't even recognizing the local version of Rails I have frozen with my app. Has anyone gotten this to work, or have any advice? The server runs Apache, and I think I can get lighttpd installed too - but I'd rather stay with Apache if I can. Also, if it matters, Passenger is not an installed gem either... and I'm not sure I can freeze that with my app. Update 11/30/2011 12:30 PM EST: Bundler is not installed on this server, either. Not sure if having that would enable the new Rails 3 "freeze" (bundle --deployment) to work or not...

  • Industry-style practices for increasing productivity in a small scientific environment

    - by drachenfels
    Hi, I work in a small, independent scientific lab in a university in the United States, and it has come to my notice that, compared with a lot of practices that are ostensibly followed in industry, like daily checkins to a version control system, use of a single IDE/editor for all languages (like emacs), etc., we follow rather shoddy programming practices. So, I was thinking of getting together all my programs, scripts, etc., and building a streamlined environment to increase productivity. I'd like suggestions from people on Stack Overflow for the same. Here is my primary plan: I use MATLAB, C and Python scripts, and I'd like to edit and compile them from a single editor, and ensure correct version control (questions/things for which I'd like suggestions are in italics).
    1. Install Cygwin, and get it to work well with Windows so I can use git or a similar version control system (is there a DVCS which can work directly from the Windows CLI, so I can skip the Cygwin step?).
    2. Set up emacs to work with C, Python, and MATLAB files, so I can edit and compile all three at once from a single editor, say emacs (I'm not very familiar with the emacs menu, but is there a way to set the path to the compiler for certain languages? I know I can Google this, but emacs documentation has proved very hard for me to read so far, so I'd appreciate it if someone told me in simple language).
    3. Start checking in code at the end of each day or half-day so as to maintain a proper record of the progress of my code (two questions: can you check out files directly from emacs? and is there a way to check LabVIEW files into a DVCS like git?).
    Lastly, I'd like to apologize for the rather vague nature of the question, and hope I shall learn to ask better questions over time. I'd appreciate it if people gave their suggestions, though, and pointed to any resources which may help me learn.

  • Can't get virtual desktops to show up on RDWeb for Server 2012 R2

    - by Scott Chamberlain
    I built a test lab using the Windows Server 2012 R2 Preview. The initial test lab has the following configuration (I have replaced our name with "OurCompanyName" because I would like it if Google searches for our name did not cause people to come to this site; please do the same in any responses):
    - Physical hardware running Windows Server 2012 R2 Preview, full GUI, acting as Hyper-V host (joined to the test domain as testVwHost.testVw.OurCompanyName.com), with the following VMs running on it:
    - A VM running 2012 R2 Core acting as domain controller for the forest testVw.OurCompanyName.com (testDC.testVw.OurCompanyName.com)
    - A VM running 2012 R2 Core with nothing running on it, joined to the test domain as testIIS.testVw.OurCompanyName.com
    - A clean install of Windows 7; all that was done to it was all Windows updates were loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it
    - A clean install of Windows 8; all that was done to it was all Windows updates were loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it
    I then ran "Add Roles and Features" from testVwHost and chose the "Remote Desktop Services Installation", "Standard Deployment", "Virtual machine-based desktop deployment". I chose testIIS for the roles "RD Connection Broker" and "RD Web Access" and testVwHost as "RD Virtualization Host". The install of the roles went fine. I then went to Remote Desktop Services in Server Manager and went to set up Deployment Properties. I set the certificate for all 3 roles to our certificate signed by a CA for *.OurCompanyName.com. I then created a new Virtual Desktop Collection for Windows 7 and Windows 8, and both were created without issue. On the Windows 7 pool I added RemoteApp to launch WordPad; for Windows 8 I did not add any RemoteApp programs. Everything now appears to be fine from a setup perspective. However, if I go to https://testIIS.testVw.OurCompanyName.com/RDWeb and log in as the user Administrator (or any other user), I don't see the virtual desktops I created nor the RemoteApp publishing of WordPad. I tried adding a licensing server, using testDC as the server, but that made no difference. What step did I miss in setting this up that is causing this not to show up on RDWeb? If any additional information is needed, please let me know. I have tried every possible thing I can think of and I am just groping around in the dark now.
    (Screenshots: the virtual machines running on testVwHost; the configuration screen for RD Services; the Windows 7 pool; the Windows 8 pool.)

  • how to manage credentials/access to multiple ssh servers

    - by geoaxis
    I would like to make a script which can maintain multiple servers via SSH. I want to control the authentication/authorization in such a manner that authentication is done by a gateway, and any other access is routed through this SSH server to internal services without any further authentication/authorization requirements. So if, for example, user A can log into server_1, he can then ssh to server_2 without any other authentication and do whatever he is allowed to do on server_2 (like shut down MySQL, upgrade it and restart it; this could be done via some remote shell script). The problem that I am trying to solve is to come up with a deployment script for a Java EE system which involves databases and Tomcat instances. They need to be shut down and re-spawned. The requirement is to have a deployment script with as little human interaction as possible, for both developers and operations.
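
    For reference, this hop-through-a-gateway pattern is usually built on SSH keys plus agent forwarding (ssh -A); since the goal is a deployment script, here is a rough sketch using the third-party paramiko library, with host names, user and key path as placeholders:

    import paramiko

    # Authenticate once to the gateway (server_1) with a key.
    gw = paramiko.SSHClient()
    gw.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    gw.connect("server_1", username="deploy",
               key_filename="/home/deploy/.ssh/id_rsa")

    # Tunnel a TCP channel through the gateway to the inner host's sshd.
    channel = gw.get_transport().open_channel(
        "direct-tcpip", dest_addr=("server_2", 22), src_addr=("127.0.0.1", 0))

    # The inner login rides over that channel and reuses the same key,
    # so no extra credentials are needed on server_2.
    inner = paramiko.SSHClient()
    inner.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    inner.connect("server_2", username="deploy", sock=channel,
                  key_filename="/home/deploy/.ssh/id_rsa")

    stdin, stdout, stderr = inner.exec_command("/etc/init.d/mysql restart")
    print(stdout.read())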

  • Cannot change font size/type in plots

    - by Sameet Nabar
    I recently had to re-install my operating system (Ubuntu). The only thing I did differently is that I installed Matlab on a separate partition, not the main Ubuntu partition. After re-installing, the fonts in my plots are no longer configurable. For example, if I ask for the title font to be bold, it doesn't happen. I ran the sample code below on my computer and then on my colleague's computer, and the 2 results are attached. This cannot be a problem with the code; rather it must be in the settings of Matlab. Could somebody please tell me what settings I need to change? Thanks in advance for your help. Regards, Sameet.

    x1=-pi:.1:pi;
    x2=-pi:pi/10:pi;
    y1=sin(x1);
    y2=tan(sin(x2)) - sin(tan(x2));
    [AX,H1,H2]=plotyy(x1,y1,x2,y2);
    xlabel ('Time (hh:mm)');
    ylabel (AX(1), 'Plot1');
    ylabel (AX(2), 'Plot2');
    axes(AX(2))
    set(H1,'linestyle','none','marker','.');
    set(H2,'linestyle','none','marker','.');
    title('Plot Title','FontWeight','bold');
    set(gcf, 'Visible', 'off');
    [legh, objh] = legend([H1 H2],'Plot1', 'Plot2','location','Best');
    set(legend,'FontSize',8);
    print -dpng Trial.png;

    Bad image: http://imageshack.us/photo/my-images/708/trial1u.png/
    Good image: http://imageshack.us/photo/my-images/87/trial2.png/

  • Insufficient Permissions Problems with MSDeploy and TFS Build 2010

    - by jdanforth
    I ran into these problems on a TFS 2010 RC setup where I wanted to deploy a web site as part of the nightly build:

    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.)
    An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.
    Filename: \\?\C:\Windows\system32\inetsrv\config\redirection.config
    Error: Cannot read configuration file due to insufficient permissions

    As you can see, I'm running the build service as NETWORK SERVICE, which is quite usual. The first thing I did then was to give NETWORK SERVICE read access to the whole directory where redirection.config is sitting: C:\Windows\system32\inetsrv\config. That gave me a new error:

    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (Attempted to perform an unauthorized operation.)

    The reason for this problem was that NETWORK SERVICE didn't have write permission to the place where I'd told MSDeploy to put the web site physically on the disk. Once I'd given NETWORK SERVICE the right permissions, MSDeploy completed as expected! NOTE! I've not had this problem with TFS 2010 RTM, so it might be just an RC issue!

  • Two Virtualization Webinars This Week

    - by chris.kawalek(at)oracle.com
    If you're interested in virtualization, be sure to catch our two free webinars this week. You'll hear directly from Oracle technologists and can ask questions in a live Q&A.

    Deploying Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications
    Tuesday, Feb 15, 2011, 9AM Pacific Time - Register Now
    Is your company trying to manage costs, meet or beat service level agreements, and get employees up and running quickly on business-critical applications like Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications? The fastest way to get the benefits of these applications deployed in your organization is with Oracle VM Templates. Cut application deployment time from weeks to just hours or days. Attend this session for the technical details of how your IT department can deliver rapid software deployment and eliminate installation and configuration costs by providing pre-installed and pre-configured software images.

    Increasing Desktop Security for the Public Sector with Oracle Desktop Virtualization
    Thursday, Feb 17, 2011, 9AM Pacific Time - Register Now
    Security of data as it moves across desktop devices is a concern for all industries. But organizations such as law enforcement; local, state, and federal government; and others have higher security needs than most. A virtual desktop model, where no data is ever stored on the local device, is an ideal architecture for these organizations to deploy. Oracle's comprehensive portfolio of desktop virtualization solutions, from thin client devices to server-side management and desktop hosting software, provides a complete solution for this ever-increasing problem.
