Search Results

Search found 40226 results on 1610 pages for 'object relational model'.

Page 190/1610 | < Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >

  • Call to a member function get_segment() error

    - by hogofwar
    I'm having this problem with this piece of PHP code (only a snippet of the core class):

        class Core {
            public function start() {
                require("funk/funks/libraries/uri.php");
                $this->uri = new uri();
                require("funk/core/loader.php");
                $this->load = new loader();
                if($this->uri->get_segment(1) != "" and file_exists("funk/pages/".$uri->get_segment(1).".php")){

    The best way I can explain it is that it is a class calling upon another class (uri.php), and I am getting the error: Fatal error: Call to a member function get_segment() on a non-object in /home/eeeee/public_html/private/funkyphp/funk/core/core.php on line 11 (the if($this->uri->get_segment(1) part). I'm having this problem a lot and it is really bugging me. The library code is:

        <?php
        class uri {
            private $server_path_info = '';
            private $segment = array();
            private $segments = 0;

            public function __construct() {
                $segment_temp = array();
                $this->server_path_info = preg_replace("/\?/", "", $_SERVER["PATH_INFO"]);
                $segment_temp = explode("/", $this->server_path_info);
                foreach ($segment_temp as $key => $seg) {
                    if (!preg_match("/([a-zA-Z0-9\.\_\-]+)/", $seg) || empty($seg))
                        unset($segment_temp[$key]);
                }
                foreach ($segment_temp as $k => $value) {
                    $this->segment[] = $value;
                }
                unset($segment_temp);
                $this->segments = count($this->segment);
            }

            public function segment_exists($id = 0) {
                $id = (int)$id;
                if (isset($this->segment[$id])) return true;
                else return false;
            }

            public function get_segment($id = 0) {
                $id--;
                $id = (int)$id;
                if ($this->segment_exists($id) === true) return $this->segment[$id];
                else return false;
            }
        }
        ?>
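
    Note that the failing if line mixes two different receivers: $this->uri (set in start()) and a bare $uri, which is undefined inside the method, so the second call is the one made on a non-object. A likely fix, going by the snippet alone:

        // Possible fix (a guess from the snippet): use the property, not an
        // undefined local $uri, when checking whether the page file exists.
        if ($this->uri->get_segment(1) != ""
            && file_exists("funk/pages/" . $this->uri->get_segment(1) . ".php")) {
            // load the page...
        }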

    Read the article

  • (Illustrator scripting) How to release objects to layers (sequence) but retain each object's name?

    - by Eugene
    Hi, I've asked the same question on the Adobe scripting forum, but there don't seem to be many forum users there, so I'm asking it here as well. I was writing my first Illustrator script to export all layers to PNG files, with the layer structure converted to a file structure. I found there were many layers my script couldn't detect, and it turned out they were not actually layers but objects (or groups). I found a layer menu command (Release to Layers), but it doesn't quite solve the problem. Initially I have this:

        layer 001
            object 01
                object 1
                object 2

    Release to Layers (Sequence) on object 01 gives:

        layer 001
            layer 91 (whatever layer number Illustrator assigns when Release to Layers is performed)
            layer 92
                object 1
            layer 93
                object 2

    Now I need to convert them to the following (so that each layer retains the name of its object):

        layer 001
            layer 01
                layer 1
                    object 1
                layer 2
                    object 2

    I have hundreds of layer 001s and a couple thousand object 01s, and wonder if anything can be done with a script to do this... If it's possible to detect objects in a script, maybe I could rewrite the Release to Layers functionality in script. Any help would be greatly appreciated.
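
    Illustrator's ExtendScript object model does expose groups (GroupItem) directly, so a script can walk them instead of using the menu command. A rough, untested sketch of the idea; function and variable names here are invented, and the nesting may need adjusting for your documents:

        // ExtendScript sketch: move each child of a group onto its own new
        // sublayer named after that child.
        var doc = app.activeDocument;

        function releaseGroupToNamedLayers(group, parentLayer) {
            // Iterate backwards: moving items re-indexes the collection.
            for (var i = group.pageItems.length - 1; i >= 0; i--) {
                var item = group.pageItems[i];
                var newLayer = parentLayer.layers.add(); // sublayer of the original layer
                newLayer.name = item.name || ("layer " + i); // fall back if unnamed
                item.move(newLayer, ElementPlacement.PLACEATBEGINNING);
            }
        }

        // Example usage: release one group onto sublayers of its own layer.
        // releaseGroupToNamedLayers(someGroup, someGroup.layer);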

    Read the article

  • How should I handle expected errors? eg. "username already exists"

    - by Pheter
    I am struggling to understand how I should design the error handling parts of my code. I recently asked a similar question about how I should go about returning server error codes to the user, e.g. 404 errors. I learnt that I should handle the error from within the current part of the application; seems simple enough. However, what should I do when I can't handle the error from the current link in the chain? For example, I may have a class that is used to manage authentication. One of its methods could be createUser($username, $password). Within that function, I need to determine if the username already exists. If this is true, how should I alert the calling code about this? Returning null instead of a user object is one way. But how do I then know what caused the error? How should I handle errors in such a way that calling code can easily find out what caused the error? Is there a design pattern commonly used for this kind of situation?
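
    One common answer (though not the only one) is to reserve exceptions for expected-but-exceptional outcomes like this, so the caller can distinguish causes by exception type rather than inspecting a null. A minimal PHP sketch; the class names and the usernameExists() helper are invented for illustration:

        // Hypothetical names for illustration only.
        class DuplicateUsernameException extends Exception {}

        class Authenticator
        {
            public function createUser($username, $password)
            {
                if ($this->usernameExists($username)) {
                    throw new DuplicateUsernameException("Username '$username' already exists");
                }
                return new User($username, $password);
            }
        }

        // Calling code can react to the specific cause:
        try {
            $user = $auth->createUser('pheter', 'secret');
        } catch (DuplicateUsernameException $e) {
            echo 'That username is taken.';
        }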

    Read the article

  • PHP classes, parse syntax errors when using 'var' to declare variables

    - by jon
    I am a C# guy trying to carry some of my OOP understanding over to PHP. I'm trying to make my first class, and I'm hitting a few hitches. Here is the beginning of the class:

        <?php
        require("Database/UserDB.php");

        class User {
            private var $uid;
            private var $username;
            private var $password;
            private var $realname;
            private var $email;
            private var $address;
            private var $phone;
            private var $projectArray;

            public function _construct($username) {
                $userArray = UserDB::GetUserArray($username);
                $uid = $userArray['uid'];
                $username = $userArray['username'];
                $realname = $userArray['realname'];
                $email = $userArray['email'];
                $phone = $userArray['phone'];
                $i = 1;
                $projectArray = UserDB::GetUserProjects($this->GetID());
                while($projectArray[$i] != null) {
                    $projectArray[$i] = new Project($projectArray[$i]);
                }

    UserDB.php is where I have all my static functions interacting with the database for this User class. I am getting errors when I use var, and I'm getting confused. I know I don't HAVE to use var, or declare the variables at all, but I feel it is better practice to do so. The error is "unexpected T_VAR, expecting T_VARIABLE". When I simply remove var from the declarations it works. Why is this?
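
    The reason: in PHP 5 the visibility keywords (public/private/protected) replace the old PHP 4 var rather than combine with it, so the parser stops at T_VAR. A corrected sketch of the declarations; note the constructor also needs two leading underscores, and $this-> so the assignments set properties rather than locals:

        class User {
            private $uid;        // visibility keyword alone; no 'var'
            private $username;

            public function __construct($username) {   // double underscore
                $userArray = UserDB::GetUserArray($username);
                $this->uid = $userArray['uid'];        // $this-> sets the property
                $this->username = $userArray['username'];
            }
        }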

    Read the article

  • Why Is Java Missing Access Specifiers?

    - by Tom Tresansky
    Does anyone understand why Java is missing:

        - An access specifier which allows access by the class and all subclasses, but NOT by other classes in the same package? ("Protected-minus")
        - An access specifier which allows access by the class, all classes in the same package, AND all classes in any sub-package? ("Default-plus")
        - An access specifier which adds classes in sub-packages to the entities currently allowed access by protected? ("Protected-plus")

    I wish I had more choices than protected and default. In particular, I'm interested in the Protected-plus option. Say I want to use a Builder/Factory patterned class to produce an object with many links to other objects. The constructors on the objects are all default access, because I want to force you to use the factory class to produce instances, in order to make sure the linking is done correctly. I want to group the factories in a sub-package to keep them all together and distinct from the objects they are instantiating; this just seems like a cleaner package structure to me. No can do, currently. I have to put the builders in the same package as the objects they are constructing, in order to gain access to the default members. But separating project.area.objects from project.area.objects.builders would be so nice. So why is Java lacking these options? And is there any way to fake it?
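
    For context: to the Java language and JVM, sub-packages are unrelated packages; project.area.objects.builders gets no special access to project.area.objects, which is why the usual workaround keeps the factory in the same package. A sketch of that compromise (two files shown together; Widget and WidgetFactory are invented names):

        // project/area/objects/Widget.java
        package project.area.objects;

        public class Widget {
            Widget() { }              // default access: only this exact package
        }

        // project/area/objects/WidgetFactory.java -- must live in the SAME
        // package; a .builders sub-package would get no special access.
        package project.area.objects;

        public class WidgetFactory {
            public static Widget create() {
                Widget w = new Widget();
                // ...wire up links to other objects here...
                return w;
            }
        }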

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents

        - Why this is the most important part...
        - Understanding the classification and standard rights model
        - Identifying business use cases
        - Creating an effective IRM classification model
        - One single classification across the entire business
        - A context for each and every possible granular use case
        - What makes a good context?
        - Deciding on the use of roles in the context
        - Reviewing the features and security for context roles
        - Summary

    Why this is the most important part...

    Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to have an IRM technology easily protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve:

        - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
        - K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
        - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin; he is still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions.

    Understanding the classification and standard rights model

    The goal of any successful IRM deployment is to balance the increase in security the technology brings against over-complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:

        - A group of related documents
        - The people that use the documents
        - The roles that these people perform
        - The rights that these people need to perform their role

    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.

    Identifying business use cases

    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article); then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.

    Creating an effective IRM classification model

    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

    One single classification across the entire business

    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

        - Document security classification decisions are simple. You only have one context to choose from!
        - User provisioning is simple; just make sure everyone has a role in the only context in the business.
        - Administration is very low; if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

    There are however some obvious downsides to this model:

        - All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
        - You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
        - Changing a user's role affects every single document ever secured.

    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single-context design. What if you created a context for each and every defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

        Q1FY2010 Restricted Internal - Engineering Group 1 - Research
        Q1FY2010 Restricted Internal - Engineering Group 1 - Design
        Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
        Q1FY2010 Restricted External - Engineering Group 1 - Research
        Q1FY2010 Restricted External - Engineering Group 1 - Design
        Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
        Q1FY2010 Confidential Internal - Engineering Group 1 - Research
        Q1FY2010 Confidential Internal - Engineering Group 1 - Design
        Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
        Q1FY2010 Confidential External - Engineering Group 1 - Research
        Q1FY2010 Confidential External - Engineering Group 1 - Design
        Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
        Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
        Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
        Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
        Q1FY2010 Top Secret External - Engineering Group 1 - Research
        Q1FY2010 Top Secret External - Engineering Group 1 - Design
        Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    The list above is already 18 contexts for a single engineering group; now multiply by 6 groups. And since the model is reviewed quarterly, you are creating/reviewing another 18 per group every 3 months, so after a year one group alone has accumulated 72 contexts. What would be the advantages of such a complex classification model?

        - You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
        - Your business may have very complex rights requirements, and mapping these directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant:

        - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
        - From an end user's perspective, life will be very confusing. Imagine if a user has rights in just 6 of these contexts. They may be able to print content from one but not another, or be able to edit content in 2 contexts but not the other 4. Such confusion at the end-user level causes frustration and resistance to the use of the technology.
        - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
        - Hard to understand who can do what with what. Imagine being the VP of engineering, and as part of an internal security audit you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable, and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However, there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

        - What is the topic?
        - Who are the legitimate contributors on this topic?
        - Who are the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure, and as such they cannot leak to your competitors; therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case; refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling the technology out wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify:

    Administrative roles

        - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
        - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
        - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document-related roles

        - Contributors: the people who create and edit documents in this context.
        - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
        - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in gathering the information for our first context is to look at the permissions people will have on sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:

        - Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
        - Printed documents are slower to distribute in comparison to their digital counterparts, so time-sensitive information in printed format may present a lower risk.
        - Print activity is audited, so you can monitor and react to users abusing print rights.

    Summary

    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • Sharing data between graphics and physics engine in the game?

    - by PolGraphic
    I'm writing a game engine that consists of a few modules. Two of them are the graphics engine and the physics engine. I wonder if it's a good solution to share data between them? The two ways (sharing or not) look like this:

    Without sharing data:

        GraphicsModel{
            //some data common to graphics and physics, like position
            //some graphics-only data,
            //like textures and detailed model vertices that physics doesn't need
        };

        PhysicsModel{
            //some data common to graphics and physics, like position
            //some physics-only data
            //usually my physics data contains A LOT more information than graphics data
        }

        engine3D->createModel3D(...);
        physicsEngine->createModel3D(...);
        //connect graphics and physics data
        //e.g. update the graphics model's position when the physics model's position changes

    I see two main problems:

        - A lot of redundant data (like two positions for both physics and graphics data)
        - Problems with updating data (I have to manually update graphics data when physics data changes)

    With sharing data:

        Model{
            //some data common to graphics and physics, like position
        };

        GraphicModel : public Model{
            //some graphics-only data,
            //like textures and detailed model vertices that physics doesn't need
        };

        PhysicsModel : public Model{
            //some physics-only data
            //usually my physics data contains A LOT more information than graphics data
        }

        model = engine3D->createModel3D(...);
        physicsEngine->assignModel3D(&model); //will cast to PhysicsModel for its purposes??
        //when physics changes anything (like position) in the model
        //(which it treats like a PhysicsModel), the position for graphics data
        //will change as well (because it's the same model)

    Problems here:

        - physicsEngine cannot create new objects, just "assign" existing ones from engine3D (somehow it looks less independent to me)
        - Casting data in the assignModel3D function
        - physicsEngine and graphicsEngine must be careful: they cannot delete data when they don't need it (because the other may still need it). But that's a rare situation. Moreover, they can just delete the pointer, not the object. Or we can assume that graphicsEngine will delete objects, and physicsEngine just the pointers to them.

    Which way is better? Which will produce more problems in the future? I like the second solution more, but I wonder why most graphics and physics engines prefer the first one (maybe because they normally make only a graphics or only a physics engine, and somebody else connects them in the game?). Do they have any more hidden pros & cons?
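
    For what it's worth, there is a third option the question doesn't list: composition instead of inheritance. Both engines hold a pointer to a small shared object containing only the genuinely common data, so neither engine knows the other's model type and no casting is needed. A rough C++ sketch of the idea (all names invented):

        #include <memory>

        // Shared by both engines; only the data they genuinely have in common.
        struct Transform {
            float position[3];
            float rotation[4]; // quaternion
        };

        struct GraphicsModel {
            std::shared_ptr<Transform> transform; // shared, not exclusively owned
            // textures, detailed vertices...
        };

        struct PhysicsModel {
            std::shared_ptr<Transform> transform; // same instance as the graphics side
            // mass, velocity, collision shape...
        };

        // Creation wires both models to one Transform; no casting, no manual sync,
        // and shared_ptr answers the "who deletes it?" question:
        //   auto t = std::make_shared<Transform>();
        //   GraphicsModel g{t}; PhysicsModel p{t};
        //   When physics writes p.transform->position, graphics reads the same data.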

    Read the article

  • SAP unveils Business Object 4.0: the new version of its BI solution integrates mobility, social networks and "in-memory"

    SAP unveils Business Object 4.0. The new version of its BI solution integrates mobility, social networks and "in-memory". SAP has just unveiled Business Object 4.0, the next version of its next-generation Business Intelligence and Enterprise Information Management (EIM) platform. [IMG]http://ftp-developpez.com/gordon-fowler/SAP/Slide-5-SAP-BusinessObjects-4.0-Event-Insight2.jpg[/IMG] After SAP ByDesign 2.6, its SaaS ERP suite (which arrives with a brand-new SDK), Business Object 4.0 is the second very big announcement of this year 2011 that Nicolas Sekkaki, Direc...

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, SQL Data Compare from Redgate, Mercurial repositories, TeamCity and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that and does a pull from hgrc and deploys the latest to a QA SQL Server, where regression and integration tests are run. When those pass, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release. Separate from the above, we have database hot fixes that need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION. If we were in a dedicated database-per-developer model, we could simply automatically push hgprod back to hgdev and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hotfixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we need to overwrite the repository with the "change" from the DEV SQL Server. My only workaround so far is to just have Ops assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than moving faster to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • '"Windows Backend object has no attribute 'iso-path' - see log for details.' error when trying to install

    - by Raja
    I am trying to install Ubuntu 11.10 from Windows XP. Everything went as before until the countdown clock reached zero, then I got "Windows Backend object has no attribute 'iso-path' - see log for details". It's done it three times now (formatting in between). The end of the log says:

        11-01 17:20 DEBUG TaskList: New task check_iso
        11-01 17:20 DEBUG TaskList: ### Running check_iso...
        11-01 17:20 DEBUG CommonBackend: Checking Y:\ubuntu\install\installation.iso
        11-01 17:20 DEBUG Distro: checking Ubuntu ISO Y:\ubuntu\install\installation.iso
        11-01 17:20 DEBUG Distro: wrong size: 8094031872 > 900000000
        11-01 17:20 DEBUG TaskList: ### Finished check_iso
        11-01 17:20 ERROR TaskList: 'WindowsBackend' object has no attribute 'iso_path'
        Traceback (most recent call last):
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\common\backend.py", line 579, in get_iso
          File "\lib\wubi\backends\common\backend.py", line 565, in use_iso
        AttributeError: 'WindowsBackend' object has no attribute 'iso_path'
        11-01 17:20 DEBUG TaskList: # Cancelling tasklist
        11-01 17:20 DEBUG TaskList: # Finished tasklist
        11-01 17:20 ERROR root: 'WindowsBackend' object has no attribute 'iso_path'
        Traceback (most recent call last):
          File "\lib\wubi\application.py", line 58, in run
          File "\lib\wubi\application.py", line 130, in select_task
          File "\lib\wubi\application.py", line 205, in run_cd_menu
          File "\lib\wubi\application.py", line 120, in select_task
          File "\lib\wubi\application.py", line 158, in run_installer
          File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
          File "\lib\wubi\backends\common\backend.py", line 579, in get_iso
          File "\lib\wubi\backends\common\backend.py", line 565, in use_iso
        AttributeError: 'WindowsBackend' object has no attribute 'iso_path'

    Read the article

  • Why do meshes show up as bones in the Model class?

    - by Itamar Marom
    Right now I'm working on a 3D game and I've come across something very weird. When I created the model in Blender, I added an armature named "MyBone" to the stage and attached a cube ("MyCube") to it, so that when I move the armature, the cube moves with it. I exported this as an FBX and loaded it as a Model object. What I expected to see was: (screenshot) But what I got was this: (screenshot) I'm really confused. Why is the mesh I created showing up in the bone list? And what's Root Node? Here are the .blend and .fbx files: here or here. Thanks.
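
    Background that may explain the symptom: the XNA FBX importer turns every node in the scene graph into a ModelBone, including the transform node of each mesh and a generated scene root ("Root Node"), while the meshes themselves land in Model.Meshes with a ParentBone pointing back at their node. A small sketch to inspect the mapping, assuming a loaded Model named model:

        // Dump the scene-graph nodes ("bones") and the meshes attached to them.
        foreach (ModelBone bone in model.Bones)
            System.Diagnostics.Debug.WriteLine("Bone: " + bone.Name);

        foreach (ModelMesh mesh in model.Meshes)
            System.Diagnostics.Debug.WriteLine(
                "Mesh: " + mesh.Name + " attached to bone: " + mesh.ParentBone.Name);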

    Read the article

  • Render an image with layers for shadows/reflections, object and ground in 3D Studio Max?

    - by Bernd Plontsch
    I have a scene with a simple object standing on the ground in the center. This object has shadows and reflections on the ground. How can I render an image containing 3 separate layers for:

        - The object
        - The ground
        - The reflection / shadow on the ground

    Which format do I use for this? (It should include all 3 layers, and I should be able to enable/disable them in Photoshop.) How do I define or prepare those layers to be rendered as image layers?

    Read the article

  • What should I call the process of converting an object to a string?

    - by shabbychef
    We are having a game of 'semantic football' in the office over this matter: I am writing a method for an object which will represent the object as a string. That string should be such that when typed (more likely, cut and pasted) into the interpreter window (I will keep the language name out of this for now), it will produce an object which is, for our purposes, identical to the one upon which the method was called. There is a spirited discussion over the 'best' name for this method. The terms pickle, serialize, deflate, etc. have been proposed. However, it seems that those terms assume some process for the de-pickling (unserialization, etc.) that is not necessarily the language interpreter itself. That is, they do not specifically refer to the case where strings of valid code are produced. This is closer to a quine, but we are reproducing the object, not the code, so this is not quite right. Any suggestions?
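
    One established precedent, offered here only as a naming data point: Python calls exactly this convention repr, with the guideline that eval(repr(x)) should rebuild an equivalent object where possible. A quick Python illustration:

        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __repr__(self):
                # Emit a string that, pasted into the interpreter, rebuilds the object.
                return f"Point({self.x!r}, {self.y!r})"

        p = Point(3, 4)
        clone = eval(repr(p))           # Point(3, 4)
        assert (clone.x, clone.y) == (p.x, p.y)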

    Read the article

  • How to make an object move again after being stopped by collision in Unity?

    - by Matthew Underwood
    I have a player object whose position is always centered on the main camera's viewport. This object has a Rigidbody 2D, a box collider and a circle collider. The player moves around a level; the level has a polygon collider attached. I move the camera until the object hits the collider, which stops the movement of the camera by setting its speed to 0. The problem happens when I want to move the camera / player object away from the collider: as the speed is already 0, it cannot move away from the collider. The script attached to the player object checks for collisions and sets the speed to 0 on the main camera's test script:

        using UnityEngine;
        using System.Collections;

        public class move : MonoBehaviour
        {
            public float speed;
            public test testing;

            // Use this for initialization
            void Start()
            {
                speed = 10F;
                testing = Camera.main.GetComponent<test>();
            }

            // Update is called once per frame
            void FixedUpdate()
            {
                Vector3 p = Camera.main.ViewportToWorldPoint(new Vector3(0.5F, 0.5F, Camera.main.nearClipPlane));
                transform.position = new Vector3(p.x, p.y, -1);
            }

            void OnCollisionEnter2D(Collision2D col)
            {
                testing.speed = 0;
            }

            void OnCollisionExit2D(Collision2D col)
            {
                testing.speed = 10F;
            }
        }

    This is the script attached to the main camera; just a simple script that changes the camera's position:

        using UnityEngine;
        using System.Collections;

        public class test : MonoBehaviour
        {
            public float speed;
            public float translationY;
            public float translationX;

            // Use this for initialization
            void Start()
            {
                speed = 10F;
            }

            void FixedUpdate()
            {
                translationY = Input.GetAxis("Vertical") * speed * Time.deltaTime;
                translationX = Input.GetAxis("Horizontal") * speed * Time.deltaTime;
                transform.Translate(translationX, translationY, 0);
            }
        }

    The player object isn't kinematic and has a fixed angle; the colliders aren't triggers, and the polygon collider isn't a trigger either. The player is the red square; the collider is the pink area.

    EDIT: With the latest change to the collider set-up for the player, if the X speed is disabled the player no longer moves into the side of the polygon collider, which is good, but it still can't move away from it. And moving down moves inside the collider.
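
    One possible way out (a sketch, not the only fix) is to drop the speed-toggling entirely: drive the player through its Rigidbody2D and let the physics solver block movement into the wall while still accepting movement away from it, then have the camera follow the player instead of the player following the camera:

        using UnityEngine;

        // Sketch: move the player with physics instead of zeroing a camera speed.
        public class PlayerMove : MonoBehaviour
        {
            public float speed = 10F;
            private Rigidbody2D body;

            void Start()
            {
                body = GetComponent<Rigidbody2D>();
            }

            void FixedUpdate()
            {
                Vector2 input = new Vector2(Input.GetAxis("Horizontal"),
                                            Input.GetAxis("Vertical"));
                // The solver resolves contacts each step: it stops the body at
                // the wall but accepts a velocity pointing away from it, so
                // movement never "latches" at zero.
                body.velocity = input * speed;
            }
        }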

    Read the article

  • Let a model instance choose the appropriate view class using a category. Is it good design?

    - by Denis Mikhaylov
    Assume I have an abstract base model class called MoneySource, and two concrete subclasses, BankCard and CellularAccount. In MoneySourceListViewController I want to display a list of them, but with a different ListItemView for each MoneySource subclass. What if I define a category on MoneySource:

        @interface MoneySource (ListItemView)
        - (Class)listItemViewClass;
        @end

    And then override it for each concrete subclass of MoneySource, returning the suitable view class:

        @implementation CellularAccount (ListItemView)
        - (Class)listItemViewClass
        {
            return [CellularAccountListView class];
        }
        @end

        @implementation BankCard (ListItemView)
        - (Class)listItemViewClass
        {
            return [BankCardListView class];
        }
        @end

    so I can ask the model object about its view, without violating MVC principles, and avoiding class introspection or if constructions. Thank you!

    Read the article

  • Removing/Adding a specific variable from an object inside javascript array? [migrated]

    - by hustlerinc
    I have a map array with objects stuffed with variables, looking like this:

        var map = [
            [{ground:0, object:1}, {ground:0, item:2}, {ground:0, object:1, item:2}],
            [{ground:0, object:1}, {ground:0, item:2}, {ground:0, object:1, item:2}]
        ];

    Now I would like to be able to delete and add one of the variables, like item:2.

        1) What would I use to delete specific variables?
        2) What would I use to add specific variables?

    I just need 2 short lines of code; the rest, like detecting if and where to execute, I've figured out. I've tried delete map[i][j].item; with no results. Help appreciated.
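
    For plain object properties like these, the two lines are the delete operator and a simple assignment; a quick sketch against the map above (indices chosen arbitrarily):

        delete map[0][2].item;   // removes the 'item' property from that tile object
        map[0][0].item = 2;      // adds (or overwrites) an 'item' property

        console.log(map[0][2].item);        // undefined
        console.log('item' in map[0][2]);   // false

    If delete appeared to do nothing, it is worth checking that i and j actually point at the intended element, since delete on a valid property reference does work.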

    Read the article

  • "Access is denied" JavaScript error when trying to access the document object of a programmatically-

    - by Bungle
    I have a project in which I need to create an <iframe> element using JavaScript and append it to the DOM. After that, I need to insert some content into the <iframe>. It's a widget that will be embedded in third-party websites. I don't set the "src" attribute of the <iframe> since I don't want to load a page; rather, it is used to isolate/sandbox the content that I insert into it so that I don't run into CSS or JavaScript conflicts with the parent page. I'm using JSONP to load some HTML content from a server and insert it in this <iframe>. I have this working fine, with one serious exception: if the document.domain property is set in the parent page (which it may be in certain environments in which this widget is deployed), Internet Explorer (probably all versions, but I've confirmed in 6, 7, and 8) gives me an "Access is denied" error when I try to access the document object of this <iframe> I've created. It doesn't happen in any other browsers I've tested in (all major modern ones). This makes some sense, since I'm aware that Internet Explorer requires you to set the document.domain of all windows/frames that will communicate with each other to the same value. However, I'm not aware of any way to set this value on a document that I can't access. Is anyone aware of a way to do this - somehow set the document.domain property of this dynamically created <iframe>? Or am I not looking at it from the right angle - is there another way to achieve what I'm going for without running into this problem? I do need to use an <iframe> in any case, as the isolated/sandboxed window is crucial to the functionality of this widget. Here's my test code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
            <title>Document.domain Test</title>
            <script type="text/javascript">
                document.domain = 'onespot.com'; // set the page's document.domain
            </script>
        </head>
        <body>
            <p>This is a paragraph above the &lt;iframe&gt;.</p>
            <div id="placeholder"></div>
            <p>This is a paragraph below the &lt;iframe&gt;.</p>
            <script type="text/javascript">
                var iframe = document.createElement('iframe'), doc; // create <iframe> element
                document.getElementById('placeholder').appendChild(iframe); // append <iframe> element to the placeholder element
                setTimeout(function() { // set a timeout to give browsers a chance to recognize the <iframe>
                    doc = iframe.contentWindow || iframe.contentDocument; // get a handle on the <iframe> document
                    alert(doc);
                    if (doc.document) { // HEREIN LIES THE PROBLEM
                        doc = doc.document;
                    }
                    doc.body.innerHTML = '<h1>Hello!</h1>'; // add an element
                }, 10);
            </script>
        </body>
        </html>

    I've hosted it at: http://troy.onespot.com/static/access_denied.html

    As you'll see if you load this page in IE, at the point that I call alert(), I do have a handle on the window object of the <iframe>; I just can't get any deeper, into its document object. Thanks very much for any help or suggestions! I'll be indebted to whomever can help me find a solution to this.
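
    One widely used workaround for this legacy-IE behavior (hedged: it targets old IE specifically, and the domain string must match the parent's) is to give the frame a javascript: src that sets document.domain from inside the frame before the parent scripts into it:

        // Sketch: initialize the frame's document from the inside via a
        // javascript: URL, so its document.domain matches the parent's
        // before any cross-frame scripting happens.
        var iframe = document.createElement('iframe');
        iframe.src = "javascript:void((function(){" +
                     "document.open();" +
                     "document.domain='onespot.com';" + // must match the parent page
                     "document.close();})())";
        document.getElementById('placeholder').appendChild(iframe);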

    Read the article

  • JPA thinks I'm deleting a detached object

    - by Steve
    So, I've got a DAO that I used to load and save my domain objects using JPA. I finally managed to get the transaction stuff working (with a bunch of help from the folks here...), now I've got another issue. In my test case, I call my DAO to load a domain object with a given id, check that it got loaded and then call the same DAO to delete the object I just loaded. When I do that I get the following:

        java.lang.IllegalArgumentException: Removing a detached instance mil.navy.ndms.conops.common.model.impl.jpa.Group#10
            at org.hibernate.ejb.event.EJB3DeleteEventListener.performDetachedEntityDeletionCheck(EJB3DeleteEventListener.java:45)
            at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:108)
            at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:74)
            at org.hibernate.impl.SessionImpl.fireDelete(SessionImpl.java:794)
            at org.hibernate.impl.SessionImpl.delete(SessionImpl.java:772)
            at org.hibernate.ejb.AbstractEntityManagerImpl.remove(AbstractEntityManagerImpl.java:253)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:600)
            at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:180)
            at $Proxy27.remove(Unknown Source)
            at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao.delete(GroupDao.java:499)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:600)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy28.delete(Unknown Source)
            at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDaoTest.testGroupDaoSave(GroupDaoTest.java:89)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:600)
            at junit.framework.TestCase.runTest(TestCase.java:164)
            at junit.framework.TestCase.runBare(TestCase.java:130)
            at junit.framework.TestResult$1.protect(TestResult.java:106)
            at junit.framework.TestResult.runProtected(TestResult.java:124)
            at junit.framework.TestResult.run(TestResult.java:109)
            at junit.framework.TestCase.run(TestCase.java:120)
            at junit.framework.TestSuite.runTest(TestSuite.java:230)
            at junit.framework.TestSuite.run(TestSuite.java:225)
            at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)

    Now given that I'm using the same DAO instance, and I've not changed EntityManagers (unless Spring does so without letting me know), how can this be a detached object? My DAO code looks like this:

        public class GenericJPADao implements IWebDao, IDao, IDaoUtil {

            private static Logger logger = Logger.getLogger (GenericJPADao.class);

            protected Class voClass;

            @PersistenceContext(unitName = "CONOPS_PU")
            protected EntityManagerFactory emf;

            @PersistenceContext(unitName = "CONOPS_PU")
            protected EntityManager em;

            public GenericJPADao() {
                super ( );
                ParameterizedType genericSuperclass = (ParameterizedType) getClass ( ).getGenericSuperclass ( );
                this.voClass = (Class) genericSuperclass.getActualTypeArguments ( )[1];
            }

            ...

            public void delete (INTFC modelObj, EntityManager em) {
                em.remove (modelObj);
            }

            @SuppressWarnings("unchecked")
            public INTFC findById (Long id) {
                return ((INTFC) em.find (voClass, id));
            }
        }

    The test case code looks like:

        IGroup loadedGroup = dao.findById (group.getId ( ));
        assertNotNull (loadedGroup);
        assertEquals (group.getId ( ), loadedGroup.getId ( ));
        dao.delete (loadedGroup); // - This generates the above exception
        loadedGroup = dao.findById (group.getId ( ));
        assertNull(loadedGroup);

    Can anyone tell me what I'm doing wrong here? Thanks
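
    A common cause with Spring's shared EntityManager proxy (visible in the trace as SharedEntityManagerCreator) is that each DAO call runs in its own transaction and persistence context, so the entity returned by findById is already detached by the time delete runs. The usual remedies are running the whole test method in one transaction, or re-attaching before removing; a sketch of the latter, adapted to the interfaces above (as an aside, an EntityManagerFactory field is normally injected with @PersistenceUnit rather than @PersistenceContext):

        public void delete (INTFC modelObj, EntityManager em) {
            // Re-attach the instance if this persistence context doesn't manage it.
            em.remove (em.contains (modelObj) ? modelObj : em.merge (modelObj));
        }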

    Read the article

  • JS: using 'var me = this' to reference an object instead of using a global array

    - by Marco Demaio
    The example below is just an example. I know that I don't need an object to show an alert box when the user clicks on div blocks, but it's a simple example to explain a situation that frequently happens when writing JS code. In the example below I use a globally visible array of objects to keep a reference to each newly created HelloObject; this way, events fired when clicking on a div block can use the reference in the array to call the HelloObject's public function hello(). First have a look at the code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
        <title>Test</title>
        <script type="text/javascript">

        /*****************************************************
        Just a cross-browser append event function; you don't
        need to understand this one to answer my question
        *****************************************************/
        function AppendEvent(html_element, event_name, event_function)
        {
            if(html_element)
            {
                if(html_element.attachEvent)
                    html_element.attachEvent("on" + event_name, event_function);
                else if(html_element.addEventListener)
                    html_element.addEventListener(event_name, event_function, false);
            }
        }

        /******************************************************
        Just a test object
        ******************************************************/
        var helloobjs = [];

        var HelloObject = function HelloObject(div_container)
        {
            //Adding this object to the helloobjs array
            var id = helloobjs.length;
            helloobjs[id] = this;

            //Appending click event to show the hello window
            AppendEvent(div_container, 'click', function() {
                helloobjs[id].hello(); //THIS WORKS!
            });

            /***************************************************/
            this.hello = function() { alert('hello'); }
        }

        </script>
        </head><body>
        <div id="one">click me</div>
        <div id="two">click me</div>
        <script type="text/javascript">
        var t = new HelloObject(document.getElementById('one'));
        var t = new HelloObject(document.getElementById('two'));
        </script>
        </body></html>

    In order to achieve the same result I could simply replace the code:

        //Appending click event to show the hello window
        AppendEvent(div_container, 'click', function() {
            helloobjs[id].hello(); //THIS WORKS!
        });

    with this code:

        //Appending click event to show the hello window
        var me = this;
        AppendEvent(div_container, 'click', function() {
            me.hello(); //THIS WORKS TOO, AND THE GLOBAL helloobjs ARRAY BECOMES SUPERFLUOUS!
        });

    which would make the helloobjs array superfluous. My question is: does this second option, in your opinion, create memory leaks in IE or strange circular references that might lead to browsers slowing down or breaking? I don't know how to explain it, but coming from a background as a C/C++ coder, doing it this way sounds like some sort of circular reference that might break memory at some point. I also read on the internet about the IE closures memory leak issue http://jibbering.com/faq/faq_notes/closures.html (I don't know if it was fixed in IE7 and, if yes, I hope it does not come out again in IE8). Thanks
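
    For what it's worth, in old IE the leak pattern described on the page linked above isn't the var me = this alias itself; it's the handler closure keeping div_container alive while the DOM element's event reference keeps the closure alive. A hedged sketch of the usual mitigation, applied to the constructor above:

        var HelloObject = function HelloObject(div_container)
        {
            var me = this;
            AppendEvent(div_container, 'click', function() { me.hello(); });
            div_container = null; // break the closure -> DOM element link for old IE
            this.hello = function() { alert('hello'); };
        }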

    Read the article

  • EF Code first + Delete Child Object from Parent?

    - by ebb
    I have a one-to-many relationship between my table Case and my other table CaseReplies. I'm using EF Code First and now want to delete a CaseReply from a Case object; however, it seems impossible to do such a thing, because EF just tries to remove the CaseId from the specific CaseReply record and not the record itself. In short: Case just removes the relationship between itself and the CaseReply; it does not delete the CaseReply. My code:

        // Case.cs (Case Object)
        public class Case
        {
            [Key]
            public int Id { get; set; }
            public string Topic { get; set; }
            public string Message { get; set; }
            public DateTime Date { get; set; }
            public Guid UserId { get; set; }

            public virtual User User { get; set; }
            public virtual ICollection<CaseReply> Replies { get; set; }
        }

        // CaseReply.cs (CaseReply Object)
        public class CaseReply
        {
            [Key]
            public int Id { get; set; }
            public string Message { get; set; }
            public DateTime Date { get; set; }
            public int CaseId { get; set; }
            public Guid UserId { get; set; }

            public virtual User User { get; set; }
            public virtual Case Case { get; set; }
        }

        // RepositoryBase.cs
        public class RepositoryBase<T> : IRepository<T> where T : class
        {
            public IDbContext Context { get; private set; }
            public IDbSet<T> ObjectSet { get; private set; }

            public RepositoryBase(IDbContext context)
            {
                Contract.Requires(context != null);
                Context = context;
                if (context != null)
                {
                    ObjectSet = Context.CreateDbSet<T>();
                    if (ObjectSet == null)
                    {
                        throw new InvalidOperationException();
                    }
                }
            }

            public IRepository<T> Remove(T entity)
            {
                ObjectSet.Remove(entity);
                return this;
            }

            public IRepository<T> SaveChanges()
            {
                Context.SaveChanges();
                return this;
            }
        }

        // CaseRepository.cs
        public class CaseRepository : RepositoryBase<Case>, ICaseRepository
        {
            public CaseRepository(IDbContext context) : base(context)
            {
                Contract.Requires(context != null);
            }

            public bool RemoveCaseReplyFromCase(int caseId, int caseReplyId)
            {
                Case caseToRemoveReplyFrom = ObjectSet.Include(x => x.Replies).FirstOrDefault(x => x.Id == caseId);
                var delete = caseToRemoveReplyFrom.Replies.FirstOrDefault(x => x.Id == caseReplyId);
                caseToRemoveReplyFrom.Replies.Remove(delete);
                return Context.SaveChanges() >= 1;
            }
        }

    Thanks in advance.
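
    One approach that usually works in this situation is to delete the child through its own set, so EF issues a DELETE for the row instead of trying to null out the non-nullable CaseId. The sketch below assumes Context.CreateDbSet<CaseReply>() is available on your IDbContext, as RepositoryBase suggests:

        // Sketch: remove the CaseReply row itself rather than detaching it
        // from the parent's Replies collection.
        public bool RemoveCaseReplyFromCase(int caseId, int caseReplyId)
        {
            IDbSet<CaseReply> replies = Context.CreateDbSet<CaseReply>();
            CaseReply reply = replies.FirstOrDefault(r => r.CaseId == caseId && r.Id == caseReplyId);
            if (reply == null)
            {
                return false;
            }
            replies.Remove(reply);
            return Context.SaveChanges() >= 1;
        }

    The other common option is an identifying relationship: make CaseId part of CaseReply's composite primary key, after which removing a reply from case.Replies deletes the row itself.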

    Read the article

  • Only root object on request is deserialized when using Message.GetBody<>

    - by user324627
    I am attempting to create a WCF service that accepts any input (Action="*") and then deserializes the message after determining its type. For the purposes of testing deserialization I am currently hard-coding the type for the test service. I get no errors from the deserialization process, but only the outer object is populated after deserialization occurs; all inner fields are null. I can process the same request against the original WCF service successfully. I am deserializing this way, where knownTypes is a list of the expected types:

        DataContractSerializer ser = new DataContractSerializer(new createEligibilityRuleSet().GetType(), knownTypes);
        createEligibilityRuleSet newReq = buf.CreateMessage().GetBody<createEligibilityRuleSet>(ser);

    Here are the class and sub-classes of the request object. These classes were generated by svcutil using a top-down approach from an existing WSDL. I have tried replacing the XmlTypeAttributes with DataContracts and the XmlElements with DataMembers, with no difference. It is the instance of CreateEligibilityRuleSetSvcRequest on the createEligibilityRuleSet object that is null. I have included the request, retrieved from the Message at runtime, at the bottom.

        /// <remarks/>
        [System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "3.0.4506.2152")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true, Namespace = "http://RulesEngineServicesLibrary/RulesEngineServices")]
        public partial class createEligibilityRuleSet
        {
            private CreateEligibilityRuleSetSvcRequest requestField;

            /// <remarks/>
            [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified, IsNullable = true, Order = 0)]
            public CreateEligibilityRuleSetSvcRequest request
            {
                get { return this.requestField; }
                set { this.requestField = value; }
            }
        }

        /// <remarks/>
        [System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "3.0.4506.2152")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(Namespace = "http://RulesEngineServicesLibrary")]
        public partial class CreateEligibilityRuleSetSvcRequest : RulesEngineServicesSvcRequest
        {
            private string requestField;

            /// <remarks/>
            [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified, Order = 0)]
            public string request
            {
                get { return this.requestField; }
                set { this.requestField = value; }
            }
        }

        /// <remarks/>
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(CreateEligibilityRuleSetSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(ApplyMemberEligibilitySvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(CreateCompletionCriteriaRuleSetSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(CopyRuleSetSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(DeleteRuleSetByIDSvcRequest))]
        [System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "3.0.4506.2152")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(Namespace = "http://RulesEngineServicesLibrary")]
        public partial class RulesEngineServicesSvcRequest : ServiceRequest
        {
        }

        /// <remarks/>
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(RulesEngineServicesSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(CreateEligibilityRuleSetSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(ApplyMemberEligibilitySvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(CreateCompletionCriteriaRuleSetSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(CopyRuleSetSvcRequest))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(DeleteRuleSetByIDSvcRequest))]
        [System.CodeDom.Compiler.GeneratedCodeAttribute("svcutil", "3.0.4506.2152")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(Namespace = "http://FELibrary")]
        public partial class ServiceRequest
        {
            private string applicationIdField;

            /// <remarks/>
            [System.Xml.Serialization.XmlElementAttribute(Form = System.Xml.Schema.XmlSchemaForm.Unqualified, Order = 0)]
            public string applicationId
            {
                get { return this.applicationIdField; }
                set { this.applicationIdField = value; }
            }
        }

    The request from the client comes in on the Message body as below (retrieved from the Message at runtime):

        <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:rul="http://RulesEngineServicesLibrary/RulesEngineServices">
            <soap:Header/>
            <soap:Body>
                <rul:createEligibilityRuleSet>
                    <request>
                        <applicationId>test</applicationId>
                        <request>Perf Rule Set1</request>
                    </request>
                </rul:createEligibilityRuleSet>
            </soap:Body>
        </soap:Envelope>
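
    One thing worth checking, offered as a guess based on the attributes shown rather than a confirmed diagnosis: these generated classes carry XmlSerializer attributes (XmlType, and XmlElement with Form = Unqualified), which DataContractSerializer ignores. DataContractSerializer expects namespace-qualified child elements, so it can match the root element but none of the members, which fits the symptom of a populated outer object with null inner fields. A sketch that reads the same body with XmlSerializer instead ("buf" is assumed to be the MessageBuffer from the question):

        // Sketch: deserialise the message body with XmlSerializer, which
        // honours the svcutil-generated XML attributes. Requires System.Xml,
        // System.Xml.Serialization and System.ServiceModel.Channels.
        XmlSerializer ser = new XmlSerializer(typeof(createEligibilityRuleSet));
        using (XmlDictionaryReader reader = buf.CreateMessage().GetReaderAtBodyContents())
        {
            createEligibilityRuleSet newReq = (createEligibilityRuleSet)ser.Deserialize(reader);
            // newReq.request and its inner fields should now be populated.
        }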

    Read the article

  • Reading serialised object from file

    - by nico
    Hi everyone. I'm writing a little Java program (it's an ImageJ plugin, but the problem is not specifically ImageJ related) and I have a problem, most probably due to the fact that I never really programmed in Java before... So, I have a Vector of Vectors that I'm trying to save to a file and read back. The variable is defined as:

        Vector<Vector<myROI>> ROIs = new Vector<Vector<myROI>>();

    where myROI is a class that I previously defined. Now, to write the vector to a file I use:

        void saveROIs()
        {
            SaveDialog save = new SaveDialog("Save ROIs...", imp.getTitle(), ".xroi");
            String name = save.getFileName();
            if (name == null)
                return;
            String dir = save.getDirectory();

            try
            {
                FileOutputStream fos = new FileOutputStream(dir + name);
                ObjectOutputStream oos = new ObjectOutputStream(fos);
                oos.writeObject(ROIs);
                oos.close();
            }
            catch (Exception e)
            {
                IJ.log(e.toString());
            }
        }

    This correctly generates a binary file containing (I suppose) the object ROIs. Now, I use very similar code to read the file:

        void loadROIs()
        {
            OpenDialog open = new OpenDialog("Load ROIs...", imp.getTitle(), ".xroi");
            String name = open.getFileName();
            if (name == null)
                return;
            String dir = open.getDirectory();

            try
            {
                FileInputStream fin = new FileInputStream(dir + name);
                ObjectInputStream ois = new ObjectInputStream(fin);
                ROIs = (Vector<Vector<myROI>>) ois.readObject(); // This gives an error
                ois.close();
            }
            catch (Exception e)
            {
                IJ.log(e.toString());
            }
        }

    But this function does not work. First, I get a warning:

        warning: [unchecked] unchecked cast
        found   : java.lang.Object
        required: java.util.Vector<java.util.Vector<myROI>>
                ROIs = (Vector<Vector<myROI>>) ois.readObject();
                                                             ^

    I Googled for that and saw that I can suppress it by prepending @SuppressWarnings("unchecked"), but this just makes things worse, as I get an error:

        <identifier> expected
                ROIs = (Vector<Vector<myROI>>) ois.readObject();
                ^

    In any case, if I omit @SuppressWarnings and ignore the warning, the object is not read and an exception is thrown:

        java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: myROI

    Again Google tells me myROI needs to implement Serializable. I tried just adding implements Serializable to the class definition, but it is not sufficient. Can anyone give me some hints on how to proceed in this case? Also, how do I get rid of the typecast warning?
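
    Two hedged hints, sketched below with hypothetical field names. First, implements Serializable on myROI only works if every non-transient field of myROI is itself serialisable; any field that is not (an ImageJ or AWT handle, say) must be marked transient and rebuilt after reading. Second, @SuppressWarnings is an annotation, so it must be attached to a declaration (a method or a variable declaration), not prepended to a bare assignment statement, which is what produced the "<identifier> expected" error.

        import java.io.ObjectInputStream;
        import java.io.Serializable;
        import java.util.Vector;

        // Sketch with hypothetical fields: primitives and other Serializable
        // types serialise as-is; non-serialisable fields must be transient.
        public class myROI implements Serializable {
            private static final long serialVersionUID = 1L;

            private int x, y, width, height;      // fine: primitives
            private transient Object imageHandle; // assumed non-serialisable

            // @SuppressWarnings sits on the method declaration, so the
            // unchecked cast no longer warns.
            @SuppressWarnings("unchecked")
            static Vector<Vector<myROI>> readROIs(ObjectInputStream ois) throws Exception {
                return (Vector<Vector<myROI>>) ois.readObject();
            }
        }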

    Read the article

  • Using MVC2 to update an Entity Framework v4 object with foreign keys fails

    - by jbjon
    With the following simple relational database structure: an Order has one or more OrderItems, and each OrderItem has one OrderItemStatus. Entity Framework v4 is used to communicate with the database, and entities have been generated from this schema. The Entities connection happens to be called EnumTestEntities in the example. The trimmed-down version of the Order repository class looks like this:

        public class OrderRepository
        {
            private EnumTestEntities entities = new EnumTestEntities();

            // Query Methods
            public Order Get(int id)
            {
                return entities.Orders.SingleOrDefault(d => d.OrderID == id);
            }

            // Persistence
            public void Save()
            {
                entities.SaveChanges();
            }
        }

    An MVC2 app uses Entity Framework models to drive the views. I'm using the EditorFor feature of MVC2 to drive the Edit view. When it comes to POSTing back any changes to the model, the following code is called:

        [HttpPost]
        public ActionResult Edit(int id, FormCollection formValues)
        {
            // Get the current Order out of the database by ID
            Order order = orderRepository.Get(id);
            var orderItems = order.OrderItems;

            try
            {
                // Update the Order from the values posted from the View
                UpdateModel(order, "");

                // Without the ValueProvider suffix it does not attempt to update the order items
                UpdateModel(order.OrderItems, "OrderItems.OrderItems");

                // All the Save() does is call SaveChanges() on the database context
                orderRepository.Save();

                return RedirectToAction("Details", new { id = order.OrderID });
            }
            catch (Exception e)
            {
                return View(order); // Inserted while debugging
            }
        }

    The second call to UpdateModel has a ValueProvider suffix which matches the auto-generated HTML input name prefixes that MVC2 generated for the foreign-key collection of OrderItems within the view. The call to SaveChanges() on the database context after updating the OrderItems collection of an Order using UpdateModel generates the following exception:

        "The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted."

    When debugging through this code, I can see that the EntityKeys are not null and seem to hold the values they should. This still happens when you are not changing any of the extracted Order details from the database. Also, the entity connection to the database doesn't change between the act of Getting and the SaveChanges, so it doesn't appear to be a context issue either. Any ideas what might be causing this problem? I know EF4 has done work on foreign-key properties, but can anyone shed any light on how to use EF4 and MVC2 to make things easy to update, rather than having to populate each property manually? I had hoped the simplicity of EditorFor and DisplayFor would also extend to controllers updating data. Thanks
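
    One workaround worth trying, sketched under the assumption that the view renders the item inputs with an indexed OrderItems.OrderItems[i] prefix (adjust to whatever EditorFor actually emitted): bind each existing child through its own prefix instead of binding the whole collection, so the binder updates the tracked entities in place and EF4 never sees an orphaned OrderItem with a nulled foreign key.

        // Sketch: replaces the second UpdateModel call in the Edit action.
        // Each tracked OrderItem is updated in place rather than replaced.
        int i = 0;
        foreach (OrderItem item in order.OrderItems)
        {
            UpdateModel(item, "OrderItems.OrderItems[" + i + "]");
            i++;
        }
        orderRepository.Save();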

    Read the article

  • How much abstraction is too much?

    - by Daniel Bingham
    In an object-oriented program: how much abstraction is too much? How much is just right?

    I have always been a nuts-and-bolts kind of guy. I understood the concept behind high levels of encapsulation and abstraction, but always felt instinctively that adding too much would just confuse the program. I always tried to shoot for an amount of abstraction that left no empty classes or layers. And when in doubt, instead of adding a new layer to the hierarchy, I would try to fit something into the existing layers.

    However, recently I've been encountering more highly abstracted systems: systems where everything that could require a representation later in the hierarchy gets one up front. This leads to a lot of empty layers, which at first seems like bad design. However, on second thought I've come to realize that leaving those empty layers gives you more places to hook into in the future without much refactoring. It gives you greater ability to add new functionality on top of the old without doing nearly as much work to adjust the old.

    One risk of this seems to be that you could get the layers you need wrong. In that case one would still wind up needing to do substantial refactoring to extend the code, and would still have a ton of never-used layers. But depending on how much time you spend coming up with the initial abstractions, the chance of screwing it up, and the time that could be saved later if you get it right, it may still be worth it to try.

    The other risk I can think of is the risk of overdoing it and never needing all the extra layers. But is that really so bad? Are extra class layers really so expensive that it is much of a loss if they are never used? The biggest expense and loss here would be the time spent up front coming up with the layers. But much of that time might still be saved later, when one can work with the abstracted code rather than more low-level code.

    So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot? Are there any dependable rules of thumb you've found in the course of your career that help you judge the amount of abstraction needed?

    Read the article
