Search Results

Search found 56342 results on 2254 pages for 'versant object database'.


  • TACACS+ integrated with LDAP or database. Which is better?

    - by chingupt
    We are setting up TACACS+ in our network, which is a mix of Cisco APs and other brands. However, we have a centralized management system that allows our customers to configure services, so we would like to set up a TACACS+ server integrated with some central system. We have two options: integrate with a central database server that stores the user configuration, or integrate with an LDAP server. Which is the better solution? Can you please suggest the pros and cons of using LDAP or a database? TIA, Sachin

    Read the article

  • Does LDAP fit the role of a user database for an application?

    - by Spredzy
    I (my company) run a web service that integrates pieces of a few enterprise-level software products. Most of them offer different types of authentication, but all offer at least LDAP. I was wondering if storing my application users directly in an LDAP directory would be a good idea. This way, all the applications I am using could rely on it for authentication purposes. I am aware that LDAP is not a database per se, but it is a datastore. I am also aware that there are no referential constraints, so deleting a user in the LDAP directory won't do anything to my actual data, but that case would be taken care of with an extra process. My main question here is: is there any reason why I shouldn't use LDAP as my user database?

    Read the article

  • With more mobile users, my geo IP database is becoming useless.

    - by Marius
    Hello there, I've been enjoying the benefits of geo IP lookup from a database for some time. It's great. People are increasingly trying to access my site from mobile phones or 3G modems, and their physical location seems to have little relation to where my IP lookup tells me they are. A user who is on the east coast of my country may be looked up as being far inland, or up north. And one user may be reported as being in one location one moment and, seconds later, be hundreds of kilometers away. This is becoming a problem, and I need to find a solution. I am already updating my database monthly, but it has little effect. What can be done? Thank you for your time. Kind regards, Marius

    Read the article

  • Problem accessing MICROSOFT##SSEE database (Error: 18456, Severity: 14, State: 16.)

    - by Philipp Schmid
    After an unexpected server shutdown due to a power failure, I can no longer connect to the internal Windows database MICROSOFT##SSEE, which hosts Central Admin for my SBS 2008 server. The log shows: Error: 18456, Severity: 14, State: 16. Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. [CLIENT: <named pipe>] I've tried to connect using SQL Server Management Studio (connecting to \\.\pipe\mssql$microsoft##ssee\sql\query) but no luck. SQL Server Configuration Manager doesn't show an entry for 'Protocols for MICROSOFT##SSEE' (but shows it for 2 other databases hosted on the same SQL Server 2005 Express Edition). I have tried to restore the master.mdf and mastlog.ldf files from a backup, but the issue persists.

    Read the article

  • Database backup regardless of backup made through a control panel?

    - by developer
    I know that all CMS or CMC platforms have some sort of walkthrough for their users to fully back up and restore the database or the whole website. While we can all perform such backups (and there are even plugins which automate the whole procedure) and restore them when necessary (such as when we migrate to a new server), one can also back up the whole website by means of control panels such as DirectAdmin or cPanel. Now, I just want to know if it is necessary for us to do database or website backups the way they are described by a specific CMC or CMS developer even after we perform a whole-website backup in a control panel such as DirectAdmin. An example of such a CMS platform is Moodle; Moodle Docs describes how we can back up and restore Moodle here. So do we, out of necessity, have to make the backup the way described, or can we simply do it the control panel way? Thanks

    Read the article

  • Winamp trashed its local media library database when I blocked it on my firewall -- Is this supposed to happen?

    - by Hamster
    Frankly, I don't see a compelling reason why Winamp needs to be using my network, as all I do is listen to songs from my local media. However, it appears to exact revenge when I block it on my firewall, by completely eradicating my media library list (which includes all my song ratings and such). I was not able to recover this data, as I hadn't exported the database beforehand, and the actual database file was completely wiped. My other playlists and the actual media itself appear not to be affected, though. Is this supposed to happen, by chance? Edit: I wasn't able to replicate this behavior with a different firewall and Winamp version. Hmm.

    Read the article

  • WSUS moved: how to reset the database and check the folders?

    - by Matthew Zielonka.co.uk
    I have a bit of a problem with a WSUS box. I backed up the WsusContent folder (about 180 GB) to a second machine, wiped the first machine, and then realized I don't have the database folder! I cannot find any articles or guidance on this. I have reinstalled WSUS; however, it does not find the other files which were previously downloaded (I guess the missing database is the reason). If I do a reset, I guess it will force it to download the content it already has again and will not check what is still to be downloaded? Sigh... I love/hate WSUS at the moment with a passion :) Thanks for any help.

    Read the article

  • How to migrate Lotus Notes Mail in database to Exchange public folder?

    - by elsni
    I need to migrate a Lotus Notes mail database to Exchange. In the past, users received mail in their Notes mail accounts and sorted it manually by drag and drop into a specific folder structure in a separate Notes mail database. This should be mapped to Exchange. I considered using Outlook and a SharePoint library mapped to Outlook, but Outlook does not support drag and drop from the mail account to a standard document library, and the discussion libraries do not support folders. So I think the easiest way is to use an Exchange public folder instead of a SharePoint library, which should work as in Notes (correct me if I'm wrong). But how do I migrate the old Notes database to the public folder, including all subfolders? Thank you!

    Read the article

  • Get percentage free space on database volumes w/ SQL Server 2005?

    - by Allen
    I am currently using SQL Server 2005 and (undocumented I believe) master..xp_fixeddrives to get free space on my database volumes as part of my monitoring. However, this only gives me an absolute number of MB free. What I really need is percentage free. Is there another way in SQL Server 2005 to get this? If not, is there some other light-weight way to get it? If I can, I want to avoid installing a Java JRE, or Perl, or Python on my database server. Perhaps vbscript, or a small Windows executable on the file system? Yes, I know I can Google this, and I have. It looks like there are a few ways to accomplish it, and I'm curious how my DBA brethren have handled this.
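    Since the post explicitly leaves room for "a small Windows executable on the file system", one lightweight option is a tiny C# console program that reports percentage free per volume. This is only a sketch under that assumption, not something from the original question, and its output would still need to be wired into your monitoring.

    using System;
    using System.IO;

    class DriveFreeSpace
    {
        static void Main()
        {
            // Report percent free for every ready, fixed drive on the server.
            foreach (DriveInfo drive in DriveInfo.GetDrives())
            {
                if (!drive.IsReady || drive.DriveType != DriveType.Fixed)
                    continue;

                double percentFree = 100.0 * drive.TotalFreeSpace / drive.TotalSize;
                Console.WriteLine("{0} {1:F1}% free ({2} MB of {3} MB)",
                    drive.Name,
                    percentFree,
                    drive.TotalFreeSpace / (1024 * 1024),
                    drive.TotalSize / (1024 * 1024));
            }
        }
    }

    Note that DriveInfo only enumerates drive letters, so volumes exposed as folder mount points would need a different approach.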

    Read the article

  • Connection between Windows Web Server in DMZ and Windows Application Server and MS SQL Database Server within LAN [closed]

    - by user1345260
    Excuse me for being naive, but I'm a newbie. We have a Windows web server within the DMZ, and a Windows application server and MS SQL database server within the LAN. What ports do we open for the connections between them? For example: someone opens the data-driven website on the web server, and they should be able to see the data that the website is trying to access from the database. Currently, I'm planning to open the following ports on the web server to establish the access: 80 (HTTP), 443 (HTTPS), 21 (FTP), 3389 (RDP), 53 (DNS), and 1433 (MS SQL Server). Please validate my assumption. I would highly appreciate your help. Also, please point me to any articles on this topic so that I can read up.

    Read the article

  • What questions do I need to ask for a database sync?

    - by user65745
    I am currently helping to implement an RFID inventory management system for my company. The software that we are locked into has been, at best, buggy and unreliable. The software provider is now rolling out a major release. My problem is that the new software release keeps a local database on each machine that then syncs to a master database online. According to the software company, we cannot do a scaled rollout because of data corruption issues between the software releases. What questions should I be asking, and what sort of testing can I do on my end to make sure this software works? Any suggestions would be very helpful.

    Read the article

  • Entity Association Mapping with Code First Part 1: Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases, and when it comes to associations, it brings ultimate flexibility. I'm a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts; this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

    What is Mapping? Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases.

    What is Relationship Mapping? A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

    Types of Relationships: There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types:
    - One-to-one relationships: a relationship where the maximum of each of its multiplicities is one.
    - One-to-many relationships: also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one.
    - Many-to-many relationships: a relationship where the maximum of both multiplicities is greater than one.
    The second category is based on directionality and it contains two types:
    - Uni-directional relationships: an object knows about the object(s) it is related to, but the other object(s) do not know of the original object. To put this in EF terminology, a navigation property exists only on one of the association ends and not on both.
    - Bi-directional relationships: the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

    How are object relationships implemented in POCO domain models? When the multiplicity is one (e.g. 0..1 or 1), the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*), the relationship is implemented via an ICollection of the type of the other object.

    How are relational database relationships implemented? Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute (or attributes) that appears in one table and must be the primary key or another candidate key in another table. With a one-to-one relationship, the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship, we implement a foreign key from the "one" table to the "many" table. We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.

    Introducing the Model: Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user could have one billing address. The address information of a User is modeled as a separate class, as you can see in the UML model below. In object-modeling terms, this association is a kind of aggregation—a part-of relationship. Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

    Fine-grained domain models: The motivation behind this design was to achieve fine-grained domain models. In crude terms, fine-grained means "more classes than tables". For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it's much better to model this using an Address class, where User has BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable.

    Complex Types: Splitting a Table Across Multiple Types. Back to our model: there is no difference between this composition and other, weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. C# has no concept of composition—a class or property can't be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database everything is going to be saved into one single table.

    How to implement a Complex Type with Code First: Code First has a concept of Complex Type Discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

    public class User
    {
        public int UserId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Username { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string PostalCode { get; set; }
    }

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }
    }

    With Code First, this is all the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

    Database Schema: The mapping result for this object model is a single Users table that holds the Address columns alongside the User columns.

    Limitations of this mapping: There are two important limitations to classes mapped as Complex Types:
    - Shared references are not possible: the Address complex type doesn't have its own database identity (primary key) and so can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User address).
    - No elegant way to represent a null reference: there is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

    Summary: In this post we learned about fine-grained domain models, of which the complex type is just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types.

    References: ADO.NET team blog, Mapping Objects to Relational Databases, Java Persistence with Hibernate
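    As a quick follow-up to the excerpt above, here is a minimal usage sketch of the User, Address and EntityMappingContext classes from the article; the sample values and the reliance on the default Code First connection and database conventions are assumptions for illustration, not part of the original post.

    using System.Data.Entity;   // DbContext / DbSet (EF Code First)

    public class Program
    {
        public static void Main()
        {
            using (var context = new EntityMappingContext())
            {
                // Address is a complex type, so it is persisted into the Users
                // table rather than into a table of its own.
                var user = new User
                {
                    FirstName = "John",
                    LastName = "Doe",
                    Username = "jdoe",
                    Address = new Address
                    {
                        Street = "1 Main St",
                        City = "Springfield",
                        PostalCode = "12345"
                    }
                };

                context.Users.Add(user);
                context.SaveChanges();
            }
        }
    }

    With the default conventions, the complex type's properties should end up as columns such as Address_Street, Address_City and Address_PostalCode on the Users table.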

    Read the article

  • Game Changer Appliance for SMBs Powered by Oracle Linux

    - by Zeynep Koch
    In the November 28th CRN article, Review: Thumbs-Up On Oracle Database Appliance, Edward F. Moltzen mentions that "The Test Center likes this appliance (Oracle Database Appliance), for the performance and for the strong security offered by the underlying Oracle Linux in the box. It's more than a solid offering for the SMB space; it's potentially a game-changer as data and security needs race to keep up with the oncoming generations of technology." The Oracle Database Appliance is a new way to take advantage of the world's most popular database—Oracle Database 11g—in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and network that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. All hardware and software components are supported by a single vendor—Oracle—and offer customers unique pay-as-you-grow software licensing to quickly scale from 2 processor cores to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades. It is:
    - Simple—Complete plug-and-go hardware and software
    - Reliable—Advanced management features and single-vendor support
    - Affordable—Pay-as-you-grow platform for small database consolidation
    The Oracle Database Appliance is a 4U rack-mountable system pre-installed with Oracle Linux and Oracle Appliance Manager software. Redundancy is built into all components, and the Oracle Appliance Manager software reduces the risk and complexity of deploying highly available databases. It's perfect for consolidating OLTP and data warehousing databases up to 4 terabytes in size, making it ideal for midsize companies or departmental systems. Read more about Oracle's Database Appliance. Read more about Oracle Linux.

    Read the article

  • SQL Authority News – Download and Install Adventure Works 2014 Sample Databases

    - by Pinal Dave
    If you are using SQL Server, there is a good chance that you are familiar with AdventureWorks. AdventureWorks is a sample database shipped with SQL Server, and it can be downloaded from the CodePlex site. AdventureWorks replaced Northwind and Pubs as the sample database in SQL Server 2005. The Microsoft team keeps updating the sample database as it releases new versions. I use the AdventureWorks database for most of my examples, as it is an easy-to-use sample database which is accessible to most of the people out there. Every new version of SQL Server should have its own AdventureWorks database. The reason is that SQL Server comes out with new features in every version, and most of the new features need a new sample dataset to demonstrate their capabilities. This is why every version of SQL Server has its own AdventureWorks database. SQL Server 2014 has many new features, and to support them Microsoft has released the new AdventureWorks 2014 sample database. You can download the AdventureWorks 2014 sample databases from here. Here is a quick tutorial on how one can install the AdventureWorks database on your server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Difference between Detach/Attach and Restore/BackUp a DB

    - by SAMIR BHOGAYTA
    Transact-SQL BACKUP/RESTORE is the normal method for database backup and recovery. Databases can be backed up while online. The backup file size is usually smaller than the database files, since only used pages are backed up. Also, in the FULL or BULK_LOGGED recovery model, you can reduce potential data loss by performing transaction log backups. Detaching a database removes the database from SQL Server while leaving the physical database files intact. This allows you to rename or move the physical files and then re-attach. Although one could perform cold backups using this technique, detach/attach isn't really intended to be used as a backup/recovery process. It is commonly recommended that you use BACKUP/RESTORE for disaster recovery (DR) scenarios and for copying data from one location to another. But this is not absolute: sometimes, for a very large database that you want to move from one location to another, the backup/restore process may take more time than you would like; in that case, detaching/attaching the database is a better way, since you can attach a workable database very quickly. But you need to be aware that detaching a database will take it offline for a short time, and detaching/attaching does not provide a DR function. For more information about detaching and attaching databases, you can refer to: Detaching and Attaching Databases, http://technet.microsoft.com/en-us/library/ms190794.aspx
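    To make the recommended route concrete, here is a small ADO.NET sketch that issues a full BACKUP DATABASE command; the database name, backup path and connection string are assumptions for the example rather than anything from the original tip.

    using System.Data.SqlClient;

    class BackupExample
    {
        static void Main()
        {
            // Full backup of a user database; only used pages are written to the .bak file.
            const string connectionString =
                "Server=.;Database=master;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                var backup = new SqlCommand(
                    @"BACKUP DATABASE [SalesDb]
                      TO DISK = N'D:\Backups\SalesDb.bak'
                      WITH INIT, NAME = N'SalesDb full backup'",
                    connection);

                backup.CommandTimeout = 0;   // backups can take longer than the 30-second default
                backup.ExecuteNonQuery();
            }
        }
    }

    The matching RESTORE DATABASE ... FROM DISK statement can be run the same way on the target server.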

    Read the article

  • java UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid [closed]

    - by Philippe
    I want to use Unitils for testing my Java program, and I get the following error message: "UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid". My table is in a DDL file, and the table is not created. Could you tell me if the dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema property is still active in Unitils 3.3? My EMPLOYEES table is not created, and DBMAINTAIN_SCRIPTS is not created either. Where is my mistake?
    My DDL file:
    SET REFERENTIAL_INTEGRITY FALSE;
    SET DATABASE COLLATION "French";
    SET SCHEMA PUBLIC;
    CREATE TABLE DBMAINTAIN_SCRIPTS (FILE_NAME VARCHAR2(150), FILE_LAST_MODIFIED_AT INTEGER, CHECKSUM VARCHAR2(50), EXECUTED_AT VARCHAR2(20), SUCCEEDED INTEGER);
    CREATE TABLE EMPLOYEES(ID IDENTITY NOT NULL,NAME VARCHAR(20),TITLE VARCHAR(20),SALARY DOUBLE,NI INTEGER NOT NULL)
    My unitils properties file:
    # comments documenting these unitils configuration properties removed for
    # brevity. look for commenting in unitils-default.properties in the root of the
    # unitils jar if needed.
    unitils.modules=database,dbunit,easymock,inject
    unitils.module.hibernate.enabled=false
    unitils.module.spring.enabled=false
    # these placeholders are set in avaje.properties
    # manages the DBUNIT configuration
    database.driverClassName=org.hsqldb.jdbcDriver
    database.url=jdbc:hsqldb:mem:unitils-example
    database.schemaNames=PUBLIC
    database.userName=SA
    database.password=
    database.dialect=hsqldb
    # unitils will construct the test database using the ddl file found in this
    # directory
    dbMaintainer.fileScriptSource.scripts.location=src/main/resources
    updateDataBaseSchema.enabled=true
    sequenceUpdater.sequencevalue.lowestacceptable=100
    dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema
    #dbMaintainer.autoCreateExecutedScriptsTable property to true

    Read the article

  • responsibility for storage

    - by Stefano Borini
    A colleague and I were brainstorming about where to put the responsibility for an object to store itself on disk in our own file format. There are basically two choices: object.store(file) or fileformatWriter.store(object). The first one gives the responsibility for serialization to disk to the object itself. This is similar to the approach used by Python's pickle. The second puts the representation responsibility on a file format writer object; the data object is just a plain data container (possibly with additional methods not relevant for storage). We agreed on the second approach, because it centralizes the writing logic and keeps it separate from the plain data. We also have cases of objects implementing complex logic that need to store info while the logic is in progress. For these cases, the fileformatWriter object can be passed in and used as a delegate, with storage operations called on it. With the first pattern, the complex-logic object would instead accept the raw file and implement the writing logic itself. The first method, however, has the advantage that the object knows how to write and read itself from any file containing it, which may also be convenient. I would like to hear your opinion before starting a rather complex refactoring.
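    To make the two options concrete, here is a small sketch of both shapes; the type and member names are invented for illustration, since the question itself is language-agnostic.

    using System.IO;

    // Option 1: the object knows how to persist itself (pickle-style).
    public class Dataset
    {
        public double[] Values { get; set; } = new double[0];

        public void Store(Stream file)
        {
            using (var writer = new BinaryWriter(file))
            {
                writer.Write(Values.Length);
                foreach (double v in Values)
                    writer.Write(v);
            }
        }
    }

    // Option 2: a writer object owns the file format; data objects stay plain containers.
    public class FileFormatWriter
    {
        public void Store(Stream file, Dataset dataset)
        {
            using (var writer = new BinaryWriter(file))
            {
                writer.Write(dataset.Values.Length);
                foreach (double v in dataset.Values)
                    writer.Write(v);
            }
        }
    }

    With the second shape, a long-running computation can still be handed a FileFormatWriter and call Store on partial results, which is exactly the delegate usage described in the post.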

    Read the article

  • Engineering as a Service

    - by jgelhaus
    Oracle Exadata Database Machine is known for great compute performance, and over the past few years, it has also become known as a great platform for any type of Oracle Database workload, from data warehousing to online transaction processing (OLTP). But now organizations are turning to Oracle Exadata for business efficiencies and private cloud solutions—for consolidation and database as a service (DBaaS).
    University of Minnesota: For an inside look at how DBaaS is working in the real world, it's worth checking into the University of Minnesota's database hotel. With more than 50,000 students, the University of Minnesota in Minneapolis is one of the largest universities in the United States. The university's centralized IT group not only has to support all those students but also must provide support and services to more than 40 departments and colleges within the university. They have two Exadata Database Machine X2-2 half-rack systems from Oracle, with four database nodes each and roughly 30 terabytes of usable disk space for each of the Oracle Exadata systems. The university is using Oracle Real Application Clusters (Oracle RAC) for high availability and the Data Guard feature of Oracle Database, Enterprise Edition, for disaster recovery capabilities. The deployment has been live in production since May 2011.
    Overhead Door: When it comes to overhead, revolving, sliding, or other specialty residential and commercial doors, Overhead Door is the worldwide leader. But when they needed to open doors with their customers through a better, faster, and more agile IT infrastructure, Overhead Door turned to Oracle and Oracle Exadata. Oracle Exadata Database Machine plays an important part in Overhead Door's IT and business strategy. The organization has two Exadata Database Machine X2-2s deployed, one in production and one in development and testing.
    Read the full Oracle Magazine article, Engineering as a Service.

    Read the article

  • Oracle Flashback Technology - Webcast 9th June 2010

    - by Alex Blyth
    Hi all, here are the details for the webcast on Oracle Flashback Technologies on Wednesday (9th June 2010), beginning at 1.30pm (Sydney, Australia time). The Oracle Database architecture leverages the unique technological advances in the area of database recovery due to human errors. Oracle Flashback Technology provides a set of new features to view and rewind data back and forth in time. The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past! Oracle9i introduced Flashback Query to provide a simple, powerful and completely non-disruptive mechanism for recovering from human errors. It allows users to view the state of data at a point in time in the past without requiring any structural changes to the database. Oracle Database 10g extended the Flashback Technology to provide fast and easy recovery at the database, table, row, and transaction level. Flashback Technology revolutionizes recovery by operating just on the changed data; the time it takes to recover the error is now equal to the amount of time it took to make the mistake. Oracle 10g Flashback Technologies include Flashback Database, Flashback Table, Flashback Drop, Flashback Versions Query, and Flashback Transaction Query. Flashback technology can just as easily be utilized for non-repair purposes, such as historical auditing with Flashback Query and undoing test changes with Flashback Database. Oracle Database 11g introduces an innovative method to manage and query long-term historical data with Flashback Data Archive. This release also provides an easy, one-step transaction backout operation with the new Flashback Transaction capability.
    The webcast is at http://strtc.oracle.com (IE6, 7 & 8 supported only).
    Conference ID for the webcast: 6690835
    Conference Key: flashback
    Enrollment is required. Please click here to enroll. Please use your real name in the name field (it just makes it easier for us to help you out if we can't answer your questions on the call).
    Audio details: NZ Toll Free - 0800 888 157 or AU Toll Free - 1800420354 (or +61 2 8064 0613)
    Meeting ID: 7914841
    Meeting Passcode: 09062010
    Talk to you all Wednesday 9th June, Alex

    Read the article

  • Calculating a child's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor (just for fun). Anyway, I have a problem that I have spent several days trying to resolve, but I have been unsuccessful. Here goes... I have an object "A": Position: (3,3,3), Rotation: (45,10,0), Scale: (1,2,2.5). And an object "B": Position: (1,1,1), Rotation: (10,34,18), Scale: (1.5,2,1). I now make a parent/child relationship: "B" is a child of "A": A |--B. When I create the relationship I need to recalculate the child ("B") position, rotation and scale such that it maintains its current position, rotation and scale (its location in the world). So the child "B" position would now be (-2, -2, -2), since "A" is now the center and (-2, -2, -2) will keep the object in its same position. I think I have the position and scale figured out, but not the rotation. So I was trying to figure out what to do; I opened Unity, ran the same example, and noticed that when making an object a child, the child object did not move at all but had its Position, Rotation and Scale values changed (relative to the parent). For example: Unity (Parent Object "A"): Position: (0,0,0), Rotation: (45,10,0), Scale: (1,1,1); Unity (Child Object "B"): Position: (0,0,0), Rotation: (0,0,0), Scale: (1,1,1). When making it a parent/child relation ("B" is a child of "A"), the child object ("B") now has these Rotation values: X: -44.13605, Y: -14.00195, Z: 9.851074. If I plug the same values into my editor (into the child "B" rotation X, Y, Z values), the object does not move at all. So I basically need to know how Unity arrived at those rotation values for the child (what are the calculations?). If you can help and write out all the equations for the Position, Rotation and Scale, then I can double check that I am doing it correctly, but with the Rotation I really need help. Thanks!
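    For what it's worth, what Unity appears to be doing is re-expressing the child's world transform relative to the new parent: build the world matrices, multiply the child's world matrix by the inverse of the parent's, and decompose the result back into position, rotation and scale. Below is a hedged sketch of that calculation using System.Numerics; the matrix conventions (row vectors, scale * rotation * translation order) and the example values are my assumptions about how an editor might store transforms, not something taken from Unity itself.

    using System;
    using System.Numerics;

    static class Reparent
    {
        // Compose a world matrix from scale, rotation and translation.
        // System.Numerics uses row vectors, so the order is S * R * T,
        // and a child's world matrix is childLocal * parentWorld.
        static Matrix4x4 Compose(Vector3 scale, Quaternion rotation, Vector3 position) =>
            Matrix4x4.CreateScale(scale) *
            Matrix4x4.CreateFromQuaternion(rotation) *
            Matrix4x4.CreateTranslation(position);

        static float Deg(float degrees) => degrees * MathF.PI / 180f;

        static void Main()
        {
            // Object "A" (the new parent): position (3,3,3), rotation (45,10,0), scale (1,2,2.5).
            Matrix4x4 parentWorld = Compose(
                new Vector3(1f, 2f, 2.5f),
                Quaternion.CreateFromYawPitchRoll(Deg(10f), Deg(45f), Deg(0f)),
                new Vector3(3f, 3f, 3f));

            // Object "B" (the child): position (1,1,1), rotation (10,34,18), scale (1.5,2,1).
            Matrix4x4 childWorld = Compose(
                new Vector3(1.5f, 2f, 1f),
                Quaternion.CreateFromYawPitchRoll(Deg(34f), Deg(10f), Deg(18f)),
                new Vector3(1f, 1f, 1f));

            // childWorld = childLocal * parentWorld  =>  childLocal = childWorld * inverse(parentWorld)
            Matrix4x4.Invert(parentWorld, out Matrix4x4 inverseParent);
            Matrix4x4 childLocal = childWorld * inverseParent;

            // Pull the local position / rotation / scale back out of the matrix.
            if (Matrix4x4.Decompose(childLocal, out Vector3 localScale,
                    out Quaternion localRotation, out Vector3 localPosition))
            {
                Console.WriteLine($"local position: {localPosition}");
                Console.WriteLine($"local rotation: {localRotation}");
                Console.WriteLine($"local scale:    {localScale}");
            }
        }
    }

    Two caveats: the yaw/pitch/roll mapping above assumes the (45,10,0) style values are X = pitch, Y = yaw, Z = roll in degrees, and Decompose can fail or give approximate results when the parent has non-uniform scale combined with rotation, because the local matrix then contains shear that a plain position/rotation/scale triple cannot represent (Unity has the same limitation).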

    Read the article

  • Is it OK to introduce methods that are used only during unit tests?

    - by Mchl
    Recently I was TDDing a factory method. The method was to create either a plain object, or an object wrapped in a decorator. The decorated object could be of one of several types, all extending StrategyClass. In my test I wanted to check if the class of the returned object is as expected. That's easy when a plain object is returned, but what to do when it's wrapped within a decorator? I code in PHP, so I could use ext/Reflection to find out the class of the wrapped object, but that seemed to me to be overcomplicating things, and somewhat against the rules of TDD. Instead I decided to introduce getClassName(), which returns the object's class name when called on a StrategyClass instance. When called on the decorator, however, it returns the value returned by the same method on the decorated object. Some code to make it more clear:

    interface StrategyInterface
    {
        public function getClassName();
    }

    abstract class StrategyClass implements StrategyInterface
    {
        public function getClassName()
        {
            return \get_class($this);
        }
    }

    abstract class StrategyDecorator implements StrategyInterface
    {
        private $decorated;

        public function __construct(StrategyClass $decorated)
        {
            $this->decorated = $decorated;
        }

        public function getClassName()
        {
            return $this->decorated->getClassName();
        }
    }

    And a PHPUnit test:

    /**
     * @dataProvider providerForTestGetStrategy
     * @param array $arguments
     * @param string $expected
     */
    public function testGetStrategy($arguments, $expected)
    {
        $this->assertEquals(
            __NAMESPACE__.'\\'.$expected,
            $this->object->getStrategy($arguments)->getClassName()
        );
    }
    // below there's another test to check if the proper decorator is being used

    My point here is: is it OK to introduce such methods, that have no other use than to make unit tests easier? Somehow it doesn't feel right to me.

    Read the article

  • The October 2013 PSUs have been released!

    - by ??
    On October 16, Oracle released the October 2013 quarterly PSU/SPU (CPU) patches. The main points of this round are:
    - Since 11.2.0.4 only shipped at the end of August, there is no PSU for 11.2.0.4 in this round; the first 11.2.0.4 PSU is expected in January 2014.
    - 12.1.0.1 receives its first PSU (12.1.0.1.1) as well as a GI PSU, and the GI PSU also covers Exadata.
    - Take note of the patching considerations for 12c databases that use PDBs (pluggable databases).
    - Starting with 12c, the SPU is no longer offered as a separate alternative to the PSU; PSUs are the patches to apply going forward.
    - 11.2.0.2.12 is the final PSU for 11.2.0.2, and 11.1.0.7.17 is the final PSU for 11.1.0.7. For the support timelines, see "Patch Set Update and Critical Patch Update October 2013 Availability Document" (Doc ID 1571391.1), section "3.1.4.4 Oracle Database 11.2.0.2", and "Release Schedule of Current Database Releases" (Doc ID 742060.1).
    - The 10.2.0.5.13 PSU is released. Extended Support for 10.2(.0.5) has ended (July 2013), so access to further 10.2.0.5 PSUs depends on your support agreement ("limited extended support"); check your Oracle license and support entitlement. For 10.2.0.4, no further PSUs are provided.
    Known-issues notes for this round of patches:
    Note 1571655.1 Critical Patch Update October 2013 Database Known Issues
    Note 1571653.1 Critical Patch October 2013 Database Patch Security Vulnerability Molecule Mapping
    Note 1571731.1 Oracle Grid Infrastructure Patch Set Update 12.1.0.1.1 Known Issues
    Note 1571652.1 Oracle Grid Infrastructure Patch Set Update 11.2.0.3.8 Known Issues
    Note 1571651.1 Oracle Database Patch Set Update 12.1.0.1.1 Known Issues
    Note 1571650.1 Oracle Database Patch Set Update 11.2.0.3.8 Known Issues
    Note 1571649.1 Oracle Database Patch Set Update 11.2.0.2.12 Known Issues
    Note 1571647.1 Oracle Database Patch Set Update 11.1.0.7.17 Known Issues
    Note 1571645.1 Oracle Database Patch Set Update 10.2.0.5.13 Known Issues
    Note 1227443.1 Patch Set Updates Known Issues Notes

    Read the article
