Search Results

Search found 3942 results on 158 pages for 'stick it to the man'.


  • Best practices for class-mapping with SoapClient

    - by Foofy
    I'm using SoapClient's class-mapping feature and it's pretty sweet. Unfortunately, the SOAP service we're using has a bunch of read-only properties on some of the objects and will throw faults if those properties are passed back as anything but null. I need to filter out the properties before they're used in the SOAP call, and I'm looking for advice on the best way to do it. So far the options are:

    1. Stick to a convention where I use getter and setter functions to manipulate the properties, and use property overloading to filter property access, since only SoapClient would be accessing the properties directly. E.g. developers would access properties like this: $obj->getAccountNumber(), while SoapClient would access them like this: $obj->accountNumber. I don't like this because the properties are still exposed and things could go wrong if developers don't stick to the convention.

    2. Have a wrapper for SoapClient that sets a public flag the mapped objects can check to see whether a property is being accessed by SoapClient. I already have a wrapper that assigns a reference to itself to all the mapped objects:

        class SoapClientWrapper {
            public function __soapCall($method, $args) {
                $this->setSoapMode(true);
                $result = $this->_soapClient->__soapCall($method, $args);
                $this->setSoapMode(false);
                return $result;
            }
        }

        class Invoice {
            function __get($val) {
                if ($this->_soapClient->getSoapMode()) {
                    return null;
                } else {
                    return $this->$val;
                }
            }
        }

    This works, but it doesn't feel right and seems a bit clunky.

    3. Do the mapping manually and don't use SoapClient's mapping features. I'd just have a function on all the mapped objects that returns the safe-to-send properties. Also, nobody would have access to properties they shouldn't, since I could enforce getters and setters. A lot more work, though.
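
    For illustration, option 3 might look something like this minimal sketch (the class, property, and method names here are hypothetical):

        class Invoice
        {
            private $accountNumber;  // read-only on the service side
            private $amount;         // writable

            // Whitelist of properties that are safe to send back to the service.
            private static $writable = array('amount');

            public function toSoapData()
            {
                $out = array();
                foreach (self::$writable as $name) {
                    $out[$name] = $this->$name;
                }
                return $out;  // read-only properties are simply omitted
            }
        }

    Each mapped object would then hand toSoapData() to the manual mapping layer, so the read-only properties never reach the wire.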

    Read the article

  • Resizing video best practices (frame size)

    - by undefined
    I have read the following, which is from "Best Practices for Encoding Video with the VP6 Codec" on the Adobe website (http://www.adobe.com/devnet/flash/articles/encoding_video_print.html). It is talking about common video ratios (320x240, 640x480):

        "Although these ratios are standard, and should be used to avoid distorting the video, the size of the encoded video is not set in stone. The original web video sizes used heights and widths that were evenly divisible by 16. This was mandatory for many early codecs. Although this is not necessary for modern codecs, you should stick to even heights and widths."

    What do they mean by 'even heights and widths'? I am thinking about encoding my video at 400x300 to make it slightly bigger; this is still 4:3 format, but should I just stick at 320x240 and resize it on the screen? Clearly there are benefits to the smaller size in terms of storage and delivery costs. In some places on my site I want to show the video at 400x300, but in others I want it to play full screen, so this is why I am wondering whether a larger original size (400x300) will give better results when blown up. Any thoughts?

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC / DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call, but of course this means that I cannot benefit from lazy loading. Therefore I wish to implement session-per-request, but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions in the BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx). Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project? A sketch of what I mean is shown below.

    Having the web project dependent on NHibernate prompts my next (few) questions - why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just put my NHibernate persistence code inside the services? And finally, is there really any need to split out into so many projects? Are a web project and an infrastructure project sufficient? I realise that I have veered off a bit from my original question, but everyone seems to have their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people keep their mapping files with the related classes, others have a separate project for this. Many thanks, Ben
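
    A minimal sketch of the session-per-request idea (assuming NHibernate's contextual sessions with current_session_context_class set to "web"; the application class name is just illustrative):

        using NHibernate;
        using NHibernate.Context;

        public class MvcApplication : System.Web.HttpApplication
        {
            private static ISessionFactory _sessionFactory; // built once at startup

            protected void Application_BeginRequest()
            {
                // Open one session per request and bind it to the web context.
                CurrentSessionContext.Bind(_sessionFactory.OpenSession());
            }

            protected void Application_EndRequest()
            {
                // Unbind and dispose the session when the request ends.
                ISession session = CurrentSessionContext.Unbind(_sessionFactory);
                if (session != null) session.Dispose();
            }
        }

    Repositories would then receive ISession from the container (e.g. a StructureMap registration that resolves _sessionFactory.GetCurrentSession()), so only the composition root references NHibernate directly.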

    Read the article

  • Why might stable_sort be affecting my hashtable values?

    - by zebraman
    I have defined a struct ABC to contain an int ID, string NAME, and string LAST_NAME. My procedure is this: read in a line from an input file, parse it into first name and last name, and insert them into an ABC struct. The struct's ID is given by the number of the input line. Then, push_back the struct into a vector masterlist. I am also hashing these into a hashtable defined as vector<vector<ABC> >, using first name and last name as keywords. That is, if my data is:

        Garfield Cat
        Snoopy Dog
        Cat Man

    then the keyword "Cat" hashes to a vector containing Garfield Cat and Cat Man. I insert the structs into the hashtable using push_back again. The problem is, when I call stable_sort on my masterlist, my hashtable is affected for some reason. I thought it might be happening because the entries were being reordered, so I tried making a copy of the masterlist and sorting that instead - and it still affects the hashtable, even though the original masterlist is unaffected. Any ideas why this might be happening?
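
    For reference, a minimal self-contained sketch of the setup described above (the names and the bucket count are illustrative):

        #include <string>
        #include <vector>

        struct ABC {
            int id;                 // line number in the input file
            std::string name;       // first name
            std::string last_name;  // last name
        };

        int main() {
            std::vector<ABC> masterlist;                    // owns the records
            std::vector<std::vector<ABC> > hashtable(101);  // buckets hold copies

            ABC rec;
            rec.id = 1; rec.name = "Garfield"; rec.last_name = "Cat";
            masterlist.push_back(rec);

            // Because push_back copies rec into the bucket, sorting masterlist
            // later cannot reorder the bucket. If the buckets instead held
            // pointers or indices into masterlist, a sort would scramble them.
            hashtable[42].push_back(rec);
            return 0;
        }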

    Read the article

  • documenting class properties

    - by intuited
    I'm writing a lightweight class whose properties are intended to be publicly accessible, and only sometimes overridden in specific instantiations. There's no provision in the Python language for creating docstrings for class properties, or any sort of properties, for that matter. What is the accepted way, should there be one, to document these properties? Currently I'm doing this sort of thing:

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Properties:
            """
            flight_speed = 691
            __doc__ += """
                flight_speed (691)
                    The maximum speed that such a bird can attain.
            """
            nesting_grounds = "Throatwarbler Man Grove"
            __doc__ += """
                nesting_grounds ("Throatwarbler Man Grove")
                    The locale where these birds congregate to reproduce.
            """

            def __init__(self, **keyargs):
                """Initialize the Albatross from the keyword arguments."""
                self.__dict__.update(keyargs)

    Although this style doesn't seem to be expressly forbidden in the docstring style guidelines, it's also not mentioned as an option. The advantage here is that it provides a way to document properties alongside their definitions, while still creating a presentable class docstring, and it avoids having to write comments that reiterate the information from the docstring. I'm still kind of annoyed that I have to actually write the properties twice; I'm considering using the string representations of the values in the docstring to at least avoid duplicating the default values.

    Is this a heinous breach of the ad hoc community conventions? Is it okay? Is there a better way? For example, it's possible to create a dictionary containing values and docstrings for the properties and then add the contents to the class __dict__ and docstring towards the end of the class declaration; this would alleviate the need to type the property names and values twice. I'm pretty new to Python and still working out the details of coding style, so unrelated critiques are also welcome.
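
    For illustration, the dictionary-based idea mentioned above could look roughly like this. A caveat: mutating the mapping returned by locals() inside a class body happens to work in CPython, but it is an implementation detail, so treat this as a sketch rather than a recommendation:

        _props = {
            "flight_speed": (691,
                "The maximum speed that such a bird can attain."),
            "nesting_grounds": ("Throatwarbler Man Grove",
                "The locale where these birds congregate to reproduce."),
        }

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Properties:
            """
            for _name in sorted(_props):
                _value, _descr = _props[_name]
                locals()[_name] = _value          # CPython-specific trick
                __doc__ += "    %s (%r)\n        %s\n" % (_name, _value, _descr)
            del _name, _value, _descr

    This types each property name and value exactly once, at the cost of relying on that CPython behaviour.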

    Read the article

  • constructor function's object literal returns toString() method but no other method

    - by JohnMerlino
    I'm very confused by JavaScript methods defined in objects and the "this" keyword. In the example below, the toString() method is invoked when the Mammal object is instantiated:

        function Mammal(name) {
            this.name = name;
            this.toString = function () {
                return '[Mammal "' + this.name + '"]';
            };
        }

        var someAnimal = new Mammal('Mr. Biggles');
        alert('someAnimal is ' + someAnimal);

    Even though the toString() method is not invoked explicitly on someAnimal, like this:

        alert('someAnimal is ' + someAnimal.toString());

    it still alerts 'someAnimal is [Mammal "Mr. Biggles"]'. That doesn't make sense to me, because the toString() function is not being called anywhere. Then, to add even more confusion, if I change the toString() method to a method I make up, such as random():

        function Mammal(name) {
            this.name = name;
            this.random = function () {
                return Math.floor(Math.random() * 15);
            };
        }

        var someAnimal = new Mammal('Mr. Biggles');
        alert(someAnimal);

    it completely ignores the random() method (despite the fact that it is defined the same way the toString() method was) and alerts:

        [object Object]

    Another issue I'm having trouble understanding with inheritance is the value of "this". For example, in the example below:

        function person(w, h) {
            this.width = w;
            this.height = h;
        }

        function man(w, h, s) {
            person.call(this, w, h);
            this.sex = s;
        }

    "this" is clearly being passed to the person constructor. However, does "this" refer to the subclass (man) or the superclass (person) when person receives it? Thanks for clearing up any of the confusion I have with inheritance and object literals in JavaScript.
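
    For illustration of the first point: string concatenation (and alert()) implicitly converts an object to a string via its toString(), which is why the method runs without an explicit call - and why random(), which has no special meaning, is never consulted:

        var obj = {
            toString: function () { return 'custom!'; }
        };

        alert('value is ' + obj);             // "value is custom!" - implicit call
        alert('value is ' + obj.toString());  // identical result, explicit call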

    Read the article

  • Select all points in a matrix within 30m of another point

    - by pinnacler
    So if you look at my other posts, it's no surprise I'm building a robot that can collect data in a forest and stick it on a map. We have algorithms that can detect tree centers and trunk diameters and can stick them on a Cartesian XY plane. We're planning to use certain 'key' trees as natural landmarks for localizing the robot, using triangulation and trilateration among other methods, but programming this and keeping the data straight and efficient is getting difficult using just Matlab. Is there a technique for sub-setting an array or matrix of points? Say I have 1000 trees stored over 1 km (1000 m): is there a way to select only the points within a 30 m radius of my current location and work only with those? I would just use a GIS, but I'm doing this in Matlab and I'm unaware of any GIS plugins for Matlab. I forgot to mention, this code is going online, meaning it's going on a robot for real-time execution. I don't know whether, as the map grows to several miles, a different data structure will help, or whether calculating the distance to every point is what a spatial database would end up doing anyway. I'm thinking of mirroring two arrays, one sorted by X and the other by Y, then sorting to determine the 30 m range along each axis, and then having a third cross-link table that selects the individual values. But I don't know what that's called or how to program it, and I'm sure someone already has, so I don't want to reinvent the wheel.
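
    For what it's worth, the brute-force version of that selection is a two-liner in Matlab (a minimal sketch, assuming trees is an N-by-2 matrix of [x y] coordinates in metres and pos is the robot's current [x y]):

        % Squared distance to every tree, then logical indexing:
        d2 = (trees(:,1) - pos(1)).^2 + (trees(:,2) - pos(2)).^2;
        nearby = trees(d2 <= 30^2, :);   % all trees within a 30 m radius

    For a few thousand points this vectorised scan is typically fast enough for real-time use; spatial structures such as k-d trees only start paying off as the map grows much larger.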

    Read the article

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of re-thinking our tech stack, and below are our choices (we can't live without Spring and Hibernate due to the complexity etc. of the app). We are also moving from J2EE 1.4 to JEE 5.

    Tech stack:

    - JEE 5
    - JPA 2.0 (I know JEE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
    - Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time. This means we want both JPA and hbm mappings to work together - hence Hibernate as the JPA provider instead of the default that comes with the app server.)

    Now the problem is that I want to stick with local transactions, but other team members want to use JTA. I have been working with J2EE for the last 9 years, and I've heard time and again that people suggest sticking with local transactions if you don't need two-phase commit. This is not only for performance reasons: debugging and troubleshooting a local transaction is a lot easier than a distributed one. My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager), as sketched below. I want to make sure whether I am being paranoid or have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.
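
    For reference, the local-transaction wiring might look something like this minimal sketch (Spring 3 / Hibernate 3 style; it assumes the tx namespace is declared and the bean names are illustrative):

        <!-- Plain Hibernate transactions; no JTA, no two-phase commit. -->
        <bean id="transactionManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory"/>
        </bean>

        <!-- Enables @Transactional on the service layer. -->
        <tx:annotation-driven transaction-manager="transactionManager"/>

    Swapping to JTA later would mostly be a matter of replacing this bean with a JtaTransactionManager, which keeps the decision reversible.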

    Read the article

  • Is there any reason to use C instead of C++ for embedded development?

    - by Piotr Czapla
    Question: I have two compilers for my hardware, C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are:

    - I prefer to use "inline" functions instead of macro definitions.
    - I'd like to use namespaces, as prefixes clutter the code.
    - I see C++ as a bit more type-safe, mainly because of templates and verbose casting.
    - I really like overloaded functions and constructors (used for automatic casting).

    Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?

    Conclusion: Thank you for your answers, they were really helpful! I thought the subject through and I will stick with C, mainly because:

    - It is easier to predict the actual generated code in C, and this is really important if you have only 4 KB of RAM.
    - My team consists mainly of C developers, so the advanced features of C++ wouldn't be used frequently.
    - I've found a way to inline functions in my C compiler (C89).

    It is hard to accept one answer when you provided so many good ones. Unfortunately I can't create a wiki answer and accept that, so I will choose the one answer that made me think most.

    Read the article

  • retrieve value from hashtable with clone of key; C#

    - by Johnny
    I would like to know whether there is any way to retrieve an item from a Hashtable using a key that is identical in content to the actual key but is a different object. I understand why it is probably not possible, but I would like to see if there is any tricky way to do it. My problem arises from the fact that, being as stupid as I am, I created hashtables with int[] as the keys, with the integer arrays containing indices representing spatial position. I somehow knew that I needed to create a new int[] every time I wanted to add a new entry, but neglected to realise that when I generated spatial coordinate arrays later, they would be worthless for retrieving the values from my hashtables. Now I am trying to decide whether to rearrange things so that I can store my values in ArrayLists, or whether to search through the Hashtable's list of keys for the one I need every time I want a value - neither option being very cool. Unless, of course, there is a way to get //1 below to work like //2! Thanks in advance.

        static void Main(string[] args)
        {
            Hashtable dog = new Hashtable();

            // 1
            int[] man = new int[] { 5 };
            dog.Add(man, "hello");
            int[] cat = new int[] { 5 };
            Console.WriteLine(dog.ContainsKey(cat)); // false

            // 2
            int boy = 5;
            dog.Add(boy, "wtf");
            int kitten = 5;
            Console.WriteLine(dog.ContainsKey(kitten)); // true
        }
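
    For what it's worth, //1 can be made to behave like //2 by giving the Hashtable a comparer that looks at array contents rather than references - a minimal sketch (the comparer class is hypothetical, but the Hashtable(IEqualityComparer) constructor is real, from .NET 2.0 onward):

        using System;
        using System.Collections;

        class IntArrayComparer : IEqualityComparer
        {
            public new bool Equals(object x, object y)
            {
                int[] a = (int[])x, b = (int[])y;
                if (a.Length != b.Length) return false;
                for (int i = 0; i < a.Length; i++)
                    if (a[i] != b[i]) return false;
                return true;
            }

            public int GetHashCode(object obj)
            {
                int hash = 17;
                foreach (int v in (int[])obj)
                    hash = hash * 31 + v;   // combine elements into one hash
                return hash;
            }
        }

        // Usage: Hashtable dog = new Hashtable(new IntArrayComparer());
        //        dog.Add(new int[] { 5 }, "hello");
        //        dog.ContainsKey(new int[] { 5 });   // now true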

    Read the article

  • Expanding DIV slides behind DIV beneath it...

    - by Paddy
    I'm not sure that I'm going to get an answer here, as I'd need to post a lot of CSS and HTML to get a working recreation; however, I have structure something like this:

        <fieldset>
            <legend>Test A</legend>
            <h3>Test A</h3>
            <p>Something here.</p>
            <div style="display:none;">I'm dynamically displayed</div>
        </fieldset>
        <fieldset>
            <legend>Test B</legend>
            <h3>Test B</h3>
            <p>Something B here.</p>
        </fieldset>

    I have code that toggles the display of my hidden div using jQuery and .show(). This works fine in IE8, Firefox and Safari, but when I put IE8 into compatibility mode, the first fieldset (Test A) expands behind the second fieldset, which doesn't move (i.e. the expanding content slides down behind it). I have quite a bit of CSS in use here, and I'm going to have to go back and unpick the whole lot, which isn't a fun idea. If anybody has any idea which of the IE7 rendering issues might be affecting this, I'd very much appreciate it. (Note that there is more to the content in these fieldsets than shown, including floated divs.) Quick note: if I put IE7 into quirks mode, it works (but wrecks the rest of my layout); in standards mode, I get the behaviour above.

    Read the article

  • How to access data on a different domain using JavaScript

    - by shoaibmohammed
    Hello there. Here is the issue: suppose there is a DOMAIN A, which is going to be the server containing a PHP script. The data from Domain A is to be accessed by a client at DOMAIN B. I know it cannot be accessed directly using JavaScript, so what I did is this: on Domain A I created a JavaScript file as a front-end to the PHP script, which AJAXes the PHP and returns the data. Unfortunately, it didn't work. I came across an example that uses PHP as a middle man on the client side (http://stackoverflow.com/questions/578095/how-to-get-data-with-javascript-from-another-server), but I do not want to keep any server-side PHP code as a middle man on the client side - I just want to give the JavaScript to the client domain.

    Domain A - PHP (data.php):

        <?php echo "Server returns data"; ?>

    JS (example.js), which does the Ajax call to the PHP:

        function getData() {
            // assume the Ajax request to data.php is done and the data
            // retrieved; now return the data
            return ajaxed_data;
        }

    Domain B - the client includes example.js from Domain A in his HTML:

        <script type="text/javascript" src="http://www.DomainA.com/example.js"></script>
        <script type="text/javascript">
            alert(getData());
        </script>

    I hope I have made myself understandable! Can this be established? It's something like Google Friend Connect - what I mean is, just provide JavaScript to the client and that's it; everything is carried out on the server side. Thanks for providing this forum.
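
    For what it's worth, the usual workaround for exactly this setup is JSONP-style script inclusion - a hedged sketch (the callback name and the output of data.php are illustrative):

        // Domain A: data.php now prints a call to an agreed-upon callback, e.g.
        //     handleData("Server returns data");

        // Domain B: define the callback *before* including Domain A's script.
        function handleData(data) {
            alert(data);   // runs as soon as the included script executes
        }

        // Domain B's HTML then includes the PHP output directly as a script:
        // <script type="text/javascript" src="http://www.DomainA.com/data.php"></script>

    Because the data arrives as an executed script rather than an XMLHttpRequest, the same-origin restriction does not apply.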

    Read the article

  • Utility of List<T>.Sort() versus List<T>.OrderBy() for a member of a custom container class

    - by ccomet
    I've found myself running back through some old 3.5-framework legacy code, and found some points where a whole bunch of lists and dictionaries must be updated in a synchronized fashion. I've determined that I can make this process infinitely easier to both use and understand by converging these into custom container classes of new custom classes. At some points, however, I need to organize the contents of these new container classes by a specific inner property - for example, sorting by the ID property of one class. As the container classes are primarily based around a generic List object, my first instinct was to implement IComparable on the inner classes and write a CompareTo method that compares the properties. This way, I can just call items.Sort() when I want to invoke the sorting. However, I've been thinking about using items = items.OrderBy(Func) instead. This way it is more flexible if I need to sort by any other property. Readability is better as well, since the property used for sorting appears inline with the sort call rather than requiring a look at the IComparable code. The overall implementation feels cleaner as a result. I don't care for premature or micro-optimization, but I like consistency. I find it best to stick with one kind of implementation for as many cases as is appropriate, and use different implementations where necessary. Is it worth converting my code to use LINQ's OrderBy instead of List.Sort? Is it better practice to stick with the IComparable implementation for these custom containers? Are there any significant mechanical differences between the two paths that I should weigh the decision on? Or is their end functionality equivalent to the point that it just becomes coder's preference?
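
    For reference, the two calls side by side (a minimal sketch assuming the inner class exposes an Id property). One genuine mechanical difference worth weighing: List<T>.Sort sorts in place and is documented as an unstable sort, while OrderBy is a stable sort but produces a new sequence that has to be materialized, e.g. with ToList():

        // In place; relies on the element's IComparable implementation:
        items.Sort();

        // Key is visible at the call site; stable; allocates a new list:
        items = items.OrderBy(item => item.Id).ToList();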

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like 'The man went to the store', the annotations might look like:

        Word   POS  Chunk  NER
        ====   ===  =====  ========
        The    DT   NP     Person
        man    NN   NP     Person
        went   VBD  VP     -
        to     TO   PP     -
        the    DT   NP     Location
        store  NN   NP     Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. A simple example query would be to retrieve all documents where 'Washington' is tagged as a person. While I'm not absolutely committed to the notation, syntactically end users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all documents where there's a word tagged as a person, followed by the words 'arrived at', followed by a word tagged as a location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to approach this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
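
    One hedged idea, for illustration: emit each layer's tag as a token stacked at the same position as its word (via a position increment of zero), so phrase queries can mix layers. The exact Token API varies across Lucene versions; this sketch follows the older 2.x/3.x style:

        // Tokens for the word "man" at character offsets start..end:
        Token word = new Token("Word=man", start, end);

        Token pos = new Token("POS=NN", start, end);
        pos.setPositionIncrement(0);    // stacked on top of the word token

        Token ner = new Token("NER=Person", start, end);
        ner.setPositionIncrement(0);

        // A phrase query such as "NER=Person Word=arrived Word=at NER=Location"
        // then matches position-by-position, regardless of which layer each
        // term came from.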

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its DB design. One thing I'm not sure about is this: I will have many tables which need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, and description. In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that need them? Or do something like this:

        Membership table            Gender table
        ----------------            ------------
        id | created_at             id | created_at
        1  | 22.03.2001             1  | 14.08.2002
        2  | 22.03.2001             2  | 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | ... | id_language
        1         | membership | regular     | null              |     | 1 (english)
        1         | membership | normale     | null              |     | 2 (italian)
        1         | gender     | man         | null              |     | 1 (english)
        1         | gender     | uomo        | null              |     | 2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1             | regular | null              | 1
        1             | normale | null              | 2

        gender_translation table
        -----------------------------
        gender_id | name | alternative_title | id_lang
        1         | man  | null              | 1
        1         | uomo | null              | 2

    and so on. So I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
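
    For illustration, the single generic translation table could be created and queried roughly like this (a sketch; the column types are guesses to adjust to taste):

        CREATE TABLE translations (
            record_id         INT          NOT NULL,
            table_name        VARCHAR(64)  NOT NULL,
            string_name       VARCHAR(255) NOT NULL,
            alternative_title VARCHAR(255) NULL,
            id_language       INT          NOT NULL,
            PRIMARY KEY (record_id, table_name, id_language)
        );

        -- The Italian label for membership row 1:
        SELECT string_name
        FROM translations
        WHERE record_id = 1
          AND table_name = 'membership'
          AND id_language = 2;

    The composite primary key doubles as the lookup index, which is the main thing the per-table variant would have given you anyway.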

    Read the article

  • Parameterized SPARQL query with JENA

    - by sandra
    I'm trying to build a small semantic web application using the Jena framework, JSP and Java. I have a remote SPARQL endpoint and I've already written a simple query which works fine, but now I need to use some parameters. Here is my code so far:

        final static String serviceEndpoint = "http://fishdelish.cs.man.ac.uk/sparql/";

        String comNameQuery =
            "PREFIX fd: <http://fishdelish.cs.man.ac.uk/rdf/vocab/resource/> " +
            "SELECT ?name ?language ?type " +
            "WHERE { ?nameID fd:comnames_ComName ?name ;" +
            "                fd:comnames_Language ?language ;" +
            "                fd:comnames_NameType ?type ." +
            "}";

        Query query = QueryFactory.create(comNameQuery);
        QueryExecution qe = QueryExecutionFactory.sparqlService(serviceEndpoint, query);

        try {
            ResultSet rs = qe.execSelect();
            if (rs.hasNext()) {
                System.out.println(ResultSetFormatter.asText(rs));
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
            qe.close();
        }

    What I want to do is parameterize ?name. I'm new to Jena and I'm not really sure how to use parameters in a SPARQL query. I would appreciate it if someone could help me with this.
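
    One hedged option: recent Jena/ARQ releases include ParameterizedSparqlString, which lets you bind ?name safely instead of concatenating strings (the literal "trout" is just an example value):

        ParameterizedSparqlString pss = new ParameterizedSparqlString();
        pss.setCommandText(
            "PREFIX fd: <http://fishdelish.cs.man.ac.uk/rdf/vocab/resource/> " +
            "SELECT ?name ?language ?type " +
            "WHERE { ?nameID fd:comnames_ComName ?name ; " +
            "                fd:comnames_Language ?language ; " +
            "                fd:comnames_NameType ?type . }");
        pss.setLiteral("name", "trout");   // replaces ?name with "trout"

        Query query = pss.asQuery();
        QueryExecution qe = QueryExecutionFactory.sparqlService(serviceEndpoint, query);

    On older Jena versions without this class, plain string substitution works but leaves you responsible for escaping the injected value.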

    Read the article

  • How to write a custom predicate for multi_index_containder with composite_key?

    - by Titan
    I googled and searched in the Boost manual, but didn't find any examples. Maybe it's a stupid question... anyway. So, we have the famous phonebook from the manual:

        typedef multi_index_container<
            phonebook_entry,
            indexed_by<
                ordered_non_unique<
                    composite_key<
                        phonebook_entry,
                        member<phonebook_entry, std::string, &phonebook_entry::family_name>,
                        member<phonebook_entry, std::string, &phonebook_entry::given_name>
                    >,
                    composite_key_compare<
                        std::less<std::string>,   // family names sorted as by default
                        std::greater<std::string> // given names reversed
                    >
                >,
                ordered_unique<
                    member<phonebook_entry, std::string, &phonebook_entry::phone_number>
                >
            >
        > phonebook;

        phonebook pb;
        ...
        // look for all Whites
        std::pair<phonebook::iterator, phonebook::iterator> p =
            pb.equal_range(boost::make_tuple("White"), my_custom_comp());

    How should my_custom_comp() look? I mean, it's clear to me that it takes a boost::multi_index::composite_key_result<CompositeKey> as an argument (due to compilation errors :) ), but what is CompositeKey in this particular case?

        struct my_custom_comp
        {
            bool operator()( ?? boost::multi_index::composite_key_result<CompositeKey> ?? ) const
            {
                return blah_blah_blah;
            }
        };

    Thanks in advance.
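
    For what it's worth, a hedged sketch of the shape such a predicate can take: a "compatible compare" needs operator() for both argument orders, and it can delegate the field-by-field logic to composite_key_compare rather than unpicking composite_key_result by hand (case_insensitive_less is a hypothetical custom comparator):

        struct my_custom_comp
        {
            typedef boost::multi_index::composite_key_compare<
                case_insensitive_less,     // hypothetical family-name ordering
                std::greater<std::string>  // given names reversed, as in the index
            > base_compare;

            template <typename CompositeKey>
            bool operator()(
                const boost::tuple<std::string>& key,
                const boost::multi_index::composite_key_result<CompositeKey>& res) const
            {
                return base_compare()(key, res);   // tuple vs stored key
            }

            template <typename CompositeKey>
            bool operator()(
                const boost::multi_index::composite_key_result<CompositeKey>& res,
                const boost::tuple<std::string>& key) const
            {
                return base_compare()(res, key);   // stored key vs tuple
            }
        };

    Templating on CompositeKey sidesteps having to name the exact composite_key instantiation, which is otherwise the long member<...>-laden type from the index definition.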

    Read the article

  • AIX specific socket programming query

    - by kumar_m_kiran
    Hi all.

    Question 1: From the SUSE man pages, I get the following details for socket connect options:

        "If the initiating socket is connection-mode, then connect() shall attempt to establish a connection to the address specified by the address argument. If the connection cannot be established immediately and O_NONBLOCK is not set for the file descriptor for the socket, connect() shall block for up to an unspecified timeout interval until the connection is established. If the timeout interval expires before the connection is established, connect() shall fail and the connection attempt shall be aborted. If connect() is interrupted by a signal that is caught while blocked waiting to establish a connection, connect() shall fail and set errno to [EINTR], but the connection request shall not be aborted, and the connection shall be established asynchronously."

    Question: Is the above also valid for AIX (especially the connection timeout, timed wait, etc.)? I do not see it in the AIX man pages (5.1 and 5.3).

    Question 2: I have a client socket whose attributes are:

        a. SO_RCVTIMEO and SO_SNDTIMEO are set to 5 seconds.
        b. AF_INET and SOCK_STREAM.
        c. SO_LINGER with linger on and a time of 5 seconds.
        d. SO_REUSEADDR is set.

    Note that the client socket is not O_NONBLOCK. Question: since O_NONBLOCK is not set and SO_RCVTIMEO/SO_SNDTIMEO are set to 5 seconds, does that mean:

        a. connect() is non-blocking, or blocking?
        b. If blocking, is it timed blocking or "infinite" blocking?
        c. If it is infinite, how do I make a blocking connect() with a timeout of t seconds?

    Sorry if the questions are very naive. Thanks in advance for your input.
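
    Regarding 2c, the portable pattern is a non-blocking connect bracketed by select() - a hedged sketch (error handling trimmed; consult the AIX man pages for platform-specific behaviour):

        #include <errno.h>
        #include <fcntl.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <sys/time.h>

        /* Connect with a timeout of t seconds; returns 0 on success, -1 on error. */
        int connect_timeout(int fd, const struct sockaddr *addr, socklen_t len, int t)
        {
            int flags = fcntl(fd, F_GETFL, 0);
            int rc;

            fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* temporarily non-blocking */
            rc = connect(fd, addr, len);
            if (rc < 0 && errno == EINPROGRESS) {
                fd_set wset;
                struct timeval tv;
                tv.tv_sec = t;
                tv.tv_usec = 0;
                FD_ZERO(&wset);
                FD_SET(fd, &wset);
                rc = select(fd + 1, NULL, &wset, NULL, &tv);
                if (rc > 0) {                         /* connect finished */
                    int err = 0;
                    socklen_t elen = sizeof(err);
                    getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen);
                    rc = err ? -1 : 0;
                } else {
                    rc = -1;                          /* timed out or select failed */
                }
            }
            fcntl(fd, F_SETFL, flags);                /* restore blocking mode */
            return rc;
        }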

    Read the article

  • Can someone recommend a good tutorial on MySQL indexes, specifically when used in an order by clause

    - by Philip Brocoum
    I could try to post and explain the exact query I'm trying to run, but I'm going by the old adage of "give a man a fish and he'll eat for a day; teach a man to fish and he'll eat for the rest of his life." SQL optimization seems to be very query-specific, and even if you could solve this one particular query for me, I'm going to have to write many more queries in the future, and I'd like to be educated on how indexes work in general. Still, here's a quick description of my current problem. I have a query that joins three tables and runs in 0.2 seconds flat. Awesome. I add an ORDER BY clause and it runs in 4 minutes and 30 seconds. Sucky. I denormalize one table so there is one fewer join, add indexes everywhere, and now the query runs in... 20 minutes. What the hell? Finally, I don't use a join at all, but rather a subquery with "WHERE id IN (...) ORDER BY", and now it runs in 1.5 seconds. Pretty decent. What in God's name is going on? I feel like if I actually understood what the indexes were doing I could write some really good SQL. Anybody know some good tutorials? Thanks!
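
    Not a tutorial, but a small illustration of the core idea (on a hypothetical schema): a composite index that covers both the WHERE clause and the ORDER BY lets MySQL read rows already in sorted order, avoiding the filesort that usually causes cliffs like the ones described:

        -- Hypothetical schema:
        CREATE INDEX idx_album_title ON songs (album_id, title);

        EXPLAIN
        SELECT *
        FROM songs
        WHERE album_id = 42
        ORDER BY title;   -- can walk idx_album_title in order: no "Using filesort"

    Running EXPLAIN on each variant of the query and watching for "Using filesort" and "Using temporary" in the Extra column is usually the fastest way to see which plan the ORDER BY is forcing.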

    Read the article

  • Sending an SMS in Android

    - by D4N14L
    Hey, I have been making an Android app which needs to send a text message. Here is the current code I have:

        public class getMessage extends Service {

            @Override
            public IBinder onBind(Intent intent) {
                return null;
            }

            @Override
            public void onStart(Intent intent, int startId) {
                super.onStart(intent, startId);
                client Phone = new client();
                String[] msg = Phone.getMsg(user[0], user[1]);
                PendingIntent pi = PendingIntent.getActivity(this, 0,
                        new Intent(this, getMessage.class), 0);
                SmsManager man = SmsManager.getDefault();
                Log.e("GOT MESSAGE", msg[0] + " : " + msg[1]);
                man.sendTextMessage(msg[0], null, msg[1], pi, null);
                Log.e("Message", "Sent the message?");
            }
        }

    For some reason, the text message will not send using this code, and I'm not sure why. I was hoping that someone here could help me find out why the message won't send. No error is raised, and nothing appears in the log (except for the log messages that I make myself in the code). Also, the manifest does have the correct tags. Suggestions?
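
    One hedged debugging idea: instead of the activity PendingIntent, pass a broadcast "sent" PendingIntent and register a receiver for it, so a failure surfaces as a result code rather than silence (the "SMS_SENT" action string is arbitrary):

        PendingIntent sentPI = PendingIntent.getBroadcast(
                this, 0, new Intent("SMS_SENT"), 0);

        registerReceiver(new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                // Activity.RESULT_OK means the message left the device; other
                // codes (e.g. SmsManager.RESULT_ERROR_GENERIC_FAILURE) explain
                // why a send failed.
                Log.e("SMS", "send result: " + getResultCode());
            }
        }, new IntentFilter("SMS_SENT"));

        SmsManager.getDefault().sendTextMessage(msg[0], null, msg[1], sentPI, null);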

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS.

    ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past.

    With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes), etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system.

    Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.

    Why Migrate?

    Why is the migration of virtual servers important? Some of the most common reasons are:

    - Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance.
    - Moving a workload to a larger system because the workload has outgrown its original system.
    - If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot.
    - You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system.

    Concepts

    For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s) or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool:
    - contains the zone's Solaris content, i.e. the root file system
    - does not contain any content not related to the zone
    - can only be mounted by one Solaris instance at a time

    Method Overview

    Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!")

    1. Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.)
    2. Install the zone on one system, onto shared storage.
    3. Boot the zone.
    4. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.)
    5. Shut down the zone.
    6. Detach the zone from the original system.
    7. Attach the zone to its new "home" system.
    8. Boot the zone.

    The zone can be used normally, and even migrated back, or to a different system.

    Details

    The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone.

    The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.

    - Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions.
    - Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk.
    - Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, log in and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions, because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions.
    - To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
    - Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time.
    - Identify the device names of that VDI in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1."

    Assumptions

    In the example shown below, I make these assumptions:

    - The guest that will own the zone at the beginning is named sysA.
    - The guest that will own the zone after the first migration is named sysB.
    - On sysA, the shared disk is named /dev/dsk/c7t2d0.
    - On sysB, the shared disk is named /dev/dsk/c7t3d0.

    (Finally!) The Steps

    Step 1) Determine the name of the disk that will move back and forth between the systems.

        root@sysA:~# format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
            0. c7t0d0
               /pci@0,0/pci8086,2829@d/disk@0,0
            1. c7t2d0
               /pci@0,0/pci8086,2829@d/disk@2,0
        Specify disk (enter its number): ^D

    Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated.

        root@sysA:~# format -e c7t2d0
        selecting c7t2d0
        [disk formatted]
        FORMAT MENU:
        ...
        format> fdisk
        No fdisk table exists. The default partition for the disk is:
          a 100% "SOLARIS System" partition
        Type "y" to accept the default partition,
        otherwise type "n" to edit the partition table. n
        SELECT ONE OF THE FOLLOWING:
        ...
        Enter Selection: 1
        ...
        G=EFI_SYS    0=Exit? f
        SELECT ONE...
        ...
        6
        format> label
        ...
        Specify Label type[1]: 1
        Ready to label disk, continue? y
        format> quit
        root@sysA:~# ls /dev/dsk/c7t2d0
        /dev/dsk/c7t2d0

    Step 3) Configure zone1 on sysA.

        root@sysA:~# zonecfg -z zone1
        Use 'create' to begin configuring a new zone.
        zonecfg:zone1> create
        create: Using system default template 'SYSdefault'
        zonecfg:zone1> set zonename=zone1
        zonecfg:zone1> set zonepath=/zones/zone1
        zonecfg:zone1> add rootzpool
        zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0
        zonecfg:zone1:rootzpool> end
        zonecfg:zone1> exit
        root@sysA:~# zonecfg -z zone1 info
        zonename: zone1
        zonepath: /zones/zone1
        brand: solaris
        autoboot: false
        bootargs:
        file-mac-profile:
        pool:
        limitpriv:
        scheduling-class:
        ip-type: exclusive
        hostid:
        fs-allowed:
        anet:
        ...
        rootzpool:
            storage: dev:dsk/c7t2d0

    Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...)

        root@sysA:~# zoneadm -z zone1 install
        Created zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install
               Image: Preparing at /zones/zone1/root.
         AI Manifest: /tmp/manifest.xml.RXaycg
          SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
            Zonename: zone1
        Installation: Starting ...
            Creating IPS image
            Startup linked: 1/1 done
            Installing packages from:
                solaris
                    origin: http://pkg.us.oracle.com/support/
        DOWNLOAD                        PKGS         FILES     XFER (MB)   SPEED
        Completed                    183/183   33556/33556   222.2/222.2  2.8M/s

        PHASE                                  ITEMS
        Installing new actions           46825/46825
        Updating package state database         Done
        Updating image state                    Done
        Creating fast lookup database           Done
        Installation: Succeeded
                Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 1696.847 seconds.
        Next Steps: Boot the zone, then log into the zone console (zlogin -C)
                    to complete the configuration process.
        Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install

    Step 5) Boot the zone.

        root@sysA:~# zoneadm -z zone1 boot

    Step 6) Log in to the zone's console to complete the specification of system information.

        root@sysA:~# zlogin -C zone1

    Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.

    Step 7) Shut down the zone so it can be "moved."

        root@sysA:~# zoneadm -z zone1 shutdown

    Step 8) Detach the zone so that the original global zone can't use it.

        root@sysA:~# zoneadm list -cv
          ID NAME     STATUS     PATH           BRAND    IP
           0 global   running    /              solaris  shared
           - zone1    installed  /zones/zone1   solaris  excl
        root@sysA:~# zpool list
        NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        zone1_rpool  1.98G   484M  1.51G  23%  1.00x  ONLINE  -
        root@sysA:~# zoneadm -z zone1 detach
        Exported zone zpool: zone1_rpool

    Step 9) Review the result and shut down sysA so that sysB can use the shared disk.

        root@sysA:~# zpool list
        NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool 17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        root@sysA:~# zoneadm list -cv
          ID NAME     STATUS      PATH           BRAND    IP
           0 global   running     /              solaris  shared
           - zone1    configured  /zones/zone1   solaris  excl
        root@sysA:~# init 0

    Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in the section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.)

        root@sysB:~# zoneadm list -cv
          ID NAME     STATUS      PATH           BRAND    IP
           0 global   running     /              solaris  shared
           - zone1    configured  /zones/zone1   solaris  excl
        root@sysB:~# zonecfg -z zone1 info
        zonename: zone1
        zonepath: /zones/zone1
        brand: solaris
        autoboot: false
        bootargs:
        file-mac-profile:
        pool:
        limitpriv:
        scheduling-class:
        ip-type: exclusive
        hostid:
        fs-allowed:
        anet:
            linkname: net0
        ...
        rootzpool:
            storage: dev:dsk/c7t3d0

    Step 11) Attaching the zone automatically imports the zpool.

        root@sysB:~# zoneadm -z zone1 attach
        Imported zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
            Installing: Using existing zone boot environment
            Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
            Cache: Using /var/pkg/publisher.
            Updating non-global zone: Linking to image /.
            Processing linked: 1/1 done
            Updating non-global zone: Auditing packages.
            No updates necessary for this image.
            Updating non-global zone: Zone updated.
            Result: Attach Succeeded.
            Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach
        root@sysB:~# zoneadm -z zone1 boot
        root@sysB:~# zlogin zone1
        [Connected to zone 'zone1' pts/2]
        Oracle Corporation      SunOS 5.11      11.1    September 2012

    Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back.
        root@zone1:~# ls /opt
        root@zone1:~# touch /opt/fileA
        root@zone1:~# ls -l /opt/fileA
        -rw-r--r--   1 root     root           0 Oct 22 14:47 /opt/fileA
        root@zone1:~# exit
        logout
        [Connection to zone 'zone1' pts/2 closed]
        root@sysB:~# zoneadm -z zone1 shutdown
        root@sysB:~# zoneadm -z zone1 detach
        Exported zone zpool: zone1_rpool
        root@sysB:~# init 0

    Step 13) Back on sysA, check the status.

        Oracle Corporation      SunOS 5.11      11.1    September 2012
        root@sysA:~# zoneadm list -cv
          ID NAME     STATUS      PATH           BRAND    IP
           0 global   running     /              solaris  shared
           - zone1    configured  /zones/zone1   solaris  excl
        root@sysA:~# zpool list
        NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool 17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -

    Step 14) Re-attach the zone to sysA.

        root@sysA:~# zoneadm -z zone1 attach
        Imported zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
            Installing: Using existing zone boot environment
            Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
            Cache: Using /var/pkg/publisher.
            Updating non-global zone: Linking to image /.
            Processing linked: 1/1 done
            Updating non-global zone: Auditing packages.
            No updates necessary for this image.
            Updating non-global zone: Zone updated.
            Result: Attach Succeeded.
            Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach
        root@sysA:~# zpool list
        NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        zone1_rpool  1.98G   491M  1.51G  24%  1.00x  ONLINE  -
        root@sysA:~# zoneadm -z zone1 boot
        root@sysA:~# zlogin zone1
        [Connected to zone 'zone1' pts/2]
        Oracle Corporation      SunOS 5.11      11.1    September 2012
        root@zone1:~# zpool list
        NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
        rpool 1.98G   538M  1.46G  26%  1.00x  ONLINE  -

    Step 15) Check for the file created on sysB earlier.

        root@zone1:~# ls -l /opt
        total 1
        -rw-r--r--   1 root     root           0 Oct 22 14:47 fileA

    Next Steps

    Here is a brief list of some of the fun things you can try next:

    - Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones!
    - Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.)

    Conclusion

    Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
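
    As an aside on the "zonecfg export" tip from the Method Overview: replaying the configuration might look like this hedged sketch (/net/share is just an illustrative location for the exported file; remember to edit the storage device name in the file if it differs on the target host):

        root@sysA:~# zonecfg -z zone1 export > /net/share/zone1.cfg
        root@sysB:~# zonecfg -z zone1 -f /net/share/zone1.cfg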

    Read the article

  • Installing backtrack - No Screens Found when running “startx”

    - by Chris
    I'm installing BackTrack to use for a project for my wireless networking class. I have an Acer Aspire One netbook and I'm installing from a bootable USB stick. When I type "startx", it goes through the motions of printing a bunch of text, then ends with:

        Fatal server error: no screens found

    What gives? This is a fairly new laptop, so why is this happening, and how can I solve it?

    Read the article

  • virsh vcpu_period and vcpu_quota

    - by Programster
    I have been looking into ways to divide my CPU amongst KVM guests other than just setting vCPU access limits. I understand the concept of cpu_shares, which can be set and displayed with virsh schedinfo, but that command also lists vcpu_period and vcpu_quota. Looking at the man page, I know what the acceptable input values are, but could somebody please explain in simple terms what these two parameters actually do?
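
    For context, here is how the two parameters are set (a hedged illustration; "myguest" is a placeholder domain name, and both values are in microseconds - as I understand it, the quota applies per vCPU within each period):

        # Enforcement interval of 100 ms, with at most 50 ms of CPU time
        # allowed per vCPU in each interval (i.e. a 50% cap):
        virsh schedinfo myguest --set vcpu_period=100000
        virsh schedinfo myguest --set vcpu_quota=50000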

    Read the article

  • How do I bypass pkgadd signature verification?

    - by Brian Knoblauch
    Trying to install the CollabNet Subversion client on Solaris x64, but I'm hung up on:

        ## Verifying signature for signer <Alexander Thomas(AT)>
        pkgadd: ERROR: Signature verification failed while verifying certificate <subject=Alexander Thomas(AT), issuer=Alexander Thomas(AT)>:<self signed certificate>.

    Is there any way to just bypass the certificate check? None of the options listed in the man page seemed appropriate.

    Read the article

  • FreeBSD 8.1 64bit logrotate - ELF interpreter /libexec/ld-elf-so.1 not found

    - by Richard Knop
    I am trying to get logrotate running on a FreeBSD 8.1 virtual machine. I installed logrotate with pkg_add, I have created the logrotate.conf file, and I have also run:

        mkdir /var/lib/
        touch /var/lib/logrotate.status

    Now when I do:

        /usr/local/sbin/logrotate -d /usr/local/etc/logrotate.conf

    I get this error:

        ELF interpreter /libexec/ld-elf-so.1 not found
        Abort

    The file ld-elf.so.1 does exist:

        locate ld-elf.so.1
        /libexec/ld-elf.so.1
        /usr/libexec/ld-elf.so.1
        /usr/share/man/man1/ld-elf.so.1.1.gz

    Read the article
