Search Results

Search found 9410 results on 377 pages for 'simulator difference'.

Page 43/377

  • What's the performance difference between int and varchar for primary keys?

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions... 1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple databases, so I'm looking for generalizations, if that's possible. 2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so that eventually it won't matter anyway? My varchar keys won't have any meaning, except for the unique system ID part, but I may want to obscure that somehow. Also, I plan to use all the bits of each character efficiently; I don't, for example, plan to encode the integer 123 as the character string "123". So I don't think varchars will require more space than integers.
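
    For illustration only, here is a minimal sketch of the two schemes being weighed above (table and column names are hypothetical, not from the question): a composite integer key built from a peer/system ID plus a per-peer sequence, versus a single opaque varchar key.

        -- Hypothetical sketch of the two key schemes discussed above.
        -- Option A: composite integer key (replicating peer id + local sequence).
        CREATE TABLE item_int (
            system_id SMALLINT NOT NULL,   -- identifies the replicating peer
            seq_id    BIGINT   NOT NULL,   -- sequence local to that peer
            payload   VARCHAR(200),
            PRIMARY KEY (system_id, seq_id)
        );

        -- Option B: single opaque variable-length string key.
        CREATE TABLE item_vc (
            item_id   VARCHAR(32) NOT NULL, -- e.g. peer prefix + densely encoded sequence
            payload   VARCHAR(200),
            PRIMARY KEY (item_id)
        );

    In most engines the fixed-width integer pair compares and packs more cheaply in index pages, which is where the usual "integers are better optimized" advice comes from.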

    Read the article

  • Difference between MVC FilterAttribute and Filter

    - by zaaaaphod
    I'm trying to write my own custom AuthorizationAttribute that uses DI. I'm using the Munq IoC provider for its speed and have decided to use constructor injection on all my classes as opposed to post-instantiation property binding (because I prefer it). I'm trying to write a custom IFilterProvider that will use my IoC container to return requests for filters (so that I can map concrete classes using the container). I've come up with the following. public class FilterProvider : IFilterProvider { private readonly IocContainer _container; public FilterProvider(IocContainer container) { _container = container; } public IEnumerable<Filter> GetFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor) { var x = Enumerable.Union<Object>(_container.ResolveAll<IActionFilter>(), _container.ResolveAll<IAuthorizationFilter>()); foreach (Filter actionFilter in x) yield return new Filter(actionFilter, FilterScope.First, null); } } The above code will fail during the foreach, because my objects that implement IAuthorizationFilter are based on FilterAttribute and not Filter. My question is, what is the difference between Filter and FilterAttribute? I would have thought that there would have been a common link between them, unless I'm missing something. Another, deeper question is: how come there is no IFilterAttributeProvider that would support IEnumerable GetFilters(...)? Is there some other way that I should be using to resolve IAuthorizationFilter via my IoC container? Thank you very much for your help. Z
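
    To illustrate the distinction (a hedged sketch, not a verified fix): Filter is a wrapper that pairs a filter instance with scope and order metadata, while FilterAttribute is the base class of the attribute-style filters themselves, so the resolved attribute instances have to be wrapped in Filter objects rather than cast to Filter.

        // Sketch only: iterate the resolved instances as objects and wrap each one,
        // instead of casting them to Filter in the foreach.
        public IEnumerable<Filter> GetFilters(ControllerContext controllerContext,
                                              ActionDescriptor actionDescriptor)
        {
            var instances = _container.ResolveAll<IActionFilter>().Cast<object>()
                .Union(_container.ResolveAll<IAuthorizationFilter>().Cast<object>());

            foreach (var instance in instances)
                yield return new Filter(instance, FilterScope.First, null);
        }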

    Read the article

  • Performance difference between functions and pattern matching in Mathematica

    - by Samsdram
    So Mathematica is different from other dialects of Lisp because it blurs the line between functions and macros. In Mathematica, if users wanted to write a mathematical function, they would likely use pattern matching like f[x_]:= x*x instead of f=Function[{x},x*x], though both would return the same result when called with f[x]. My understanding is that the first approach is roughly equivalent to a Lisp macro, and in my experience it is favored because of the more concise syntax. So I have two questions: is there a performance difference between calling functions and using the pattern-matching/macro approach? Part of me wouldn't be surprised if Functions were actually transformed into some version of macros to allow features like Listable to be implemented. The reason I care about this question is the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where an error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
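
    A quick way to answer the performance half empirically (a minimal sketch; the exact numbers will depend on version and hardware) is to time both definitions directly:

        (* Sketch: time a downvalue (pattern) definition against a pure function. *)
        f[x_] := x*x;
        g = Function[{x}, x*x];

        First@AbsoluteTiming[Do[f[2.5], {10^6}]]   (* pattern-matching call *)
        First@AbsoluteTiming[Do[g[2.5], {10^6}]]   (* pure-function call *)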

    Read the article

  • Performance issue: difference between "select s.*" and "select *"

    - by kamil
    Recently I had a performance problem with my query. The situation is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I've finally discovered that a query with a select prefix like: select sth.* from Something as sth... is 300 times slower than a query started this way: select * from Something as sth.. Could somebody help me and answer why that is so? Some external documents on this would be really useful. The tables used for testing were: SALES_UNIT contains some basic info about a sales unit node, such as its name. The only association is to table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relation between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM. No association with any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM. The actual queries I ran were: select s.* from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null select * from sales_unit s left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id) where r.sales_unit_child_id is null It is the same query: both use a left join, and the only difference is the select clause.

    Read the article

  • Find the difference between two very large lists

    - by user157195
    I have two large lists (each could be a hundred million items); the source of each list can be either a database table or a flat file. Both lists are of comparable size and both are unsorted. I need to find the difference between them, so I have 3 scenarios: 1. List1 is a database table (assume each row simply has one item (key) that is a string), List2 is a large file. 2. Both lists are from two DB tables. 3. Both lists are from two files. In case 2, I plan to use: select a.item from MyTable a where a.item not in (select b.item from MyTable b) This is clearly inefficient; is there a better way? Another approach is: I plan to sort each list, and then walk down both of them to find the diff. If a list is from a file, I have to read it into a DB table first, then use DB sorting to output the list. Is the run-time complexity still O(n log n) in DB sorting? Either approach is a pain and seems like it would be very slow when the lists involved have hundreds of millions of items. Any suggestions?
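
    For case 2, a hedged sketch of a common alternative to NOT IN (assuming two hypothetical tables ListA and ListB, each with a string column item, and an index on item): an anti-join, which many optimizers handle better on very large sets.

        -- Items present in ListA but missing from ListB (anti-join sketch).
        SELECT a.item
        FROM   ListA a
               LEFT JOIN ListB b ON b.item = a.item
        WHERE  b.item IS NULL;

        -- Equivalent NOT EXISTS formulation.
        SELECT a.item
        FROM   ListA a
        WHERE  NOT EXISTS (SELECT 1 FROM ListB b WHERE b.item = a.item);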

    Read the article

  • Difference between "Data Binding", "Data Hiding", "Data Wrapping" and "Encapsulation"?

    - by krishna Chandra
    I have been studying the concepts of object-oriented programming, but I am still not able to distinguish between the following concepts: a) Data Binding b) Data Hiding c) Data Wrapping d) Encapsulation e) Data Abstraction I have gone through a lot of books, and I have also searched for the difference on Google, but I am still not able to tell them apart. Could anyone please help me?
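
    As a small generic sketch (not tied to any particular book's definitions): data hiding is usually illustrated by making state private, and encapsulation by bundling that hidden state with the methods that guard it.

        // Minimal sketch: 'balance' is hidden from outside code (data hiding);
        // the field and the methods that control it form one unit (encapsulation).
        public class Account {
            private double balance;              // not directly reachable by callers

            public void deposit(double amount) {
                if (amount > 0) {
                    balance += amount;           // state changes only through this method
                }
            }

            public double getBalance() {
                return balance;
            }
        }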

    Read the article

  • Difference in DocumentBuilder.parse when using JRE 1.5 and JDK 1.6

    - by dhiller
    Recently at last we have switched our projects to Java 1.6. When executing the tests I found out that using 1.6 a SAXParseException is not thrown which has been thrown using 1.5. Below is my test code to demonstrate the problem. import java.io.StringReader; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.stream.StreamSource; import javax.xml.validation.SchemaFactory; import org.junit.Test; import org.xml.sax.InputSource; import org.xml.sax.SAXParseException; /** * Test class to demonstrate the difference between JDK 1.5 to JDK 1.6. * * Seen on Linux: * * <pre> * #java version "1.6.0_18" * Java(TM) SE Runtime Environment (build 1.6.0_18-b07) * Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode) * </pre> * * Seen on OSX: * * <pre> * java version "1.6.0_17" * Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025) * Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode) * </pre> * * @author dhiller (creator) * @author $Author$ (last editor) * @version $Revision$ * @since 12.03.2010 11:32:31 */ public class TestXMLValidation { /** * Tests the schema validation of an XML against a simple schema. * * @throws Exception * Falls ein Fehler auftritt * @throws junit.framework.AssertionFailedError * Falls eine Unit-Test-Pruefung fehlschlaegt */ @Test(expected = SAXParseException.class) public void testValidate() throws Exception { final StreamSource schema = new StreamSource( new StringReader( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" " + "elementFormDefault=\"qualified\" xmlns:xsd=\"undefined\">" + "<xs:element name=\"Test\"/>" + "</xs:schema>" ) ); final String xml = "<Test42/>"; final DocumentBuilderFactory newFactory = DocumentBuilderFactory.newInstance(); newFactory.setSchema( SchemaFactory.newInstance( "http://www.w3.org/2001/XMLSchema" ).newSchema( schema ) ); final DocumentBuilder documentBuilder = newFactory.newDocumentBuilder(); documentBuilder.parse( new InputSource( new StringReader( xml ) ) ); } } When using a JVM 1.5 the test passes, on 1.6 it fails with "Expected exception SAXParseException". The Javadoc of the DocumentBuilderFactory.setSchema(Schema) Method says: When errors are found by the validator, the parser is responsible to report them to the user-specified ErrorHandler (or if the error handler is not set, ignore them or throw them), just like any other errors found by the parser itself. In other words, if the user-specified ErrorHandler is set, it must receive those errors, and if not, they must be treated according to the implementation specific default error handling rules. The Javadoc of the DocumentBuilder.parse(InputSource) method says: BTW: I tried setting an error handler via setErrorHandler, but there still is no exception. Now my question: What has changed to 1.6 that prevents the schema validation to throw a SAXParseException? Is it related to the schema or to the xml that I tried to parse?

    Read the article

  • Difference between two datetime strings: setting timezone

    - by Frank Nwoko
    //Difference between 2 dates This function works well but display wrong time format. Pls how can I change the time of this function from GMT to GMT+1? Displays 15hrs 22mins instead of 16hrs 22mins. Thanks function get_date_diff($start, $end="NOW") { $sdate = strtotime($start); $edate = strtotime($end); $timeshift = ""; $time = $edate - $sdate; if($time>=0 && $time<=59) { // Seconds $timeshift = $time.' seconds '; } elseif($time>=60 && $time<=3599) { // Minutes + Seconds $pmin = ($edate - $sdate) / 60; $premin = explode('.', $pmin); $presec = $pmin-$premin[0]; $sec = $presec*60; $timeshift = $premin[0].' min '.round($sec,0).' sec '."<b>ago</b>"; } elseif($time>=3600 && $time<=86399) { // Hours + Minutes $phour = ($edate - $sdate) / 3600; $prehour = explode('.',$phour); $premin = $phour-$prehour[0]; $min = explode('.',$premin*60); $presec = '0.'.$min[1]; $sec = $presec*60; $timeshift = $prehour[0].' hrs '.$min[0].' min '.round($sec,0).' sec '."<b>ago</b>"; } elseif($time>=86400) { // Days + Hours + Minutes $pday = ($edate - $sdate) / 86400; $preday = explode('.',$pday); $phour = $pday-$preday[0]; $prehour = explode('.',$phour*24); $premin = ($phour*24)-$prehour[0]; $min = explode('.',$premin*60); $presec = '0.'.$min[1]; $sec = $presec*60; $timeshift = $preday[0].' days '.$prehour[0].' hrs '.$min[0].' min '.round($sec,0).' sec '."<b>ago</b>"; } return $timeshift; }
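
    A hedged sketch of two ways to handle the zone (assuming PHP 5.3+ and that the input strings carry no zone of their own; the zone name is just an example): either set the default time zone before the existing strtotime() calls, or let DateTime do the subtraction.

        <?php
        // Option 1 (sketch): interpret both strings in GMT+1 before the existing
        // strtotime() calls. Note the inverted sign convention of the Etc/* zones.
        date_default_timezone_set('Etc/GMT-1');   // 'Etc/GMT-1' means GMT+1

        // Option 2 (sketch): let DateTime/DateInterval do the arithmetic.
        function get_date_diff_tz($start, $end = 'now') {
            $tz   = new DateTimeZone('Etc/GMT-1');
            $from = new DateTime($start, $tz);
            $to   = new DateTime($end, $tz);
            $diff = $from->diff($to);             // DateInterval
            return sprintf('%d days %d hrs %d min ago', $diff->days, $diff->h, $diff->i);
        }

        echo get_date_diff_tz('2010-04-01 10:00:00');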

    Read the article

  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ( "/NoExecute=OptOut" in boot.ini) with no exceptions? "no exceptions" = empty list of programs for which DEP does not apply. DEP = Data Execution Prevention (hardware). One would expect it to work the same way, but it makes a difference for some applications. E.g. for all versions of UltraEdit 14 (14.2). It crashes at startup for DEP always on, at least on Microsoft Windows XP Professional Edition x64 edition. (2010-03-11: this problem has been fixed with UltraEdit 15.2 and later.) Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP aware EXE packer (ASPack) that Microsoft coded a backdoor for. Is there a difference between Windows XP, Windows Vista and Windows 7 ? Is there a difference between 32 bit and 64 bit versions of Windows ? Sources: From [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1], "Hardware DEP has a backdoor" by Fabrice Roux. 2007-02-26. "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Bascially Microsoft checks the executable header for a section matching one of the 3 strings. If one these strings is found, DEP will be turned OFF for this application by windows. ... 'aspack', 'pcle', 'sforce'" From [http://www.grc.com/sn/sn-078.htm], by Steve Gibson. "I can’t find any documentation on Microsoft’s site anywhere, because we’re seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren’t having any opt-out programs. It turns out it’s not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won’t launch, Windows blocks it from launching ... in always-on mode." From [http://www.grc.com/sn/sn-083.htm], by Steve Gibson. "... IrfanView ... won’t run with DEP turned on. It’s because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn’t because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it’s running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, it’s DEP- compatible because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."

    Read the article

  • List<T> and IEnumerable difference

    - by Jonas Elfström
    While implementing this generic merge sort, as a kind of Code Kata, I stumbled on a difference between IEnumerable and List that I need help to figure out. Here's the MergeSort public class MergeSort<T> { public IEnumerable<T> Sort(IEnumerable<T> arr) { if (arr.Count() <= 1) return arr; int middle = arr.Count() / 2; var left = arr.Take(middle).ToList(); var right = arr.Skip(middle).ToList(); return Merge(Sort(left), Sort(right)); } private static IEnumerable<T> Merge(IEnumerable<T> left, IEnumerable<T> right) { var arrSorted = new List<T>(); while (left.Count() > 0 && right.Count() > 0) { if (Comparer<T>.Default.Compare(left.First(), right.First()) < 0) { arrSorted.Add(left.First()); left=left.Skip(1); } else { arrSorted.Add(right.First()); right=right.Skip(1); } } return arrSorted.Concat(left).Concat(right); } } If I remove the .ToList() on the left and right variables it fails to sort correctly. Do you see why? Example var ints = new List<int> { 5, 8, 2, 1, 7 }; var mergeSortInt = new MergeSort<int>(); var sortedInts = mergeSortInt.Sort(ints); With .ToList() [0]: 1 [1]: 2 [2]: 5 [3]: 7 [4]: 8 Without .ToList() [0]: 1 [1]: 2 [2]: 5 [3]: 7 [4]: 2 Edit It was my stupid test that got me. I tested it like this: var sortedInts = mergeSortInt.Sort(ints); ints.Sort(); if (Enumerable.SequenceEqual(ints, sortedInts)) Console.WriteLine("ints sorts ok"); just changing the first row to var sortedInts = mergeSortInt.Sort(ints).ToList(); removes the problem (and the lazy evaluation). EDIT 2010-12-29 I thought I would figure out just how the lazy evaluation messes things up here but I just don't get it. Remove the .ToList() in the Sort method above like this var left = arr.Take(middle); var right = arr.Skip(middle); then try this var ints = new List<int> { 5, 8, 2 }; var mergeSortInt = new MergeSort<int>(); var sortedInts = mergeSortInt.Sort(ints); ints.Sort(); if (Enumerable.SequenceEqual(ints, sortedInts)) Console.WriteLine("ints sorts ok"); When debugging You can see that before ints.Sort() a sortedInts.ToList() returns [0]: 2 [1]: 5 [2]: 8 but after ints.Sort() it returns [0]: 2 [1]: 5 [2]: 5 What is really happening here?
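
    A hedged aside on what deferred execution means here (a standalone sketch, not the original test code): an IEnumerable built with Take/Skip re-reads its source every time it is enumerated, so mutating the source list afterwards, e.g. with ints.Sort(), changes what the already-created query returns.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class LazyDemo
        {
            static void Main()
            {
                var ints = new List<int> { 5, 8, 2 };

                // Deferred query: nothing is read from 'ints' yet.
                IEnumerable<int> firstTwo = ints.Take(2);

                Console.WriteLine(string.Join(",", firstTwo)); // 5,8

                ints.Sort(); // mutate the underlying list: it is now 2,5,8

                // Enumerating the same query object again now sees the sorted data.
                Console.WriteLine(string.Join(",", firstTwo)); // 2,5
            }
        }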

    Read the article

  • Difference between SQL 2005 and SQL 2008 for inserting multiple rows with XML

    - by Sam Dahan
    I am using the following SQL code for inserting multiple rows of data in a table. The data is passed to the stored procedure using an XML variable : INSERT INTO MyTable SELECT SampleTime = T.Item.value('SampleTime[1]', 'datetime'), Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/MyRecord') T(item) I have a whole bunch of unit tests to verify that I am inserting the right information, the right number of records, etc.. when I call the stored procedure. All fine and dandy - that is, until we began to monkey around with the compatibility level of the database. The code above worked beautifully as long as we kept the compatibility level of the DB at 90 (SQL 2005). When we set the compatibility level at 100 (SQL 2008), the unit tests failed, because the stored procedure using the code above times out. The unit tests are dropping the database, re-creating it from scripts, and running the tests on the brand new DB, so it's not - I think - a question of the 'old compatibility level' sticking around. Using the SQL Management studio, I made up a quick test SQL script. Using the same XML chunk, I alter the DB compat level , truncate the table, then use the code above to insert 650 rows. When the level is 90 (SQL 2005), it runs in milliseconds. When the level is 100 (SQL 2008) it sometimes takes over a minute, sometimes runs in milliseconds. I'd appreciate any insight anyone might have into that. EDIT The script takes over a minute to run with my actual data, which has more rows than I show here, is a real table, and has an index. With the following example code, the difference goes between milliseconds and around 5 seconds. --use [master] --ALTER DATABASE MyDB SET compatibility_level =100 use [MyDB] declare @xml xml set @xml = '<?xml version="1.0"?> <Root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Record> <SampleTime>2009-01-24T00:00:00</SampleTime> <Volume1>0</Volume1> <Volume2>0</Volume2> </Record> ..... 653 records, sample time spaced out 4 hours ........ </Root>' DECLARE @myTable TABLE( ID int IDENTITY(1,1) NOT NULL, [SampleTime] [datetime] NOT NULL, [Volume1] [float] NULL, [Volume2] [float] NULL) INSERT INTO @myTable select T.Item.value('SampleTime[1]', 'datetime') as SampleTime, Volume1 = T.Item.value('Volume1[1]', 'float'), Volume2 = T.Item.value('Volume2[1]', 'float') FROM @xml.nodes('//Root/Record') T(item) I uncomment the 2 lines at the top, select them and run just that (the ALTER DATABASE statement), then comment the 2 lines, deselect any text and run the whole thing. When I change from 90 to 100, it runs all the time in 5 seconds (I change the level once, but I run the series several times to see if I have consistent results). When I change from 100 to 90, it runs in milliseconds all the time. Just so you can play with it too. I am using SQL Server 2008 R2 standard edition.
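
    One rewrite that is often suggested for this kind of regression (a sketch against the same hypothetical @myTable/@xml as above; whether it helps has to be checked against the actual plan) is to address the text node explicitly and drop the descendant ('//') step, which spares the XML reader from probing for mixed content:

        INSERT INTO @myTable (SampleTime, Volume1, Volume2)
        SELECT
            T.Item.value('(SampleTime/text())[1]', 'datetime'),
            T.Item.value('(Volume1/text())[1]',    'float'),
            T.Item.value('(Volume2/text())[1]',    'float')
        FROM @xml.nodes('/Root/Record') AS T(Item);

    Comparing the estimated plans of the two variants under both compatibility levels is the quickest way to see whether the slowdown comes from the shape of the XML value() plan.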

    Read the article

  • Is there any functional difference between immutable value types and immutable reference types?

    - by Kendall Frey
    Value types are types which do not have an identity. When one variable is modified, other instances are not. Using JavaScript syntax as an example, here is how a value type works. var foo = { a: 42 }; var bar = foo; bar.a = 0; // foo.a is still 42 Reference types are types which do have an identity. When one variable is modified, other instances are as well. Here is how a reference type works. var foo = { a: 42 }; var bar = foo; bar.a = 0; // foo.a is now 0 Note how the examples use mutable objects to show the difference. If the objects were immutable, you couldn't do that, so that kind of testing for value/reference types doesn't work. Is there any functional difference between immutable value types and immutable reference types? Is there any algorithm that can tell the difference between a reference type and a value type if they are immutable? Reflection is cheating. I'm wondering this mostly out of curiosity.
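
    To make the question concrete (a hedged sketch in the same JavaScript-style notation as above): once mutation is ruled out, the only remaining probe is identity comparison, and even that only tells the two apart if the language exposes it.

        // Sketch: with immutable objects, mutation-based tests are gone,
        // so identity is the only observable lever left.
        var foo = Object.freeze({ a: 42 });
        var bar = foo;                       // same identity
        var baz = Object.freeze({ a: 42 });  // equal contents, separate identity

        console.log(foo === bar); // true  -- reference semantics still visible
        console.log(foo === baz); // false -- structural equality is not identity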

    Read the article

  • Why do GLSL's arithmetic functions yield such different results on the iPad compared to the simulator?

    - by cheeesus
    I'm currently chasing some bugs in my OpenGL ES 2.0 fragment shader code running on iOS devices. The code runs fine in the simulator, but on the iPad it has huge problems and some of the calculations yield vastly different results; I had, for example, 0.0 on the iPad and 4013.17 on the simulator, so I'm not talking about small differences which could be the result of rounding errors. One of the things I noticed is that, on the iPad, float1 = pow(float2, 2.0); can yield results which are very different from the results of float1 = float2 * float2; Specifically, when using pow(x, 2.0) on a variable containing a larger negative number like -8, it seemed to return a value which satisfied the condition if (powResult <= 0.0). Also, the result of both operations (pow(x, 2.0) as well as x*x) differs between the simulator and the iPad. The floats used are mediump, but I get the same results with highp. Is there a simple explanation for those differences? I'm narrowing the problem down, but it takes so much time, so maybe someone can help me here with a simple explanation.
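
    For what it's worth, a hedged sketch of the usual workaround: the GLSL ES specification leaves pow(x, y) undefined for x < 0, so squaring a possibly negative value through pow() is undefined behaviour that the simulator (which runs the shader on the CPU) may happen to tolerate while the device GPU does not.

        // Fragment-shader sketch; 'v' stands for any possibly negative value.
        mediump float v = -8.0;
        mediump float risky = pow(v, 2.0);      // undefined for v < 0.0 in GLSL ES
        mediump float safe  = v * v;            // well-defined for any v
        mediump float alt   = pow(abs(v), 2.0); // if pow() is really needed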

    Read the article

  • Why does my plist file exist even when I've uninstalled an app and wiped the simulator?

    - by just_another_coder
    I am trying to detect the existence of a plist file on application start. When my data class (descendant of NSObject) is initialized, it loads the plist file. Before it loads, it checks to see if the file exists, and if it doesn't, it sets the data = nil instead of trying to process the array of data in the plist file. However, every time it starts it indicates the file exists. Even when I uninstall the app in the organizer and when wiping the simulator directory (iPhone Simulator/3.0/Applications). I have the following static method in a library class named FileIO: +(BOOL) fileExists:(NSString *)fileName { NSString *path = [FileIO getPath:fileName]; // gets the path to the docs directory if ([[NSFileManager defaultManager] fileExistsAtPath:path]) return YES; else return NO; } In my data class: //if( [FileIO fileExists:kFavorites_Filename] ) [FileIO deleteFile:kFavorites_Filename]; if ([FileIO fileExists:kFavorites_Filename]) {NSLog(@"exists"); The "exists" is logged every time unless I un-comment the method to delete the file. How can the file exist if I've deleted the app? I've deleted the simulator directory and uninstalled on the device but both say it still exists. What gives???

    Read the article

  • Running OpenStack Icehouse with ZFS Storage Appliance

    - by Ronen Kofman
    Couple of months ago Oracle announced the support for OpenStack Cinder plugin with ZFS Storage Appliance (aka ZFSSA).  With our recent release of the Icehouse tech preview I thought it is a good opportunity to demonstrate the ZFSSA plugin working with Icehouse. One thing that helps a lot to get started with ZFSSA is that it has a VirtualBox simulator. This simulator allows users to try out the appliance’s features before getting to a real box. Users can test the functionality and design an environment even before they have a real appliance which makes the deployment process much more efficient. With OpenStack this is especially nice because having a simulator on the other end allows us to test the complete set of the Cinder plugin and check the entire integration on a single server or even a laptop. Let’s see how this works Installing and Configuring the Simulator To get started we first need to download the simulator, the simulator is available here, unzip it and it is ready to be imported to VirtualBox. If you do not already have VirtualBox installed you can download it from here according to your platform of choice. To import the simulator go to VirtualBox console File -> Import Appliance , navigate to the location of the simulator and import the virtual machine. When opening the virtual machine you will need to make the following changes: - Network – by default the network is “Host Only” , the user needs to change that to “Bridged” so the VM can connect to the network and be accessible. - Memory (optional) – the VM comes with a default of 2560MB which may be fine but if you have more memory that could not hurt, in my case I decided to give it 8192 - vCPU (optional) – the default the VM comes with 1 vCPU, I decided to change it to two, you are welcome to do so too. And here is how the VM looks like: Start the VM, when the boot process completes we will need to change the root password and the simulator is running and ready to go. Now that the simulator is up and running we can access simulated appliance using the URL https://<IP or DNS name>:215/, the IP is showing on the virtual machine console. At this stage we will need to configure the appliance, in my case I did not change any of the default (in other words pressed ‘commit’ several times) and the simulated appliance was configured and ready to go. We will need to enable REST access otherwise Cinder will not be able to call the appliance we do that in Configuration->Services and at the end of the page there is ‘REST’ button, enable it. If you are a more advanced user you can set additional features in the appliance but for the purpose of this demo this is sufficient. One final step will be to create a pool, go to Configuration -> Storage and add a pool as shown below the pool is named “default”: The simulator is now running, configured and ready for action. Configuring Cinder Back to OpenStack, I have a multi node deployment which we created according to the “Getting Started with Oracle VM, Oracle Linux and OpenStack” guide using Icehouse tech preview release. Now we need to install and configure the ZFSSA Cinder plugin using the README file. In short the steps are as follows: 1. Copy the file from here to the control node and place them at: /usr/lib/python2.6/site-packages/cinder/volume/drivers/zfssa 2. 
Configure the plugin, editing /etc/cinder/cinder.conf # Driver to use for volume creation (string value) #volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver volume_driver=cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver zfssa_host = <HOST IP> zfssa_auth_user = root zfssa_auth_password = <ROOT PASSWORD> zfssa_pool = default zfssa_target_portal = <HOST IP>:3260 zfssa_project = test zfssa_initiator_group = default zfssa_target_interfaces = e1000g0 3. Restart the cinder-volume service: service openstack-cinder-volume restart 4. Look into the log file, this will tell us if everything works well so far. If you see any errors fix them before continuing. 5. Install iscsi-initiator-utils package, this is important since the plugin uses iscsi commands from this package: yum install -y iscsi-initiator-utils The installation and configuration are very simple, we do not need to have a “project” in the ZFSSA but we do need to define a pool. Creating and Using Volumes in OpenStack We are now ready to work, to get started lets create a volume in OpenStack and see it showing up on the simulator: #  cinder create 2 --display-name my-volume-1 +---------------------+--------------------------------------+ |       Property      |                Value                 | +---------------------+--------------------------------------+ |     attachments     |                  []                  | |  availability_zone  |                 nova                 | |       bootable      |                false                 | |      created_at     |      2014-08-12T04:24:37.806752      | | display_description |                 None                 | |     display_name    |             my-volume-1              | |      encrypted      |                False                 | |          id         | df67c447-9a36-4887-a8ff-74178d5d06ee | |       metadata      |                  {}                  | |         size        |                  2                   | |     snapshot_id     |                 None                 | |     source_volid    |                 None                 | |        status       |               creating               | |     volume_type     |                 None                 | +---------------------+--------------------------------------+ In the simulator: Extending the volume to 5G: # cinder extend df67c447-9a36-4887-a8ff-74178d5d06ee 5 In the simulator: Creating templates using Cinder Volumes By default OpenStack supports ephemeral storage where an image is copied into the run area during instance launch and deleted when the instance is terminated. With Cinder we can create persistent storage and launch instances from a Cinder volume. Booting from volume has several advantages, one of the main advantages of booting from volumes is speed. No matter how large the volume is the launch operation is immediate there is no copying of an image to a run areas, an operation which can take a long time when using ephemeral storage (depending on image size). In this deployment we have a Glance image of Oracle Linux 6.5, I would like to make it into a volume which I can boot from. 
When creating a volume from an image we actually “download” the image into the volume and making the volume bootable, this process can take some time depending on the image size, during the download we will see the following status: # cinder create --image-id 487a0731-599a-499e-b0e2-5d9b20201f0f --display-name ol65 2 # cinder list +--------------------------------------+-------------+--------------+------+-------------+ |                  ID                  |    Status   | Display Name | Size | Volume Type | … +--------------------------------------+-------------+--------------+------+------------- | df67c447-9a36-4887-a8ff-74178d5d06ee |  available  | my-volume-1  |  5   |     None    | … | f61702b6-4204-4f10-8bdf-7da792f15c28 | downloading |     ol65     |  2   |     None    | … +--------------------------------------+-------------+--------------+------+-------------+ After the download is complete we will see that the volume status changed to “available” and that the bootable state is “true”. We can use this new volume to boot an instance from or we can use it as a template. Cinder can create a volume from another volume and ZFSSA can replicate volumes instantly in the back end. The result is an efficient template model where users can spawn an instance from a “template” instantly even if the template is very large in size. Let’s try replicating the bootable volume with the Oracle Linux 6.5 on it creating additional 3 bootable volumes: # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-1 # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-2 # cinder create 2 --source-volid f61702b6-4204-4f10-8bdf-7da792f15c28 --display-name ol65-bootable-3 # cinder list +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ |                  ID                  |   Status  |   Display Name  | Size | Volume Type | Bootable | Attached to | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ | 9bfe0deb-b9c7-4d97-8522-1354fc533c26 | available | ol65-bootable-2 |  2   |     None    |   true   |             | | a311a855-6fb8-472d-b091-4d9703ef6b9a | available | ol65-bootable-1 |  2   |     None    |   true   |             | | df67c447-9a36-4887-a8ff-74178d5d06ee | available |   my-volume-1   |  5   |     None    |  false   |             | | e7fbd2eb-e726-452b-9a88-b5eee0736175 | available | ol65-bootable-3 |  2   |     None    |   true   |             | | f61702b6-4204-4f10-8bdf-7da792f15c28 | available |       ol65      |  2   |     None    |   true   |             | +--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+ Note that the creation of those 3 volume was almost immediate, no need to download or copy, ZFSSA takes care of the volume copy for us. Start 3 instances: # nova boot --boot-volume a311a855-6fb8-472d-b091-4d9703ef6b9a --flavor m1.tiny ol65-instance-1 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca # nova boot --boot-volume 9bfe0deb-b9c7-4d97-8522-1354fc533c26 --flavor m1.tiny ol65-instance-2 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca # nova boot --boot-volume e7fbd2eb-e726-452b-9a88-b5eee0736175 --flavor m1.tiny ol65-instance-3 --nic net-id=25b19746-3aea-4236-8193-4c6284e76eca Instantly replicating volumes is a very powerful feature, especially for large templates. 
    The ZFSSA Cinder plugin allows us to take advantage of this feature of the ZFSSA. By offloading some of the operations to the array, OpenStack creates a highly efficient environment where persistent volumes can be created instantly from a template. That's all for now; with this environment you can continue to test the ZFSSA with OpenStack, and when you are ready for the real appliance the operations will look the same. @RonenKofman

    Read the article

  • All commands need privileges

    - by Am1rr3zA
    I tried to install hping3 on my Ubuntu 8.04, but after installation, when I want to run hping3, I get this error: Command 'hping3' is available in '/usr/sbin/hping3' The command could not be located because '/usr/sbin' is not included in the PATH environment variable. This is most likely caused by the lack of administrative priviledges associated with your user account. Also, when I try to run ifconfig I get this: Command 'ifconfig' is available in '/sbin/ifconfig' The command could not be located because '/sbin' is not included in the PATH environment variable. This is most likely caused by the lack of administrative priviledges associated with your user account. First I need to run sudo su and then run the command. Is this normal, or am I missing something? When I run echo $PATH I get: /usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/home/amirreza/simulator/ns-allinone-2.33/bin:/home/amirreza/simulator/ns-allinone-2.33/tcl8.4.18/unix:/home/amirreza/simulator/ns-allinone-2.33/tk8.4.18/unix:/home/amirreza/simulator/ns-allinone-2.33/ns-2.33/:/home/amirreza/simulator/ns-allinone-2.33/nam-1.14/
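
    A hedged sketch of the usual fixes (assuming a bash login shell; the exact dot-file can differ per setup): call the tools by their full path under sudo, or add the sbin directories to the user's PATH.

        # Option 1: call the tool explicitly (hping3 needs root privileges anyway).
        sudo /usr/sbin/hping3 --help
        sudo /sbin/ifconfig

        # Option 2: add the sbin directories to this user's PATH, e.g. in ~/.bashrc.
        export PATH="$PATH:/sbin:/usr/sbin"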

    Read the article

  • What is the difference (if any) between Html.Partial(view, model) and Html.RenderPartial(view, model)?

    - by Stephane
    Other than the type it returns and the fact that you call it differently of course <% Html.RenderPartial(...); %> <%= Html.Partial(...) %> If they are different, why would you call one rather than the other one? The definitions: // Type: System.Web.Mvc.Html.RenderPartialExtensions // Assembly: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 // Assembly location: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 2\Assemblies\System.Web.Mvc.dll using System.Web.Mvc; namespace System.Web.Mvc.Html { public static class RenderPartialExtensions { public static void RenderPartial(this HtmlHelper htmlHelper, string partialViewName); public static void RenderPartial(this HtmlHelper htmlHelper, string partialViewName, ViewDataDictionary viewData); public static void RenderPartial(this HtmlHelper htmlHelper, string partialViewName, object model); public static void RenderPartial(this HtmlHelper htmlHelper, string partialViewName, object model, ViewDataDictionary viewData); } } // Type: System.Web.Mvc.Html.PartialExtensions // Assembly: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 // Assembly location: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 2\Assemblies\System.Web.Mvc.dll using System.Web.Mvc; namespace System.Web.Mvc.Html { public static class PartialExtensions { public static MvcHtmlString Partial(this HtmlHelper htmlHelper, string partialViewName); public static MvcHtmlString Partial(this HtmlHelper htmlHelper, string partialViewName, ViewDataDictionary viewData); public static MvcHtmlString Partial(this HtmlHelper htmlHelper, string partialViewName, object model); public static MvcHtmlString Partial(this HtmlHelper htmlHelper, string partialViewName, object model, ViewDataDictionary viewData); } }
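
    A hedged sketch of how the choice usually plays out (WebForms view syntax, as in the question; the view name and model are illustrative): RenderPartial writes straight into the response stream, which is why it returns void, while Partial returns an MvcHtmlString you can capture, inspect, or pass somewhere else.

        <%-- Writes the partial directly to the output stream; nothing to capture. --%>
        <% Html.RenderPartial("ProductRow", Model.Product); %>

        <%-- Returns an MvcHtmlString, so the rendered markup can be held and reused. --%>
        <% MvcHtmlString row = Html.Partial("ProductRow", Model.Product); %>
        <%= row %>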

    Read the article

  • Difference between "machine hardware" and "hardware platform"

    - by Adil
    My Linux machine reports the "uname -a" output below: [root@tom i386]# uname -a Linux tom 2.6.9-89.ELsmp #1 SMP Mon Apr 20 10:34:33 EDT 2009 i686 i686 i386 GNU/Linux [root@tom i386]# As per the man page of uname, the entries "i686 i686 i386" denote: machine hardware name (i686), processor type (i686), hardware platform (i386). Additional info: [root@tom i386]# cat /proc/cpuinfo <snip> vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Xeon(R) CPU 5148 @ 2.33GHz stepping : 6 cpu MHz : 2328.038 cache size : 4096 KB </snip>

    Read the article

  • Terminology: Difference between software interface, software component, software unit, software module

    - by JamieH
    I see these terms used quite a lot by various authors, but I can't seem to fix upon definitive definitions. From my POV, a software interface is a "type" specifying the way in which a software component may be used by other software components. But what exactly a software component is I'm not entirely sure (and it seems no one else is either). The same goes for software unit and software module, although I suspect that a software unit is a smaller, ahem, unit than a component, and that a software module has something to do with packaging. I hope this is not deemed (and downvoted as) frivolous, as I have serious intent in asking.

    Read the article

  • Objective-C, .m / .mm performance difference?

    - by Sam
    I tend to use the .mm extension by default when creating new classes so that I can use ObjC++ later on if I require it. Is there any disadvantage to doing this? When would you prefer .m? Does .m compile to a faster executable (since C is generally faster than C++)?

    Read the article

  • Difference between WSDL 2.0, WADL & XRD?

    - by beriberikix
    WSDL 2.0: www.w3.org/TR/wsdl20/ WADL www.w3.org/Submission/wadl/ XRD www.oasis-open.org/committees/download.php/35274/xrd-1.0-wd10.html All three can be used as REST API descriptors. What are the differences? I know this is a heated question, but I simply want a comparison, not a flame war :P

    Read the article
