Search Results

Search found 12083 results on 484 pages for 'general purpose'.


  • Clarification of atomic memory access for different OSs

    - by murrekatt
    I'm currently porting a Windows C++ library to MacOS as a hobby project and a learning experience. I stumbled across some code using the Win32 Interlocked* functions, and so I've been trying to read up on the subject in general. Reading related questions here on SO, I understand there are different ways to do these operations depending on the OS: Interlocked* on Windows, OSAtomic* on MacOS, and I also found that compilers have builtin (intrinsic) operations for this. After reading about the gcc builtin atomic memory access functions, I'm left wondering what the difference is between the intrinsics and the OSAtomic* or Interlocked* ones. I mean, can I not choose between OSAtomic* and the gcc builtins if I'm on MacOS using gcc? The same goes if I were on Windows using gcc. I also read that on Windows the Interlocked* functions come in both inline and intrinsic versions. What should I consider when choosing between the intrinsic and inline versions? In general, does each OS offer multiple options for this, or is it again "it depends"? If so, what does it depend on? Thanks!
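    To make the comparison concrete, here is a sketch of the same atomic increment done with each family of primitives (illustrative only; these are the commonly documented calls):

        #include <stdint.h>

        #if defined(_WIN32)
        #include <windows.h>
        #elif defined(__APPLE__)
        #include <libkern/OSAtomic.h>
        #endif

        volatile int32_t counter = 0;

        void increment_counter()
        {
        #if defined(_WIN32)
            InterlockedIncrement((volatile LONG*)&counter);  // Win32 Interlocked*
        #elif defined(__APPLE__)
            OSAtomicIncrement32Barrier(&counter);            // OSAtomic*, barrier variant
        #else
            __sync_add_and_fetch(&counter, 1);               // gcc builtin
        #endif
        }

    On MacOS with gcc, both the OSAtomic* calls and the gcc builtins compile, which is exactly the choice the question asks about.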

    Read the article

  • Are SQL Reporting Services Report Parameters deprecated in VS.NET 2010?

    - by Jason Kealey
    We use SQL Server Reporting Services inside an ASP.NET web application. (We have an *.rdlc which is presented to the ReportViewer web control in our page.) Our ASPX page wires up a few report parameters in code:

        var parameters = new List<ReportParameter>();
        parameters.Add(new ReportParameter("StoreAddress", InvoiceStoreAddress));
        parameters.Add(new ReportParameter("LogoURL", InvoiceLogoURL));
        parameters.Add(new ReportParameter("StoreName", InvoiceStoreName));
        ReportViewer1.LocalReport.SetParameters(parameters);

    These are just general parameters that are passed to the report rather than hooking it up to a data source. Recently, we upgraded to VS.NET 2010. We upgraded the *.rdlc to the newest version and also upgraded the ReportViewer control used by ASP.NET. Everything works as it did before. However, I now want to add a new report parameter to my *.rdlc. I used to right-click on the top left corner and click "Report Parameters" to add it. With the new VS.NET, I cannot find this option anywhere - it is not even in the report properties. Where did it go? Are they deprecating this feature? How should I be passing general parameters now?

    Read the article

  • My list item's child label elements disappear in IE on accordion menu opening

    - by Scott B
    I've got an app that works pretty flawlessly in Chrome and FF; however, when I view it in IE, all is well until I click a header element to activate it (jQuery accordion). What happens then is a brief flash where the content is there, then suddenly the entire left column disappears. This column is generated by a floated label element with a class of ".left", as seen below:

        <ul class="menu collapsible">
          <li class='expand sectionTitle'><a href='#'>General Settings</a>
            <ul class='acitem'>
              <li class="section">
                <label class="left">This item is floated left with a defined width
                of 190px via CSS. This is the item that disappears after a brief
                display.</label>
                <input class="input" value="input element here" />
                <label class="description">This element has margin-left: 212px set
                via CSS in order to be positioned to the right of the label
                element, as if in an adjacent table cell. When I add a max-width
                property to this element, it disappears in IE too!</label>
              </li>
            </ul>
          </li>
        </ul>

    As you can see from the comments in the markup (for the two label elements), the description label disappears once I set a max-width on it (I don't have a max-width on the left label element, but it disappears nonetheless). The initial view of this UL menu is fine - note the expand class declaration, which makes this part of the accordion open at startup. It's not until I click "General Settings" to toggle it closed, then back open, that the .left elements disappear (and only in IE).
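    For reference, the CSS described in those markup comments presumably looks something like this (a reconstruction from the description, not the actual stylesheet):

        .left        { float: left; width: 190px; }
        .description { margin-left: 212px; /* adding max-width here triggers the IE bug too */ }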

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a java project that allows users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't need to load ALL of these objects at once and keep them in memory, I'm looking for an efficient way to load/unload data from the files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about what happens after loading a subset of the data into the collection and presenting it on the GUI: what's the best way to reload previously viewed data? Re-run the parser and repopulate the collection and GUI? Or find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Say I filter on ID, so my new subset contains data from two previously analyzed subsets - this would be no problem if I kept a master copy of the whole data set in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that lets me keep the collection in memory. This is just general talking; the question of which collection to use is a separate and complex thing. What's the general recommendation for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
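    One minimal sketch of the index-then-load-on-demand idea (Record, idOf and parseLine are hypothetical placeholders for the real parsing logic):

        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.HashMap;
        import java.util.Map;

        class LazyRecordStore {
            private final File file;
            private final Map<String, Long> offsetsById = new HashMap<String, Long>();

            // One cheap pass to index ID -> file offset; no records kept in memory.
            LazyRecordStore(File file) throws IOException {
                this.file = file;
                RandomAccessFile raf = new RandomAccessFile(file, "r");
                try {
                    long pos = raf.getFilePointer();
                    String line;
                    while ((line = raf.readLine()) != null) {
                        offsetsById.put(idOf(line), pos);
                        pos = raf.getFilePointer();
                    }
                } finally {
                    raf.close();
                }
            }

            // Parse a single record only when the GUI asks for it.
            Record load(String id) throws IOException {
                RandomAccessFile raf = new RandomAccessFile(file, "r");
                try {
                    raf.seek(offsetsById.get(id));
                    return parseLine(raf.readLine());
                } finally {
                    raf.close();
                }
            }

            private String idOf(String line) { return line.split(",")[0]; }    // placeholder
            private Record parseLine(String line) { return new Record(line); } // placeholder
        }

        class Record {
            final String raw;
            Record(String raw) { this.raw = raw; }
        }

    Filtering can then run against the small in-memory index, and only matching records are ever materialized.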

    Read the article

  • Looking for Go equivalent of scanf.

    - by Stephen Hsu
    I'm looking for the Go equivalent of scanf(). I tried the following code:

        package main

        import (
            "scanner"
            "os"
            "fmt"
        )

        func main() {
            var s scanner.Scanner
            s.Init(os.Stdin)
            s.Mode = scanner.ScanInts
            tok := s.Scan()
            for tok != scanner.EOF {
                fmt.Printf("%d ", tok)
                tok = s.Scan()
            }
            fmt.Println()
        }

    I run it with input from a text file containing a line of integers, but it always outputs -3 -3 ... And how would I scan a line composed of a string and some integers - by changing the mode whenever I encounter a new data type? The package documentation says: "Package scanner: A general-purpose scanner for UTF-8 encoded text." But it seems that the scanner is not for general use. Updated code:

        func main() {
            n := scanf()
            fmt.Println(n)
            fmt.Println(len(n))
        }

        func scanf() []int {
            nums := new(vector.IntVector)
            reader := bufio.NewReader(os.Stdin)
            str, err := reader.ReadString('\n')
            for err != os.EOF {
                fields := strings.Fields(str)
                for _, f := range fields {
                    i, _ := strconv.Atoi(f)
                    nums.Push(i)
                }
                str, err = reader.ReadString('\n')
            }
            r := make([]int, nums.Len())
            for i := 0; i < nums.Len(); i++ {
                r[i] = nums.At(i)
            }
            return r
        }
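    As an aside, the fmt package's Scan family is the closest analogue of scanf, and it handles mixed types directly; a minimal sketch (reflecting the current library, worth checking against your Go release):

        package main

        import "fmt"

        func main() {
            var name string
            var x, y int
            // Scans whitespace-separated values into the pointed-to variables,
            // choosing the parse by each variable's type - much like scanf.
            n, err := fmt.Scan(&name, &x, &y)
            fmt.Println(n, err, name, x, y)
        }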

    Read the article

  • Should checkins be small steps or complete features?

    - by Caspin
    Two of version control's uses seem to dictate different checkin styles. Distribution centric: changesets generally reflect a complete feature; these checkins tend to be larger, and the style is more user/maintainer friendly. Rollback centric: changesets are individual small steps, so the history can function like an incredibly powerful undo; these checkins tend to be smaller, and the style is more developer friendly. I like to use my version control as a really powerful undo while I'm banging away at some stubborn code/bug. That way I'm not afraid to make drastic changes just to try out a possible solution. However, this seems to give me a fragmented file history with lots of "well, that didn't work" checkins. If instead I make my changesets reflect complete features, I lose the use of my version control software for experimentation - but it is much easier for users/maintainers to figure out how the code is evolving, which has great advantages for code reviews, managing multiple branches, etc. So what's a developer to do: check in small steps or complete features?

    Read the article

  • Multiple Solution Layout for ASP.NET Web Portal?

    - by Jared S
    At work, we've developed a custom ASP.NET web portal (very similar to iGoogle). We have "Apps" (self-contained, large web forms) and "Modules" (similar to Google Gadgets). Currently, we use a single-solution model. Right now, we have 3 core projects, 60 application projects, and 80 module projects. To reduce copying and pasting between projects, we're going to factor common functionality (data access, business logic) out into separate projects. I'd also like to introduce unit tests, which is going to increase the number of projects even more. We've already reached the point where Visual Studio is choking on the number of projects. We generally load only the 3 core projects plus whatever app's/module's project we're working on. Would a different solution structure help us out? Our number of projects is only going to increase. In general, an app or module only references the 3 core projects. Soon, apps/modules may start referencing the Data Access/Business Logic projects, but in general apps and modules do not reference one another. So, to recap: what is the best practice for solution structure when there are MANY projects that use a small number of core projects?

    Read the article

  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients to pick a unique identifier (ci_name). The code below is a generic version that gets expanded to the right SQL depending on the vendor; hopefully it makes sense.

        #include "sql.h"

        create table client_identification
        (
            ci_id   TYPE_ID IDENTITY,
            ci_name varchar(64) not null,
            constraint ci_pk primary key (ci_name)
        );
        go

        CREATE_SEQUENCE(ci_id)

    There will be simple stored procedures for adding, retrieving, and deleting these user records, used by several admins. This will not happen very frequently, but there is still a possibility that something will be added or deleted after the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the same user name cannot be created twice - the primary key constraint will object. I want to be able to detect this particular case and display something like "you snooze - you lose." :) I would like to leverage the pk constraint instead of doing extra SQL gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catching a general ODBC exception and parsing out the text Sybase, Oracle, and MSSQL would give me back. Oracle is a little different: we actually prepend these variables to the Oracle version of stored procedures because they are not available otherwise:

        Vret_val out number,
        Vtran_count in out number,
        Vmessage_count in out number,

    Thanks. General helpful tips and comments are welcome, except for naming-convention ones (I don't have a choice here, plus I've mangled the actual names a bit).
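    One hedged possibility: the ODBC drivers for all three vendors map a primary-key/unique violation to the ANSI SQLSTATE class 23000 (integrity constraint violation), so the catch can key on SQLState rather than message text. A C# sketch, worth verifying against each driver before relying on it:

        using System;
        using System.Data.Odbc;

        static class ClientNames
        {
            static void AddClient(OdbcConnection conn, string name)
            {
                using (OdbcCommand cmd = new OdbcCommand(
                    "insert into client_identification (ci_name) values (?)", conn))
                {
                    cmd.Parameters.AddWithValue("@p1", name);
                    try
                    {
                        cmd.ExecuteNonQuery();
                    }
                    catch (OdbcException ex)
                    {
                        // 23000 = integrity constraint violation: duplicate ci_name.
                        if (ex.Errors.Count > 0 && ex.Errors[0].SQLState == "23000")
                            Console.WriteLine("you snooze - you lose");
                        else
                            throw;
                    }
                }
            }
        }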

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros. Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate? (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C eventually too, because I want to get into driver writing. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • Basic HTML Internal Link not working in IE8

    - by Eamonn
    This one has me scratching my head. I have a page with several internal links bringing the user down the page to further content. Beneath various sections there are further internal links bringing the user back to the top. This works fine in all browsers apart from IE8 (actual IE8 - not IE9 in IE8 mode), where the links work going down, but not coming back up! Examples:

        <a href="page.php?v=35&amp;u=admin-videos#a">A &ndash; General</a>
        .... //travelling DOWN the page
        <h3><strong>A &ndash; General<a name="a"></a></strong></h3>

        <a name="top"></a>
        .... //travelling back up
        <a href="page.php?v=35&amp;u=admin-videos#top">Back to top.</a>

    I've tried filling the 'top' anchor with &nbsp; but that hasn't changed anything. Any ideas? Actual use case: http://databizsolutions.ie/contents/page.php?v=35&u=admin-videos

    Read the article

  • When to use reinterpret_cast?

    - by HeretoLearn
    I am a little confused about the applicability of reinterpret_cast vs static_cast. From what I have read, the general rule is to use static_cast when the types can be interpreted at compile time - hence the word static. This is also the cast the C++ compiler uses internally for implicit casts. reinterpret_cast is applicable in two scenarios: converting integer types to pointer types and vice versa, and converting one pointer type to another. The general idea I get is that this is unportable and should be avoided. Where I am a little confused is one usage I need: I am calling C++ from C, and the C code needs to hold on to the C++ object, so basically it holds a void*. Which cast should be used to convert between the void* and the class type? I have seen usage of both static_cast and reinterpret_cast. From what I have been reading, it appears static_cast is better, as the cast can happen at compile time - though it is also said to use reinterpret_cast to convert from one pointer type to another?
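    A sketch of the void*-through-C round trip in question; static_cast suffices in both directions, because conversion to and from void* is a standard conversion (class and function names here are illustrative):

        #include <cstdio>

        class Widget {
        public:
            void poke() { std::puts("poked"); }
        };

        // Handing a C++ object to C code through a void* and getting it back:
        extern "C" void* widget_create()         { return static_cast<void*>(new Widget); }
        extern "C" void  widget_poke(void* h)    { static_cast<Widget*>(h)->poke(); }
        extern "C" void  widget_destroy(void* h) { delete static_cast<Widget*>(h); }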

    Read the article

  • Communicating with a running python daemon

    - by hanksims
    I wrote a small Python application that runs as a daemon. It utilizes threading and queues. I'm looking for general approaches to altering this application so that I can communicate with it while it's running; mostly I'd like to be able to monitor its health. In a nutshell, I'd like to be able to do something like this:

        python application.py start             # launches the daemon

    Later, I'd like to be able to come along and do something like:

        python application.py check_queue_size  # returns info from the daemonized process

    To be clear, I don't have any problem implementing the Django-inspired syntax. What I don't have any idea how to do is send signals to the daemonized process (start), or how to write the daemon to handle and respond to such signals. Like I said above, I'm looking for general approaches. The only one I can see right now is telling the daemon to constantly log everything that might be needed to a file, but I hope there's a less messy way to go about it. UPDATE: Wow, a lot of great answers. Thanks so much. I think I'll look at both Pyro and the web.py/Werkzeug approaches, since Twisted is a little more than I want to bite off at this point. The next conceptual challenge, I suppose, is how to go about talking to my worker threads without hanging them up. Thanks again.
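    For the health-check part specifically, plain Unix signals are one lightweight sketch (the handler, status-file path, and queue here are illustrative):

        import os
        import signal

        QUEUE = []  # stand-in for the daemon's real queue

        def report_queue_size(signum, frame):
            # Write a status file that the control script can read back.
            with open("/tmp/application.status", "w") as f:
                f.write("queue size: %d\n" % len(QUEUE))

        signal.signal(signal.SIGUSR1, report_queue_size)

        # ... daemon main loop runs here. The control script then does:
        #     os.kill(daemon_pid, signal.SIGUSR1)
        # and reads /tmp/application.status for the answer.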

    Read the article

  • Overhead of calling tiny functions from a tight inner loop? [C++]

    - by John
    Say you see a loop like this one:

        for(int i=0; i<thing.getParent().getObjectModel().getElements(SOME_TYPE).count(); ++i)
        {
            thing.getData().insert(
                thing.getData().Count(),
                thing.getParent().getObjectModel().getElements(SOME_TYPE)[i].getName()
            );
        }

    If this was Java I'd probably not think twice. But in performance-critical sections of C++, it makes me want to tinker with it - however I don't know if the compiler is smart enough to make that futile. This is a made-up example, but all it's doing is inserting strings into a container. Please don't assume any of these are STL types; think in general terms about the following: Is a messy condition in the for loop going to get evaluated each time, or only once? If those get methods simply return references to member variables on the objects, will they be inlined away? Would you expect custom [] operators to get optimized at all? In other words, is it worth the time (in performance only, not readability) to convert it to something like:

        ElementContainer &source = thing.getParent().getObjectModel().getElements(SOME_TYPE);
        int num = source.count();
        Store &destination = thing.getData();
        for(int i=0; i<num; ++i)
        {
            destination.insert(destination.Count(), source[i].getName());
        }

    Remember, this is a tight loop, called millions of times a second. What I wonder is whether all this will shave a couple of cycles per loop or something more substantial. Yes, I know the quote about "premature optimisation", and I know that profiling is important. But this is a more general question about modern compilers, Visual Studio in particular.

    Read the article

  • Rails 3 MySQL 2 reports an error in what looks to be valid SQL syntax

    - by John Judd
    I am trying to use the following bit of code to help in seeding my database. I need to add data continually over development and do not want to have to completely reseed every time I add something new to the seeds.rb file. So I added the following function to insert the data only if it doesn't already exist:

        def AddSetting(group, name, value, desc)
          Admin::Setting.create({group: group, name: name, value: value, description: desc}) unless Admin::Setting.find_by_sql("SELECT * FROM admin_settings WHERE group = '#{group}' AND name = '#{name}';").exists?
        end

        AddSetting('google', 'analytics_id', '', 'The ID of your Google Analytics account.')
        AddSetting('general', 'page_title', '', '')
        AddSetting('general', 'tag_line', '', '')

    This function is included in the db/seeds.rb file. Is this the right way to do this? However, I am getting the following error when I try to run it through rake:

        rake aborted!
        Mysql2::Error: You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use near
        'group = 'google' AND name = 'analytics_id'' at line 1:
        SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id';
        Tasks: TOP => db:seed
        (See full trace by running task with --trace)
        Process finished with exit code 1

    What is confusing me is that I am generating correct SQL as far as I can tell. In fact, my code generates the SQL and passes it to find_by_sql; Rails itself can't be changing the SQL - or is it?

        SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id';

    I've written a lot of SQL over the years, and I've looked through similar questions here. Maybe I've missed something, but I cannot see it.

    Read the article

  • Is it possible to cancel function override in parent class and use function from top level parent

    - by Anatoliy Gusarov
    class TopParent
        {
            protected function foo()
            {
                $this->bar();
            }

            private function bar()
            {
                echo 'Bar';
            }
        }

        class MidParent extends TopParent
        {
            protected function foo()
            {
                $this->midMethod();
                parent::foo();
            }

            public function midMethod()
            {
                echo 'Mid';
            }

            public function generalMethod()
            {
                echo 'General';
            }
        }

    Now the question is, if I have a class that extends MidParent because I need its methods:

        class Target extends MidParent
        {
            // How to override this method to return TopParent::foo(); ?
            protected function foo()
            {
            }
        }

    So I need to do this:

        $mid = new MidParent();
        $mid->foo();              // MidBar

        $target = new Target();
        $target->generalMethod(); // General
        $target->foo();           // Bar

    UPDATE: TopParent is an ActiveRecord class, and MidParent is my model class. I want to use the model in a yii ConsoleApplication. The model uses the 'user' module, and the console app doesn't support that module, so I need to override the afterFind method, where the user module is called. So the Target class overrides some methods from the model that rely on modules the console application doesn't support.
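    For what it's worth, a sketch of the approach usually suggested for this - naming the grandparent class explicitly in the override (assumes foo() stays protected all the way up; not tested against Yii):

        class Target extends MidParent
        {
            protected function foo()
            {
                // Skips MidParent's override and runs TopParent::foo(),
                // with $this preserved, so it echoes 'Bar'.
                TopParent::foo();
            }
        }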

    Read the article

  • Learning C++ from Java, trying to make a linked list.

    - by kyeana
    I just started learning C++ (coming from Java) and am having some serious problems doing anything :P Currently I am attempting to make a linked list, but I must be doing something stupid, because I keep getting "void value not ignored as it ought to be" compile errors (I have marked where they are thrown below). If anyone could help me with what I'm doing wrong, I would be very grateful :) Also, I am not used to having the choice of passing by reference, address, or value, or to memory management in general (currently I have all my nodes and the data declared on the heap). If anyone has any general advice, I wouldn't complain about that either :P Key code from LinkedListNode.cpp:

        LinkedListNode::LinkedListNode()
        {
            // set next and prev to null
            pData = 0;  // data needs to be a pointer so we can set it to null
                        // for the tail and head.
            pNext = 0;
            pPrev = 0;
        }

        /*
         * Sets the 'next' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetNext(LinkedListNode& _next)
        {
            pNext = &_next;
        }

        /*
         * Sets the 'prev' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetPrev(LinkedListNode& _prev)
        {
            pPrev = &_prev;
        }

        // rest of class

    Key code from LinkedList.cpp:

        #include "LinkedList.h"

        LinkedList::LinkedList()
        {
            // Set head and tail of linked list.
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();

            /*
             * THIS IS WHERE THE ERRORS ARE.
             */
            *pHead->SetNext(*pTail);
            *pTail->SetPrev(*pHead);
        }

        // rest of class
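    (The compiler message points at the leading '*': SetNext and SetPrev return void, so there is nothing to dereference. A corrected sketch of that constructor:)

        LinkedList::LinkedList()
        {
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();
            pHead->SetNext(*pTail);  // no leading '*': SetNext returns void,
            pTail->SetPrev(*pHead);  // so its result cannot be dereferenced
        }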

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service. It contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents
      - Introduction
      - Create IRM WebLogic Domain
      - Starting the Admin Server and initial configuration

    Introduction

    In the previous article the database was prepared, the WebLogic Application Server was installed, and the files required for an IRM server were installed. But we don't actually have a configured system yet. We now need to create a WebLogic domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server; that is quite a big topic, and a separate article has been dedicated to it. In these articles I use the hostname irm.company.internal to reference the IRM server, and later the hostname irm.company.com in reference to the public-facing service.

    Create IRM WebLogic Domain

    The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and run the domain configuration wizard.

        [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin
        [oracle@irm bin]$ ./config.sh

    The first thing the wizard asks is whether you wish to create a new domain or extend an existing one. This guide is creating a standalone system, so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF", as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others at their defaults. The domain will initially have a single user, by default called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog - again, for eval or dev purposes - leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database, and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers, and where the deployments reside. In this guide I am leaving all of these at their defaults, so do not check any of the boxes. However, I will detail later on this blog how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. For now, leave it all alone and just click next. Now we are ready to install.
    Note that from this dialog you can scroll the left window and see that two servers will be created from the defaults: the AdminServer, where you modify settings for the WebLogic Server and which also hosts the Oracle Enterprise Manager for IRM (allowing you to monitor IRM service performance and make service-related settings, which we shortly do below), and IRM_server1, which hosts the actual IRM services themselves. So go right ahead and hit create; the process is pretty quick, usually under 10 minutes. When the domain creation ends, it gives you the URL to the admin server. It's worth noting this down; the URL is usually:

        http://irm.company.internal:7001

    Starting the Admin Server and initial configuration

    The first thing to do is start the WebLogic Admin server and review the initial IRM server settings. In this guide we run the Admin server and IRM server in console windows; in another article I will discuss running these as background services. So for now, start a console and run the Admin server as follows.

        cd /oracle/middleware/user_projects/domains/irm_domain/
        ./startWebLogic.sh

    Wait for the server to start; you are looking for the following line to be reported in the console window.

        <BEA-00360><Server started in RUNNING mode>

    The first step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running, you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server-wide configuration, but finding the pages with these settings can be a bit of a challenge. After logging in, on the left you'll see a tree containing elements of the Enterprise Manager farm Farm_irm_domain. Open up Content Management, then Information Rights Management, and finally select the IRM node. On the right, select the IRM menu item, navigate to the Administration section, and you'll see four options; for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths, or you need to comply with a regulation or government mandate, AES192 is a good start; you can change this later without worry. The most important setting we need to make here is the Server URL. In this blog article I go over why this URL is so important: basically, every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, use http as the protocol instead of https - but I would recommend using SSL, and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content on 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation, or even production environments; my experience is that 3 is a better number. The next step is to create the keystore for the IRM server.
    When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys, which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated, and the following shows my commands and the resulting output. (I have greyed out the responses from the commands in the original so you can see the input a little easier.)

        [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
        [oracle@irmsrv bin]$ ./setWLSEnv.sh
        CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
        PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
        Your environment has been set.
        [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
        [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
        Enter keystore password:
        Re-enter new password:
        What is your first and last name? [Unknown]: Simon Thorpe
        What is the name of your organizational unit? [Unknown]: Oracle
        What is the name of your organization? [Unknown]: Oracle
        What is the name of your City or Locality? [Unknown]: San Francisco
        What is the name of your State or Province? [Unknown]: CA
        What is the two-letter country code for this unit? [Unknown]: US
        Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes
        Enter key password for <oracle.irm.wrap> (RETURN if same as keystore password):

    At this point we have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder is backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys are wrapped with these keys; lose them, and you can't get access to any secured content - pretty important. Now that we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Back on the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to:

        /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks

    and hit Apply. The final step with regard to the key store is to tell the server the password for the Java Key Store so that it can be opened and the keys accessed. Once more, fire up a console window and run these commands (again, the clutter is greyed out in the original to make the commands easier to see). You will see dummy passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used.
    [oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
        [oracle@irmsrv bin]$ ./wlst.sh
        ... lots of settings fly by ...
        Welcome to WebLogic Server Administration Scripting Shell
        Type help() for help on available commands

        wls:/offline> connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
        Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
        Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
        Warning: An insecure protocol was used to connect to the server. To ensure
        on-the-wire security, the SSL port or Admin port should be used instead.
        wls:/irm_domain/serverConfig> createCred("IRM","keystore:irm.jks","dummy","password")
        Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root.
        For more help, use help(domainRuntime)
        wls:/irm_domain/serverConfig> createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
        Already in Domain Runtime Tree
        wls:/irm_domain/serverConfig>

    At last we are ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1, and we need to start it; use the following commands in a new console window.

        cd /oracle/middleware/user_projects/domains/irm_domain/bin/
        ./startManagedWebLogic.sh IRM_server1

    This starts up the server in the console. Unlike the Admin server, you need to provide the username and password for the service to start; enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file - read more about this in the WebLogic Server documentation. Once running, wait until you see the line:

        <Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode>

    At this point we can login to the Oracle IRM Management Website at the URL:

        http://irm.company.internal:1600/irm_rights/

    The server is just configured for HTTP at the moment - no SSL involved; I just want to ensure we can get a working system up and running. You should now see a login like the image on the right, and you can login using your weblogic username and password. The next article in this guide covers adding SSL, and then testing your server by actually adding a few users, sealing some content, and opening that content as a user.

    Read the article

  • Sorting Algorithms

    - by MarkPearl
    General

    Every time I go back to university I find myself wading through sorting algorithms and their implementation in C++. Up to now I haven't really appreciated their true value. However, as I discovered this last week with Dictionaries in C#, having a knowledge of some basic programming principles can greatly improve the performance of a system and make one think twice about how to tackle a problem. I'm going to briefly cover the following in this post: Selection Sort, Insertion Sort, Shellsort, Quicksort, Mergesort, and Heapsort (not complete).

    Selection Sort

    Array-based selection sort is a simple approach to sorting an unsorted array. Simply put, it repeats two basic steps to achieve a sorted collection. It starts with a collection of data and repeatedly parses it, each time sorting out one element and reducing the size of the next iteration of parsed data by one. The first iteration goes something like this: go through the entire array of data, find the lowest value, and place that value at the front of the array. The second iteration goes like this: go through the array from position two (position one already holds the smallest value), find the next lowest value, and place it at the second position in the array. This process is repeated until the entire array has been sorted.

    A positive about selection sort is that it does not make many item movements; in fact, in a worst-case scenario every item is only moved once. Selection sort is, however, a comparison-intensive sort. If you had 10 items in a collection, just to parse the collection you would make 9+8+7+6+5+4+3+2+1 = 45 comparisons, regardless of how sorted the collection was to start with. If you think about it, applying selection sort to an already-sorted collection still performs relatively the same number of comparisons as if it was not sorted at all. Many of the following algorithms try to reduce the number of comparisons when the list is already sorted, leaving one with a best-case and a worst-case scenario for comparisons. Likewise, different approaches have different levels of item movement. Depending on which is more expensive - a comparison or an item move - one may give priority to one approach over another.

    Insertion Sort

    Insertion sort tries to reduce the number of key comparisons it performs compared to selection sort by not "doing anything" when things are already sorted. Assume you had a collection of numbers in the following order:

        10 18 25 30 23 17 45 35

    There are 8 elements in the list. If we start at the front of the list, 10, 18, 25 & 30 are already sorted. Element 5 (23), however, is smaller than element 4 (30) and so needs to be repositioned. We do this by copying the value at element 5 to a temporary holder and then shifting the elements before it up one. So:

        Element 5 is copied to a temporary holder:       10 18 25 30 23 17 45 35 - T 23
        Element 4 shifts to element 5:                   10 18 25 30 30 17 45 35 - T 23
        Element 3 shifts to element 4:                   10 18 25 25 30 17 45 35 - T 23
        Element 2 (18) is smaller than the holder, so
        the holder's value goes into element 3:          10 18 23 25 30 17 45 35 - T 23

    We now have a sorted list up to element 6, and so we would repeat the same process by moving element 6 to a temporary value and then shifting everything up by one from element 2 to element 5.
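    As a rough C++ sketch of the array-based insertion sort just described (my minimal version for illustration, not the author's code):

        #include <vector>

        // Array-based insertion sort: shift larger elements up one slot,
        // then drop the held value into the gap.
        void insertionSort(std::vector<int>& a)
        {
            for (std::size_t i = 1; i < a.size(); ++i) {
                int held = a[i];            // the "temporary holder"
                std::size_t j = i;
                while (j > 0 && a[j - 1] > held) {
                    a[j] = a[j - 1];        // shift up
                    --j;
                }
                a[j] = held;                // insert in sorted position
            }
        }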
    As you can see, one major setback of this technique is shifting values up by one; this is because so far we have treated the collection as an array. If the collection were a linked list, we would not need to shift values up, but could merely remove the link from the unsorted value and "reinsert" it at a sorted position, reducing the number of transactions performed on the collection. So: insertion sort seems to perform better than selection sort, but its implementation is slightly more complicated. This is typical of most sorting algorithms - generally, greater performance leads to greater complexity. Also, insertion sort performs better if a collection of data is already sorted: handed a sorted collection of size n, it needs only n comparisons to verify that it is sorted. It's important to note that array-based insertion sort performs a number of item moves - every time an item is "out of place", several items before it get shifted up.

    Shellsort - Diminishing Increment Sort

    So far we have covered Selection Sort & Insertion Sort. Selection sort makes many comparisons, and insertion sort (array-based) has the potential of making many item movements. Shellsort is an approach that takes the normal insertion sort and tries to reduce the number of item movements. In Shellsort, elements in a collection are viewed as sub-collections of a particular size. Each sub-collection is sorted so that the elements that are far apart move closer to their final position. Suppose we had a collection of 15 elements:

        10 20 15 45 36 48 7 60 18 50 2 19 43 30 55

    First we may view the collection as 7 sub-collections and sort each sublist, let's say at intervals of 7:

        10 60 55 - 20 18 - 15 50 - 45 2 - 36 19 - 48 43 - 7 30
        10 55 60 - 18 20 - 15 50 - 2 45 - 19 36 - 43 48 - 7 30   (sorted)

    We then sort each sublist at a smaller interval, let's say 4:

        10 55 60 18 - 20 15 50 2 - 45 19 36 43 - 48 7 30
        10 18 55 60 - 2 15 20 50 - 19 36 43 45 - 7 30 48   (sorted)

    We then sort elements at a distance of 1 (i.e. we apply a normal insertion sort):

        10 18 55 60 2 15 20 50 19 36 43 45 7 30 48
        2 7 10 15 18 19 20 30 36 43 45 48 50 55 60   (sorted)

    The important thing with Shellsort is deciding on the increment sequence of each sub-collection. From what I can tell, there isn't any definitive method, and depending on the order of your elements, different increment sequences may perform better than others. There are, however, certain increment sequences you may want to avoid: an even-based increment sequence (e.g. 2 4 8 16 32 ...) should typically be avoided, because it does not allow even-indexed elements to be compared with odd-indexed elements until the final sort phase - which in a way negates many of the benefits of using sub-collections. The performance of Shellsort in terms of comparisons and item movements is hard to determine, but it is considered considerably better than the normal insertion sort.

    Quicksort

    Quicksort uses a divide-and-conquer approach to sort a collection of items. The collection is divided into two sub-collections, and the two sub-collections are sorted and combined into one list in such a way that the combined list is sorted. The algorithm in general pseudocode:

        1. Divide the collection into two sub-collections
        2. Quicksort the lower sub-collection
        3. Quicksort the upper sub-collection
        4. Combine the lower & upper sub-collections

    As hinted at above, quicksort uses recursion in its implementation.
    The real trick with quicksort is to get the lower and upper sub-collections to be of equal size. The size of a sub-collection is determined by the value of the pivot. Once a pivot is determined, one partitions into the two sub-collections and then repeats the process on each sub-collection until the base case is reached. With quicksort, the work is done when dividing into the lower & upper sub-collections; the actual combining of the lower & upper sub-collections at the end is relatively simple, since every element in the lower sub-collection is smaller than the smallest element in the upper sub-collection.

    Mergesort

    With quicksort, the average-case complexity was O(n log2 n), but the worst-case complexity was still O(n^2). Mergesort improves on quicksort by always having a complexity of O(n log2 n), best case or worst. So how does it do this? Mergesort makes use of the divide-and-conquer approach to partition a collection into two sub-collections. It then sorts each sub-collection and combines the sorted sub-collections into one sorted collection. The general algorithm for mergesort:

        1. Divide the collection into two sub-collections
        2. Mergesort the first sub-collection
        3. Mergesort the second sub-collection
        4. Merge the first sub-collection and the second sub-collection

    As you can see, it still looks pretty much like quicksort - so let's see where it differs. Firstly, mergesort differs from quicksort in how it partitions the sub-collections. Instead of using a pivot, mergesort partitions each sub-collection based on size, so that the first and second sub-collections are of relatively the same size. This dividing keeps getting repeated until the sub-collections are the size of a single element - and if a sub-collection is one element in size, it is already sorted! So the trick is how we put all these sub-collections back together so that they maintain their sorted order. Sorted sub-collections are merged into a sorted collection by comparing the front elements of the sub-collections and adjusting the sorted collection. Let's look at a few examples.

    Assume 2 sub-collections with 1 element each: 10 & 20. Compare the first element of the first sub-collection with the first element of the second. Take the smaller of the two and place it as the first element in the sorted collection. Here 10 is smaller than 20, so 10 is taken from sub-collection 1, leaving that sub-collection empty, which means by default the next smallest element is in sub-collection 2 (20). The sorted collection is 10 20.

    Now assume 2 sub-collections with 2 elements each: 10 20 & 15 19. Again we would:

        Compare 10 with 15 - 10 wins, sorted collection (10),       leaving 20 & 15 19
        Compare 20 with 15 - 15 wins, sorted collection (10 15),    leaving 20 & 19
        Compare 20 with 19 - 19 wins, sorted collection (10 15 19), leaving 20 & _
        20 wins by default, so the sorted collection is 10 15 19 20

    Make sense?

    Heapsort (still needs to be completed)

    So by now I am tired of sorting algorithms and trying to remember why they were so important. I think every year I go through this stuff I wonder to myself why we are made to learn about selection sort and insertion sort if they are so bad - why didn't we just skip to Mergesort & Quicksort?
    I guess the only explanation I have is that sometimes you learn things so that you can implement them in future, and other times you learn things so that you know they aren't the best way of implementing things and that you don't need to implement them in future. Anyhow, luckily this is going to be the last of my sorts for today. The first step in heapsort is to convert a collection of data into a heap; after the data is converted into a heap, sorting begins. So what is the definition of a heap? If we have to convert a collection of data into a heap, how do we know when it is a heap and when it is not? The definition of a heap is as follows: a heap is a list in which each element contains a key, such that the key in the element at position k in the list (counting positions from 0) is at least as large as the key in the element at position 2k + 1 (if it exists) and 2k + 2 (if it exists).

    Does that make sense? At first glance I'm thinking what the heck??? But after re-reading my notes I see that we are doing something different: up to now we have looked at data as an array or sequential collection that we need to sort, while a heap represents data in a slightly different way. Although the data is stored in a sequential collection, for a sequential collection of data to be a valid heap it must be "semi-sorted". Let me try to explain a bit further with an example.

    Example 1 of potential heap data. Assume we had a collection of numbers as follows (positions in brackets):

        1[0] 2[1] 3[2] 4[3] 5[4] 6[5]

    For this to be a valid heap, the element with value 1 at position [0] needs to be greater than or equal to the elements at position [1] (2k + 1) and position [2] (2k + 2). It isn't, so in the above example the collection of numbers is not a valid heap.

    Example 2 of potential heap data. Let's look at another collection of numbers:

        6[0] 5[1] 4[2] 3[3] 2[4] 1[5]

    Is this a valid heap? Well, the element with value 6 at position [0] must be greater than or equal to the elements at positions [1] and [2]. Is 6 > 5 and 6 > 4? Yes, it is. Now look at the element with value 5 at position [1]: it must be greater than the values at [3] & [4]. Is 5 > 3 and 5 > 2? Yes, it is. If you continued to examine this second collection of data, you would find that it is a valid heap based on the definition of a heap.

    Read the article

  • [SOLVED] Install MythTV & 11.10 on Lenovo S12 (Intel Atom) with wireless

    - by keepitsimpleengineer
    This is how I installed Ubuntu 11.10 and the MythTV client on my Lenovo S12 (Intel Atom) laptop, running over WiFi (see the additional notes at the end). I did this because the upgrade from 11.04 bricked the laptop. Note that the partitions on the Lenovo standard disk were already in place for this installation. Also note that my LAN is set up for fixed IP addresses.

    - Downloaded and burned the 11.10 x86 Desktop Ubuntu CD.
    - Connected the power supply cord, LAN wire, and the external DVD USB drive.
    - Ran Windows XP and made sure the performance level was set to "Performance" and "Wireless" was enabled.
    - Booted the S12 from CD.
    - Disabled Networking from the icon on the upper left panel.
    - Edited Connections... -> "Wired connection 1": set the IP address, accepted the default netmask, and set the gateway. Also set the DNS server. (Good idea to check "Connection Information" here to verify everything's OK.)
    - Selected Install Ubuntu from the initial "Install" window.
    - Verified the three items were checked (required disk space available, plugged into a power source, & connected to the Internet).
    - Selected "Download updates while installing" and third-party software. Hit Continue...
    - At wireless, selected "don't want to connect ... WiFi ... now". Continue...
    - At Installation type, selected "Something else". Continue...
    - At the partition table, selected the ext4 Linux partition, set the mount point as "/", and marked it for formatting. Here I selected the main disk (/sda) for installing the boot manager. Continue...
    - Selected or verified my time zone. Continue...
    - Selected my keyboard layout. Continue...
    - Filled in the "who are you" fields. Make sure "password is required to sign in" is checked. Continue...
    - Chose a picture. Continue...
    - Selected "import no accounts". Continue...
    - Waited as the install crept along. If your screen goes blank, tap the space bar - apparently the screen saver/power plan does this. There are several progress bars; the longest was "Installing system", the next to the last one.
    - When the Installation Complete window appeared: Restart Now...
    - Waited as it stopped. The screen blanks, then the message "...remove...media...close tray...press enter" appears. I just unplugged the USB DVD and hit enter.
    - It was disheartening, but the screen turned Ubuntu purple-beige and nothing happened, so I held down the power key until it shut down, then pressed it again, and the Grub boot screen appeared. Select Ubuntu...
    - The screen went blank with the little flashing underscore cursor on it, and the disk light would occasionally flash. I hit the enter key, and eventually Ubuntu started. After a somewhat long time, the Unity desktop appeared.

    11.10, unlike earlier versions, retains the connection information; check this via the network icon on the upper left applet panel. Here the touch-pad/mouse quit working and I had to reboot. It takes an extremely long time to boot, sometimes requiring several power off / power on (cold boot) cycles. You can try to get the default network manager to work, but it might not for WiFi - it didn't on mine. Thanks to Chris at the URL; here's what to do:

    - Disconnect your wired Internet connection.
    - Input your wireless information into network manager.
    - Open a terminal (Unity dash, top of the icon totem; make sure the ruler&pen icon on the bottom is selected, 2nd from left) and type in "terminal". (Might be a good idea to drag and drop the terminal icon to the launcher; it's easy to get rid of later.)
    - Click to open a terminal, and type in:

          sudo rmmod acer_wmi && echo "blacklist acer_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf

      then hit enter and type in your password as asked.
    If you have correctly entered your WiFi information and you are near your AP, you should connect immediately. If not, see the URL above - you might need to replace "network manager" with "wicd", as I did with 11.04. Update the new 11.10: in the upper left panel applet, the gear icon holds a menu with a line about updating - it's the new way to invoke Update Manager. Your Lenovo S12 (Intel Atom) should now run the new Unity Ubuntu. Point your elbow at the ceiling and pat yourself on the back.

    Installing Mythbuntu Client 24.1

    - Open mythbuntu.org/repos (I urge you not to use Ubuntu Software Center directly for this).
    - Install Mythbuntu Repos: save the file (in ~/Downloads, the default), then run it - it will update your repositories so that you get the proper installation sources. It starts Ubuntu Software Center to do this; click Install... You will need your password.
    - A debconf window will open; make sure the check mark is in the little box "Would you like to activate...". Forward...
    - Which version? At the time of writing, the current "Stable" version was 24.1, so select 0.24.x... Forward...
    - Read the message, then Forward... Delete the downloaded file.
    - Install synaptic: (Unity dash, top of the icon totem; make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "synaptic". Click on the synaptic icon; Ubuntu Software Center will open and allow you to install the Synaptic package manager.
    - Open Synaptic (Unity dash as above, type in "Synaptic"; might be a good idea to drag and drop its icon to the launcher - it's easy to get rid of later). Read the intro, then close the intro window.
    - Type mythbuntu-control-centre into the Quick filter text box, then mark it for installation by clicking the box next to its name. Marvel at the additional to-be-installed items, then select "Mark"...
    - At the top of the synaptic window, click the "Apply" button. Marvel at the amount of stuff to be installed, then click "Apply". When finished, close the finished window and synaptic.
    - Open mythbuntu-control-centre (Unity dash as above, type in "mythbuntu"; again, dragging its icon to the launcher is handy). You can now configure and install the frontend. Go down the icon totem on the right side of the window and click as needed:
      - System roles: No Backend, Desktop Frontend, and Ubuntu Desktop. Apply... & Apply changes... & Password...
      - MySQL Configuration (from the backend: Setup -> General -> Alt-N(ext) -> Alt-N(ext) -> Setting Access Setup -> PIN code: ~~~~). Input the security key and click "Test Connection"; if OK, then Apply... & Apply... (Note: for some inexplicable reason, control centre hung on this, but when I restarted it, it was set properly.)
      - Graphics drivers: when I did this, only the Broadcom wireless driver showed up. I closed without doing anything.
      - Services: I enabled SSH & Samba. Apply... & Apply...
      - Repositories: asked & answered.
      - MythExport: pass - I believe it requires a backend on the same system.
      - Proprietary Codec Support: check to enable. Apply... & Apply...
      - System Updates: no action necessary; this will be part of the Ubuntu update mechanism.
      - Themes and Artwork: for themes, I selected Enable/Update all. Apply... & Apply...
      - Infrared & Startup behavior, and Plugins: defer until you know more.
    - Close software centre.
    - Open MythTV (Unity dash, top of the icon totem; make sure the ruler&pen icon on the bottom is selected, 2nd from left) and type in "mythTV". (Might be a good idea to drag and drop the MythTV icon to the launcher; it's easy to get rid of later.)
    - Incorrect Group Membership: fix this by clicking "Yes"...
    - Log out/end: do this by clicking "Yes"...
    - For my Lenovo S12, I had to manually restart Ubuntu - and still with the very long restart / no start / cold boot / reboot / pressing-the-shift-key routine.
    - Open MythTV again as above. It opens with "Select country & language"; do so. You then get a message with "No" - hit "Ok" and arrive at the Database Configuration 1/2 screen. You will need your backend password (from the backend: Setup -> General -> Database Configuration 1/2 -> Password). Enter it, then hit Alt-N to go to the next page. Select "Use custom id...", then enter a custom ID; I use the machine's name. Hit finish, and MythTV should start up with all default settings.
    - For the Lenovo S12, the first thing you want to do is set Playback Profiles to "Normal": from Setup -> TV Settings -> Playback -> Alt-N(ext) -> Alt-N(ext) -> Playback Profiles (3/8), change Current Video Playback Profile to "Normal". You can fiddle with this setting later.
    - The second thing is to get the sound going: from Setup -> General -> Alt-N(ext) -> Alt-N(ext) -> Alt-N(ext) -> Audio System. At the top of the screen is a button titled "Scan for audio devices"; move the highlight there and press the space bar. Then Tab down to "Audio Output Device:" and use the left/right arrows until "ALSA:hw:Card=Intel,DEV=0" is selected. Then Alt-N(ext) until "Finish". Now you should have sound.

    You should now have MythTV working nicely on the Lenovo S12.

    Notes about wireless: running the Lenovo S12 on wireless is demanding on both power and the WiFi connection; best results are obtained when running on mains power and a wired connection. I run my S12 on wireless - actually two serial wireless links through two access points, something that is not easy to achieve:

        Mythbuntu client-server (in den) <-> wireless link 1 <-> office LAN <-> wireless link 2 <-> Lenovo S12 Ubuntu 11.10

    The office LAN is fixed-IP behind an Untangle firewall router. There is another MythTV client on an Ubuntu 10.10 computer in the office (which has always worked well).

    Problem: the Mythbuntu\Win7 client hangs with frozen frames and a short segment of audio repeating.

    Hardware: a Rosewill RNX-G300EX IEEE 802.11b/g PCI wireless card on the client-server, and 2 Linksys WRT54GL wireless broadband routers on the LAN for link 1 and link 2.

    WRT54GL firmware: DD-WRT v24-sp2 (07/22/09) voip, set up to act as an access point. Note: many people advised this was an unworkable scheme, and in probably most cases it will be.

    Solution: set up DD-WRT with the following wireless settings...

        Basic -> Channel: different fixed channels, at least 4 apart; I use 6 & 11
        Basic -> Sensitivity Range (ACK timing): 50
        MAC filter -> Use filter: Enable; selected "Permit only clients listed to access...".
                      Requires adding MAC addresses in "Edit MAC Filter List". This causes the
                      54GLs to ignore any but the listed MAC addresses; downside, no "guest"
                      capability.
Advanced Basic rate: All Advanced CTS Protection Mode: Off Advanced Frame Burst: Enable Advanced Max associate clients: 4 for client link 2, 1 for client-server link 1 Advanced AP isolation: Enable Advanced Preamble: Short Advanced Afterburner: On Advanced Wireless GUI access: Off Advanced WMM support: Off Other settings: default for supplied firmware. Why I suspect this worked? The 54GL Access Points's with the firmware's setting are set to handle a multiple client, wide area situation. With these mods I reconfigured them for a small area, few client situation, disabling Advanced WMM probably the most important. In addition, the client mythtv when used all other users of its access point are turned off except for a Skype phone. Also, the client-server is set up to allow other connections though it's LAN connection, and these are used to connect the TV and disc players, not used when client is being used.

    Read the article

  • How to safely reboot via First Boot script

    - by unixman
With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system.

One of these requirements is to go through a cycle of reboots after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that helps here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of it - proceeding with logic that, while functional, doesn't take advantage of all the niceties bundled into Solaris 11 at no extra cost.

When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered and the customized banking software installed is the notion of a First Boot script. I had a working example of this at one of the Oracle OpenWorld sessions a few years ago; we've since improved our documentation and have introduced sections where this is described in better detail. If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting. There is a set of technologies involved that are jointly engineered in order to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll be faced with a need to wrap it into a SMF service and then package it into an IPS package. The IPS package would then need to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).

With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how the good old "at" command can satisfy the requirement of forcing an initial reboot. The syntax and references to commands here are based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1).

Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed, and I need my own little script to make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". And as you do, you'll be confronted with a command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle. svcbundle is an aide to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you.

(See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into 2 manifests - a SMF service manifest and an IPS package manifest. ....and if you're new to XML, well then -- buckle up. We have some examples of small first boot scripts shown here, as templates to build upon. The necessary structure of the script, particularly in leveraging SMF interfaces, is key; I won't go into that here, as it is covered nicely in the doc link above.

Let's say your script ends up looking like this (btw: if things appear to be cut off in your browser, just select them, copy and paste into your editor and they'll be grabbed - the source gets captured even though the browser may not render it "correctly" - ah, computers):

#!/bin/sh
# Load SMF shell support definitions
. /lib/svc/share/smf_include.sh

# If nothing to do, exit with temporary disable
completed=`svcprop -p config/completed site/first-boot-script-svc:default`
[ "${completed}" = "true" ] && \
    smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

# Obtain the active BE name from beadm: The active BE on reboot has an R in
# the third column of 'beadm list' output. Its name is in column one.
bename=`beadm list -Hd|nawk -F ';' '$3 ~ /R/ {print $1}'`
beadm create ${bename}.orig
echo "Original boot environment saved as ${bename}.orig"

# ---- Place your one-time configuration tasks here ----
# For example, if you have to pull some files from your own pre-existing system:
/usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
/usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
# Clearly the above 2 lines represent some logic that you'd have to customize to fit your needs.
#
# Perhaps additional things you may want to do here might be of use, like
# (gasp!) configuring ssh server for root login and X11 forwarding (for testing), and the like...
#
# Oh and by the way, after we're done executing all of our proprietary scripts we need to reboot
# the system in accordance with our operational software requirements to ensure all layered bits
# get initialized properly and pull in their own modules and components in the right sequence,
# subsequently.
# We need to set a "time bomb" reboot that would take place upon completion of this script.
# We already know that *this* script depends on the multi-user-server SMF milestone, so it should
# be safe for us to schedule a reboot for 5 minutes from now. The "at" job gets scheduled in the
# queue while our little script continues thru the rest of the logic.
/usr/bin/at now + 5 minutes <<REBOOT
/usr/bin/sync
/usr/sbin/reboot
REBOOT

# ---- End of your customizations ----

# Record that this script's work is done
svccfg -s site/first-boot-script-svc:default setprop config/completed = true
svcadm refresh site/first-boot-script-svc:default
smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described in the middle of Chapter 13, under "Creating an IPS package for the script and service".

Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

$ cd some_working_directory_for_this_project
$ mkdir -p proto/lib/svc/manifest/site
$ mkdir -p proto/opt/site
$ cp first-boot-script.sh proto/opt/site

Then you would create the service manifest file like so:

$ svcbundle -s service-name=site/first-boot-script-svc \
    -s start-method=/opt/site/first-boot-script.sh \
    -s instance-property=config:completed:boolean:false \
    -o first-boot-script-svc-manifest.xml

...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the service dependencies appropriately. That is to say, you want to properly specify which milestone should be reached before your service runs. There's a <dependency> section that looks like this, before you modify it:

<dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user"/>
</dependency>

So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:

<dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all">
    <service_fmri value="svc:/milestone/multi-user-server"/>
</dependency>

Save the file and validate it:

$ svccfg validate first-boot-script-svc-manifest.xml

Assuming there are no errors returned, copy the file over into the directory hierarchy:

$ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

Now that we've created the service manifest (.xml), create the package manifest (.p5m) file, named first-boot-script.p5m. Populate it as follows:

set name=pkg.fmri value=first-boot-script@1.0,5.11-0
set name=pkg.summary value="AI first-boot script"
set name=pkg.description value="Script that runs at first boot after AI installation"
set name=info.classification value=\
    "org.opensolaris.category.2008:System/Administration and Configuration"
file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
    path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
    group=sys mode=0444
dir path=opt/site owner=root group=sys mode=0755
file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
    owner=root group=sys mode=0555

Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry. You have 2 choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo, or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point, you have 2 choices as well: you can either create a repo that will be accessible by your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: we'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd.

We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

# zfs create rpool/export/MyIPSrepo
# pkgrepo create /export/MyIPSrepo
# svccfg -s pkg/server add MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg pkg application
# svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
# svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpg general framework
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
# svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
# svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
# svcadm refresh application/pkg/server:MyIPSrepo
# svcadm enable application/pkg/server:MyIPSrepo

Now that the IPS repo is created, we need to publish our package into it:

# pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest) and re-publish the IPS package.

Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above):

<publisher name="firstboot">
    <origin name="http://192.168.1.222:10081"/>
</publisher>

Further down, in the <software_data action="install"> section, add:

<name>pkg:/first-boot-script</name>

Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed, and whatever logic you've specified in it should be executed too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the trailing lines of your SMF script - this is normal and should be left as-is, since it helps provide an audit trail for you.

Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places, and then checks for the presence of, certain lock files in order to avoid rebooting unnecessarily. You may also want to, alternatively, remove the SMF service entirely, if you're unsure of the potential for someone to accidentally enable that service - even though its role in life is to run only once, upon the system's first boot.

That is how I spent a good chunk of my pre-Halloween time this week; hope yours was just as SPARCkly^H^H^H^H fun!

    Read the article

  • CodePlex Daily Summary for Thursday, June 27, 2013

CodePlex Daily Summary for Thursday, June 27, 2013

Popular Releases

V8.NET: V8.NET Beta Release v1.2.23.5: 1. Fixed some minor bugs. 2. Added the ability to reuse existing V8NativeObject types. Simply set the "Handle" property of the objects to a new native object handle (just write to it directly). Example: "myV8NativeObject.Handle = {V8Engine}.GlobalObject.Handle;" 3. Supports .NET 4.0! (forgot to compile against .NET 4.0 instead of 4.5, which is now fixed)

Stored Procedure Pager: LYB.NET.SPPager 1.10: Fixed bugs: 1. The total page count of the default stored procedure ".LYBPager" was wrong: with page size 10 and total item count 100, the total page count should be 10, but in the last version it was 11. 2. Updated some comments in English, avoiding garbled text.

StyleMVVM: 3.0.3: This is a minor feature and bug fix release. Features: SingletonAttribute - Shared(Permanently=true) has been obsoleted in favor of SingletonAttribute; StyleMVVMServiceLocator - releasing new nuget package that implements the ServiceLocator pattern. Bug fixes: Fixed problem with ModuleManager when used in a non-XAML application; Fixed problem with nuget packages in MVC & WCF project templates.

WinRT by Example Sample Applications: Chapters 1 - 10: Source code for chapters 1 - 9 with tech edits for chapters 1 - 5.

Terminals: Version 3.0 - Release: Changes since version 2.0: Choose 100% portable or installed version; Removed connection warning when running RDP 8 (Windows 8) client; Fixed Active Directory search; Extended Active Directory search by LDAP filters; Fixed single instance mode when running on Windows Terminal Server; Merged usage of Tags and Groups; Added column sorting option in tables; No UAC prompts on Windows 7; Completely new file persistence data layer; New MS SQL persistence layer (store data in SQL database)...

NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6

Python Tools for Visual Studio: 2.0 Beta: We're pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross-platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...

xFunc: xFunc 2.3: Improved the Simplify method; Rewrote the Differentiate method in the Derivative class; Changed API; Added project for .Net 2.0/3.0/3.5/4.5/Portable.

Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for WAMS. Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility (source code only). Source code is now available via CodePlex Git. Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml. JS only: protecting currentTime from returning infinity; some videos would cause currentTime to be infinity, which could cause errors in plugins expecting only finite values. (...

AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...

Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance improvements; Fix for deleted row in a data table; Added ability to ignore the collection order; Fix for ignoring by attributes.

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.95: Update parser to allow the CSS3 calc( function to nest. Add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...

VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: Changes: NEW: Added support for "ImgChili.net" links; FIXED: Auto Updater.

Document.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support; Improved User Interface; Minor bug fixes, improvements and speed-ups.

WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...

Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.

Indent Guides for Visual Studio: Indent Guides v13: Important: This release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version history: Changed in v13: Added page width guide lines; Added guide highlighting options; Fixed guides appearing over collapsed blocks; Fixed guides not appearing in newly opened files; Fixed some potential crashes; Fixed lines going through pragma statements; Various updates for VS 2012 and VS 2013; Removed VS 2010 support. Changed in v12.1: Fixed crash when unable to start...

Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d: Fluent Ribbon Control Suite 2.1.0 - Prerelease d (supports .NET 3.5, 4.0 and 4.5). Includes: Fluent.dll (with .pdb and .xml); Showcase Application; Samples (not for .NET 3.5); Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage); Resizing (ribbon reducing & enlarging principles); Galleries (Gallery in ContextMenu, InRibbonGallery); MVVM (shows how to use this library with the Model-View-ViewModel pattern); KeyTips; ScreenTips; Toolbars; ColorGallery. *Walkthrough (do...

Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary; MagickGeometry is no longer IDisposable; Renamed dll's so they include the platform name; Image profiles can now only be accessed and modified with ImageProfile classes; Renamed DrawableBase to Drawable; Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...

Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???

New Projects

Ariana - .NET Userkit: Ariana Userkit is a library which will allow a file to persist after being executed. It can allow manipulating processes, registry keys, directories etc.

Baidu Map: Baidu Map

BTC Trader: BTC Trader is a tool for the automated and manual trading on BTC exchanges.

Cafe Software Management: This is software for Sokul

Changix.Sql: Very, very simple ORM framework =)

Composite WPF Extensions: Extensions to Composite Client Application Library for WPF

Cross Site Collection Search Configurator: A solution for SharePoint to allow you to share your site collection based search configuration between site collections in the same web application.

CSS Extractor: Attempts to extract inline css styles from a HTML file and put them into a separate CSS file

CSS600Summer2013: This is a project for students enrolled in CSS 600 at University of Washington, Bothell campus.

Easy Backup Application: Easy Backup Application - application for easy scheduled backup of local or network folders.

Easy Backup Windows Service: Easy Backup Windows Service - application for easy scheduled backup of local or network folders.

fangxue: huijia

Gamelight: Gamelight is a 2D game library for Silverlight. It features 2D graphics manipulation to make the creation of games that don't adhere to traditional UI (such as sprite-based games) much easier. It also streamlines sound and input management.

General Method: General Method

ICE - Interface Communications Engine: ICE is an engine which makes it easy to create .NET plugins for running on servers for the purpose of communication between systems.

LogWriterReader using Named pipe: LogWriterReader using Named pipe

Lu 6.2 Viewer: Tool for analysing LU62 strings. Visualisation of node lists in tree structure; it offers the possibility to compare two lu62 strings in separate windows and visualise the differences between the two segments selected. Fw 2.0, developed by Riccardo di Nuzzo (http://www.dinuzzo.it)

Math.Net PowerShell: PowerShell environment for the Math.Net Numerics library; defines a few cmdlets that allow you to create and manipulate matrices using a straightforward syntax.

mycode: my code

NFC#: NFCSharp is a project aimed at building native C# support for NFC tag manipulation.

NJquery: jquery???????????????

Page Pro Image Viewer and Twain: Low level image viewer

Proyecto Ferreteria "CristoRey": First

Robótica e Automação: System developed by Alessandro José Segura de Oliveira - ME

Rodolpho Brock Projects: An MVC-'like' starter application for MSVS2010 - Microsoft Visual Studio 2010. UIL - User Interface Layer: <UIB - User Interface Base> - GUI: Graphic User Interface - WEB: ASP.NET MVC. BLL - Business Logic Layer: <BLB - Business Logic Base> - Only one core!!! DAL - Data Access Layer: <DAB - Data Access Base> - SQL Server - PostgreSQL. ORM - Object-Relational Mapping: <Object-Relational Base> - ADO.NET Entity Framework

Silverlight Layouts: Silverlight Layouts is a project for controls that behave as content placeholders with pre-defined GUI layouts for some common scenarios: frozen headers, frozen columns, circle layouts, etc.

Simple Logging Façade for C++: Port of Simple Logging Façade for C++, with a Java equivalent known as slf4j.

SQL Server Data Compare: * Compare the *data* in two databases record by record. * Generate *html compare results* with interactive interface. * Generate *T-SQL synchronization scripts*

UserVisitLogsHelp: UserVisitLogsHelp is an open-source HttpModule extension for logging access records. The component is developed in C#; it's a custom HttpModule component.

V5Soft: webcome v5soft

vcseVBA - version controlled software engineering in VBA: Simplify distributed application development for Visual Basic for Applications with this Excel Addin

Versionizer: Versionizer is a configuration management command-line tool developed using C# .NET for editing AssemblyInfo files of Microsoft .NET projects.

WebsiteFilter: WebsiteFilter is an open source asp.net web site to access IP address filtering httpmodule components.

Wordion: Wordion is a tool for foreign language learners to memorize words.

????????: ????????-?????????,??@???? ????。1、????????????,2、??????????????(????,???????,“?? ??”),3、?????>??>>????5???,4、??"@?/?"?Excel??????@??,5、??“??@",???????@ 。????QQ:6763745

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of the design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing.

This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new, and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived, which brought many new things like generics and new data structures. The "old-fashioned" Hashtables were coming to an end, and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary.

I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing, to make it easy to look up values which the user did enter. An often-followed route is to convert the string to upper case before putting it into the Hashtable:

Hashtable Table = new Hashtable();

void Add(string key, string value)
{
    Table.Add(key.ToUpper(), value);
}

This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary:

Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

void AddDict(string key, string value)
{
    DictTable.Add(key, value);
}

Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.
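To see why this matters in practice, here is a small usage sketch (the sample key and value are mine, not from the original post): with a predefined comparer supplied to the constructor, lookups succeed regardless of the casing the caller uses, and the stored keys keep their original spelling for display.

using System;
using System.Collections.Generic;

class ComparerDemo
{
    static void Main()
    {
        // The framework-supplied comparer does the case-insensitive matching for us.
        var table = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        table.Add("Munich", "+49 89");

        // Both lookups hit the same entry - no ToUpper() calls required.
        Console.WriteLine(table["munich"]);   // prints: +49 89
        Console.WriteLine(table["MUNICH"]);   // prints: +49 89

        // The original key casing is preserved for display.
        foreach (string key in table.Keys)
            Console.WriteLine(key);           // prints: Munich
    }
}

The design win is subtle: ToUpper() allocates a new string on every add and on every lookup, while the comparer hashes and compares case-insensitively without any extra allocation.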
Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.

Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

int Counter;

void IJustLearnedToUseThreads()
{
    var t1 = new Thread(ThreadWorkMethod);
    t1.Start();
    var t2 = new Thread(ThreadWorkMethod);
    t2.Start();
    t1.Join();
    t2.Join();

    if (Counter != 2 * Increments)
        throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
}

const int Increments = 10 * 1000 * 1000;

void ThreadWorkMethod()
{
    for (int i = 0; i < Increments; i++)
    {
        Counter++;
    }
}

It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never finishes successfully. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

Counter = Counter + 1

which involves reading the counter from its memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

inc dword ptr [ecx+10h]

which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into its cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU makes changes to a value which has changed in the meantime in main memory. From the exception you can see that we did increment the value 20 million times, but half of the changes were lost because we overwrote the already-changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking:

void Intermediate()
{
    var time = Stopwatch.StartNew();
    Action acc = ThreadWorkMethod_Intermediate;
    var ar1 = acc.BeginInvoke(null, null);
    var ar2 = acc.BeginInvoke(null, null);
    ar1.AsyncWaitHandle.WaitOne();
    ar2.AsyncWaitHandle.WaitOne();

    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

    Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Intermediate()
{
    for (int i = 0; i < Increments; i++)
    {
        lock (this)
        {
            Counter++;
        }
    }
}

This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can produce deadlocks because it locks on this. That is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object. When you read more about threading, you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but your code can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule, you should not try to write lock-free algorithms, since most likely you will fail to get them right on all CPU architectures.
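To make that warning concrete, here is a sketch (mine, not from the original post) of the usual retry-loop idiom around Interlocked.CompareExchange, the combined compare-and-exchange operation mentioned below; it atomically raises a shared maximum without taking a lock:

using System.Threading;

static class LockFreeMax
{
    static int _max;

    // Raise _max to at least 'candidate' without taking a lock.
    public static void Update(int candidate)
    {
        int seen = Volatile.Read(ref _max);
        while (candidate > seen)
        {
            // CompareExchange writes 'candidate' only if _max still equals 'seen',
            // and always returns the value it actually found there.
            int found = Interlocked.CompareExchange(ref _max, candidate, seen);
            if (found == seen)
                return;      // our write won
            seen = found;    // somebody else won; re-check against the fresher value
        }
    }
}

The subtlety is that the loop must re-check against the value returned by CompareExchange; forgetting that retry step silently drops updates under contention, which is exactly the kind of bug that passes every single-threaded test.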
Here is the lock-free version of our counter:

void Experienced()
{
    var time = Stopwatch.StartNew();
    Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
    Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
    t1.Wait();
    t2.Wait();

    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

    Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Experienced()
{
    for (int i = 0; i < Increments; i++)
    {
        Interlocked.Increment(ref Counter);
    }
}

Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

lock inc dword ptr [eax]

The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations, which lock the memory bus to prevent other processors from reading stale values, are such things. Second: this is the same increment instruction, prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile its methods to the matching CPU intrinsics, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever, look at the generated IL code, and try to reason about its efficiency, you will fail. Only the generated machine code counts.

Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently:

Level                                    Time in s
IJustLearnedToUseThreads                 flawed code
Intermediate (lock)                      1,5
Experienced (Interlocked.Increment)      0,3
Master                                   0,1 (1,0 for int[2])

That lock-free thing is really nice. But if you read more about CPU caches, cache coherency and false sharing, you can do even better:

int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8-way
                              // associative cache; try for yourself, e.g. 64 on more modern CPUs

void Master()
{
    var time = Stopwatch.StartNew();
    Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
    Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
    t1.Wait();
    t2.Wait();

    Counter = Counters[0] + Counters[Counters.Length - 1];

    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

    Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Master(object number)
{
    int index = (int)number;
    for (int i = 0; i < Increments; i++)
    {
        Counters[index]++;
    }
}

The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (by a factor of 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only that one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements, and to non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?
We started with correct code, but it lacked knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object exposed to the outside). Knowledge increased further, and we found a lock-free version of our counter, which is a nice, clean and perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and dig deeper can you come up with solutions no one else was even considering. I studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug, or a new way to restructure data to make something 10 times faster. Now I need to get some sleep….

    Read the article

  • What is crtbegin.o and crtbegin_dynamic.o?

    - by theactiveactor
    When debugging a link error (undefined reference to _dso_handle) using the Android x86 toolchain, I noticed it's statically linking crtbegin_dynamic.o. What is the purpose of this file? There is another similar crtbegin.o in the toolchain install directory that contains the missing symbol (_dso_handle). What is the difference between crtbegin.o and crtbegin_dynamic.o?

    Read the article

  • What alignment does HeapAlloc use

    - by Koert
I'm developing a general-purpose library which uses Win32's HeapAlloc. MSDN doesn't mention alignment guarantees for HeapAlloc, but I really need to know what alignment it uses so that I can avoid excessive padding. On my machine (Vista, x86), all allocations are aligned at 8 bytes. Is this true for other platforms as well?
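One way to reproduce the 8-byte observation empirically is a small probe; this is an illustrative sketch (not part of the question) written against the documented kernel32 entry points, here via C# P/Invoke so it can be run without a C toolchain:

using System;
using System.Runtime.InteropServices;

class HeapAlignmentProbe
{
    [DllImport("kernel32.dll")]
    static extern IntPtr GetProcessHeap();

    [DllImport("kernel32.dll")]
    static extern IntPtr HeapAlloc(IntPtr hHeap, uint dwFlags, UIntPtr dwBytes);

    [DllImport("kernel32.dll")]
    static extern bool HeapFree(IntPtr hHeap, uint dwFlags, IntPtr lpMem);

    static void Main()
    {
        IntPtr heap = GetProcessHeap();
        for (uint size = 1; size <= 128; size *= 2)
        {
            IntPtr block = HeapAlloc(heap, 0, (UIntPtr)size);

            // The low bits of the returned address reveal the effective alignment:
            // a zero low nibble means at least 16-byte alignment, 0x8 means 8-byte.
            long addr = block.ToInt64();
            Console.WriteLine("size {0,4}: low nibble 0x{1:X}", size, addr & 0xF);

            HeapFree(heap, 0, block);
        }
    }
}

Note that a probe like this only shows what one particular heap implementation happens to do on one machine; it is not a contractual guarantee, which is exactly the asker's problem.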

    Read the article
