Search Results


  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, where every element is processed in exactly the same way. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing.

    Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, which set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing.

    As an example, let's go back to our previous Parallel.ForEach example of contacting a customer. However, this time, we'll change the requirements slightly. In this case, we'll add an extra condition – if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    break;

                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement – we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to give concurrently running loop bodies a way to interact with each other, and it provides us with a way to break out of a loop. In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

        Parallel.ForEach(customers, (customer, parallelLoopState) =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    parallelLoopState.Break();
                else
                    customer.LastEmailContact = DateTime.Now;
            }
        });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance. It was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case so that we only set the property on the customer when we succeed. This same technique can be used to break out of a Parallel.For loop.

    That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code:

    Early termination in parallel routines is not immediate. Code will continue to run after you request a termination.

    This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement in a random location in our loop body – which could have horrific consequences for our code's maintainability.

    In order to understand and effectively write parallel routines, we, as developers, need a subtle but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but it is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

        var collection = Enumerable.Range(0, 20);
        var pResult = Parallel.ForEach(collection, (element, state) =>
        {
            if (element > 10)
            {
                Console.WriteLine("Breaking on {0}", element);
                state.Break();
            }
            Console.WriteLine(element);
        });

    If we run this, we get a result that may seem unexpected at first:

        0
        2
        1
        5
        6
        3
        4
        10
        Breaking on 11
        11
        Breaking on 12
        12
        9
        Breaking on 13
        13
        7
        8
        Breaking on 15
        15

    What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing. You can see that our other threads each tried to break as well, but since Break() was called on the element with a value of 11, that call decides which elements (0-10) must be processed.

    If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup.

    The basic rule of thumb for choosing between Break() and Stop() is the following:

    - Use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate.
    - Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement.

    Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you might otherwise expect. This is due to my second observation:

    Parallelizing a routine will almost always change its behavior.

    This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
    It is easy to change the behavior of your routine in very subtle ways by introducing parallelism. Often, the changes are not avoidable, even if they don't have any adverse side effects. This leads to my final observation for this post:

    Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
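    The post describes ParallelLoopState.Stop() and the IsStopped property but only shows Break() in code. The following is a minimal, hedged sketch of the Stop() pattern, reusing the hypothetical theStore and customers objects from the examples above; theStore.MatchesSearch is a made-up method used only for illustration, not the author's code:

        // Hedged sketch: search-style early termination with Stop().
        // theStore.MatchesSearch is a hypothetical predicate for illustration only.
        Parallel.ForEach(customers, (customer, state) =>
        {
            // If another thread has already requested a stop, skip remaining work.
            if (state.IsStopped)
                return;

            if (theStore.MatchesSearch(customer))
            {
                // Unlike Break(), Stop() does not guarantee that earlier elements
                // will be processed; it simply requests the quickest termination.
                state.Stop();
            }
        });

    As with Break(), elements already being processed will run to completion, which is why long-running bodies may want to poll IsStopped and return early.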

    Read the article

  • How to fix "ruby installation is missing psych (for YAML output)." on CentOS?

    - by ohho
    After rvm installation on CentOS 5.8: [rails@localhost ~]$ rvm -v rvm 1.16.17 [rails@localhost ~]$ which ruby ~/.rvm/rubies/ruby-1.9.3-p286/bin/ruby [rails@localhost ~]$ ruby -v ruby 1.9.3p286 (2012-10-12 revision 37165) [i686-linux] [rails@localhost ~]$ which gem ~/.rvm/rubies/ruby-1.9.3-p286/bin/gem there is a warning: $ gem -v /home/rails/.rvm/rubies/ruby-1.9.3-p286/lib/ruby/1.9.1/yaml.rb:56:in `<top (required)>': It seems your ruby installation is missing psych (for YAML output). To eliminate this warning, please install libyaml and reinstall your ruby. 1.8.24 I followed some advice: $ rvm pkg install libyaml Fetching yaml-0.1.4.tar.gz to /home/rails/.rvm/archives Extracting yaml-0.1.4.tar.gz to /home/rails/.rvm/src Prepare yaml in /home/rails/.rvm/src/yaml-0.1.4. Configuring yaml in /home/rails/.rvm/src/yaml-0.1.4. Compiling yaml in /home/rails/.rvm/src/yaml-0.1.4. Installing yaml to /home/rails/.rvm/usr Please note that it's required to reinstall all rubies: rvm reinstall all --force and then: $ rvm reinstall all --force Removing /home/rails/.rvm/src/ruby-1.8.7-p371... Removing /home/rails/.rvm/rubies/ruby-1.8.7-p371... No binary rubies available for: centos/5.8/i386/ruby-1.8.7-p371. Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies. Installing Ruby from source to: /home/rails/.rvm/rubies/ruby-1.8.7-p371, this may take a while depending on your cpu(s)... ruby-1.8.7-p371 - #downloading ruby-1.8.7-p371, this may take a while depending on your connection... ruby-1.8.7-p371 - #extracting ruby-1.8.7-p371 to /home/rails/.rvm/src/ruby-1.8.7-p371 ruby-1.8.7-p371 - #extracted to /home/rails/.rvm/src/ruby-1.8.7-p371 Applying patch /home/rails/.rvm/patches/ruby/1.8.7/stdout-rouge-fix.patch Applying patch /home/rails/.rvm/patches/ruby/1.8.7/no_sslv2.diff ruby-1.8.7-p371 - #configuring ruby-1.8.7-p371 - #compiling ruby-1.8.7-p371 - #installing Removing old Rubygems files... Installing rubygems-1.8.24 for ruby-1.8.7-p371 ... Installation of rubygems completed successfully. Saving wrappers to '/home/rails/.rvm/bin'. ruby-1.8.7-p371 - #adjusting #shebangs for (gem irb erb ri rdoc testrb rake). ruby-1.8.7-p371 - #importing default gemsets (/home/rails/.rvm/gemsets/) Install of ruby-1.8.7-p371 - #complete Please be aware that you just installed a ruby that requires 2 patches just to be compiled on up to date linux system. This may have known and unaccounted for security vulnerabilities. Please consider upgrading to Ruby 1.9.3-286 which will have all of the latest security patches. Making gemset ruby-1.8.7-p371 pristine. Making gemset ruby-1.8.7-p371@global pristine. Removing /home/rails/.rvm/src/ruby-1.9.3-p286... Removing /home/rails/.rvm/rubies/ruby-1.9.3-p286... No binary rubies available for: centos/5.8/i386/ruby-1.9.3-p286. Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies. Installing Ruby from source to: /home/rails/.rvm/rubies/ruby-1.9.3-p286, this may take a while depending on your cpu(s)... ruby-1.9.3-p286 - #downloading ruby-1.9.3-p286, this may take a while depending on your connection... ruby-1.9.3-p286 - #extracting ruby-1.9.3-p286 to /home/rails/.rvm/src/ruby-1.9.3-p286 ruby-1.9.3-p286 - #extracted to /home/rails/.rvm/src/ruby-1.9.3-p286 ruby-1.9.3-p286 - #configuring ruby-1.9.3-p286 - #compiling ruby-1.9.3-p286 - #installing Removing old Rubygems files... Installing rubygems-1.8.24 for ruby-1.9.3-p286 ... Installation of rubygems completed successfully. Saving wrappers to '/home/rails/.rvm/bin'. 
ruby-1.9.3-p286 - #adjusting #shebangs for (gem irb erb ri rdoc testrb rake). ruby-1.9.3-p286 - #importing default gemsets (/home/rails/.rvm/gemsets/) Install of ruby-1.9.3-p286 - #complete Making gemset ruby-1.9.3-p286 pristine. Making gemset ruby-1.9.3-p286@global pristine. Too bad, the warning is still there: $ gem -v /home/rails/.rvm/rubies/ruby-1.9.3-p286/lib/ruby/1.9.1/yaml.rb:56:in `<top (required)>': It seems your ruby installation is missing psych (for YAML output). To eliminate this warning, please install libyaml and reinstall your ruby. 1.8.24 How can I get rid of the warning? UPDATE: (applying rvm reinstall 1.9.3 --movable) $ rvm reinstall 1.9.3 --movable Removing /home/rails/.rvm/src/ruby-1.9.3-p286... Removing /home/rails/.rvm/rubies/ruby-1.9.3-p286... Fetching yaml-0.1.4.tar.gz to /home/rails/.rvm/archives Extracting yaml-0.1.4.tar.gz to /home/rails/.rvm/src Prepare yaml in /home/rails/.rvm/src/yaml-0.1.4. Configuring yaml in /home/rails/.rvm/src/yaml-0.1.4. Compiling yaml in /home/rails/.rvm/src/yaml-0.1.4. Installing yaml to /home/rails/.rvm/rubies/ruby-1.9.3-p286 Installing Ruby from source to: /home/rails/.rvm/rubies/ruby-1.9.3-p286, this may take a while depending on your cpu(s)... ruby-1.9.3-p286 - #downloading ruby-1.9.3-p286, this may take a while depending on your connection... ruby-1.9.3-p286 - #extracting ruby-1.9.3-p286 to /home/rails/.rvm/src/ruby-1.9.3-p286 ruby-1.9.3-p286 - #extracted to /home/rails/.rvm/src/ruby-1.9.3-p286 Applying patch /home/rails/.rvm/patches/ruby/1.9.3/ruby-multilib.patch Error running 'patch -F 25 -p1 -N -f -i /home/rails/.rvm/patches/ruby/1.9.3/ruby-multilib.patch', please read /home/rails/.rvm/log/ruby-1.9.3-p286/patch.apply.ruby-multilib.log There has been an error applying the specified patches. Halting the installation. Making gemset ruby-1.9.3-p286 pristine. Making gemset ruby-1.9.3-p286@global pristine.
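    The warning quoted above literally asks to "install libyaml and reinstall your ruby"; one commonly suggested way to do that on CentOS is to build against the system libyaml headers rather than rvm's bundled copy. The commands below are a hedged sketch only: libyaml-devel typically comes from the EPEL repository on CentOS 5.x, and this is not a verified fix for this particular machine.

        # Hedged sketch; package availability and the outcome are assumptions.
        sudo yum install libyaml-devel                 # system libyaml headers (EPEL may be required on CentOS 5.x)
        rvm reinstall ruby-1.9.3-p286                  # rebuild ruby so psych can link against libyaml
        ruby -r yaml -e 'puts YAML::ENGINE.yamler'     # should print "psych" if the rebuild picked it up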

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework. It's also no secret there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies. To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file. But, where's the adventure in that?

    With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog. If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times.

    So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet. With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few. I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness with each post. Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part...

    So, back to our regularly-scheduled post, already in progress. By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker. Wait -- you didn't find it? Understandable -- navigating the voluminous download library within Oracle can be a daunting task. It's pretty simple once you've done it a few times. Log in to the e-delivery site, and accept the license terms and restrictions. Then, you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products, and you'll click on Oracle Documaker Media Pack (as I went to press with this article, the version is 11.4). Finally, click the Download button next to Docupresentment (again, the version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe.

    At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms. One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning. I apologize in advance and will try to point these out along the way.
    Here's your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment.

    Once you've completed the installation, you'll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you! You'll find it by default in ~/docupresentment/docserv.

    Forging onward, the meat of this post is learning about some special configuration options. By now you've read the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the rubric of the configuration file, and in fact there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation). But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand!

    I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes. I am not responsible for any havoc you may wreak!

    Go to your installation directory and locate your docserv.xml file. Open it in your favorite XML editor. I happen to be fond of Notepad++ with the XML Tools plugin. Almost immediately you will behold the splendor of the configuration file. Just take a moment and let that sink in. Ok – moving on.

    If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings. Let's take a look at <section name="DocumentServer">. There are a few entries I'd like to discuss.

    First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment. Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable. A few things to point out:

    - The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file.
    - The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here).
    - The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment. More on that in another post.

    The <entry name="Instances"> setting allows you to specify the number of instances of Docupresentment that will be started. By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions. You may have many, many more instances than 2.

    Time for a sidebar on instances. An instance is nothing more than a separate process of Docupresentment. The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes. Each of these acts independently from the others, so if one crashes, it does not affect any others. In the case of a crashed process, the watchdog will start up another instance so the configured number of instances is always running. Bottom line: instance = Docupresentment process.

    And now, finally, to the settings which gave me pause on a not-too-long-ago implementation!
    Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes. You can configure the time that Docupresentment waits between checks of these files using the setting <entry name="FileWatchTimeMillis">. By default the number is 12000 ms, or 12 seconds. You can save yourself a few CPU cycles by extending this time, or by disabling the check altogether by setting the value to 0. This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes. Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to affect a change, you'll have to "restart" Docupresentment. Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post).

    The next item up: <entry name="FilePurgeTimeSeconds">. You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system. What you may not know is how those files are cleaned up. There are many rules in Docupresentment that cause the creation of temporary files. When these files are created, Docupresentment writes an entry into a properties file called the file cache. This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment. Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator. However, unlike my 'fridge cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time. You, my friend, have the power to control how often Docupresentment inspects the file cache. Simply set the value for <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set. Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be really taxed at the point of cache processing.

    Finally, after all of this, we get to the final setting I'm going to address in this post: <entry name="FilePurgeList">. The default is "filecache.properties". This establishes the root name for the Docupresentment file cache that I mentioned previously. Docupresentment creates a separate cache file for each instance based on this setting. If you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2. Feel free to open these up and check them out.

    I hope you've enjoyed this first foray into the configuration file of Docupresentment. If you did enjoy it, feel free to drop a comment, I welcome feedback. If you have ideas for other posts you'd like to see, please do let me know. You can reach me at [email protected]. 'Til next time! ###
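    To put the entries discussed above side by side, here is a hypothetical skeleton of the DocumentServer section of docserv.xml. Only the element and attribute names are taken from the post; the values are omitted and the exact way values are expressed in the real file is not shown here, so treat this as a map, not a copy of the shipped configuration:

        <configuration>
          <section name="DocumentServer">
            <!-- Skeleton only; consult your own docserv.xml for the real value syntax. -->
            <entry name="StartCommand"/>          <!-- executable launched when Docupresentment starts -->
            <entry name="StartArguments"/>        <!-- -Dids.configuration, -Dlogging.configuration, -Djava.endorsed.dirs, ... -->
            <entry name="Instances"/>             <!-- number of Docupresentment processes; default 2 -->
            <entry name="FileWatchTimeMillis"/>   <!-- config-file watch interval; default 12000, 0 disables -->
            <entry name="FilePurgeTimeSeconds"/>  <!-- how often the file cache is checked for expired temp files -->
            <entry name="FilePurgeList"/>         <!-- root name of the file cache; default filecache.properties -->
          </section>
        </configuration>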

    Read the article

  • Scripting with the Sun ZFS Storage 7000 Appliance

    - by Geoff Ongley
    The Sun ZFS Storage 7000 appliance has a user friendly and easy to understand graphical web based interface we call the "BUI" or "Browser User Interface".This interface is very useful for many tasks, but in some cases a script (or workflow) may be more appropriate, such as:Repetitive tasksTasks which work on (or obtain information about) a large number of shares or usersTasks which are triggered by an alert threshold (workflows)Tasks where you want a only very basic input, but a consistent output (workflows)The appliance scripting language is based on ECMAscript 3 (close to javascript). I'm not going to cover ECMAscript 3 in great depth (I'm far from an expert here), but I would like to show you some neat things you can do with the appliance, to get you started based on what I have found from my own playing around.I'm making the assumption you have some sort of programming background, and understand variables, arrays, functions to some extent - but of course if something is not clear, please let me know so I can fix it up or clarify it.Variable Declarations and ArraysVariablesECMAScript is a dynamically and weakly typed language. If you don't know what that means, google is your friend - but at a high level it means we can just declare variables with no specific type and on the fly.For example, I can declare a variable and use it straight away in the middle of my code, for example:projects=list();Which makes projects an array of values that are returned from the list(); function (which is usable in most contexts). With this kind of variable, I can do things like:projects.length (this property on array tells you how many objects are in it, good for for loops etc). Alternatively, I could say:projects=3;and now projects is just a simple number.Should we declare variables like this so loosely? In my opinion, the answer is no - I feel it is a better practice to declare variables you are going to use, before you use them - and given them an initial value. You can do so as follows:var myVariable=0;To demonstrate the ability to just randomly assign and change the type of variables, you can create a simple script at the cli as follows (bold for input):fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("Number of projects is: %d\n",projects.length);("." to run)> projects=152;("." to run)> printf("Value of the projects variable as an integer is now: %d\n",projects);("." to run)> .Number of projects is: 7Value of the projects variable as an integer is now: 152You can also confirm this behaviour by checking the typeof variable we are dealing with:fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> projects=152;("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> .var projects is of type objectvar projects is of type numberArraysSo you likely noticed that we have already touched on arrays, as the list(); (in the shares context) stored an array into the 'projects' variable.But what if you want to declare your own array? Easy! This is very similar to Java and other languages, we just instantiate a brand new "Array" object using the keyword new:var myArray = new Array();will create an array called "myArray".A quick example:fishy10:> script("." to run)> testArray = new Array();("." to run)> testArray[0]="This";("." 
to run)> testArray[1]="is";("." to run)> testArray[2]="just";("." to run)> testArray[3]="a";("." to run)> testArray[4]="test";("." to run)> for (i=0; i < testArray.length; i++)("." to run)> {("." to run)>    printf("Array element %d is %s\n",i,testArray[i]);("." to run)> }("." to run)> .Array element 0 is ThisArray element 1 is isArray element 2 is justArray element 3 is aArray element 4 is testWorking With LoopsFor LoopFor loops are very similar to those you will see in C, java and several other languages. One of the key differences here is, as you were made aware earlier, we can be a bit more sloppy with our variable declarations.The general way you would likely use a for loop is as follows:for (variable; test-case; modifier for variable){}For example, you may wish to declare a variable i as 0; and a MAX_ITERATIONS variable to determine how many times this loop should repeat:var i=0;var MAX_ITERATIONS=10;And then, use this variable to be tested against some case existing (has i reached MAX_ITERATIONS? - if not, increment i using i++);for (i=0; i < MAX_ITERATIONS; i++){ // some work to do}So lets run something like this on the appliance:fishy10:> script("." to run)> var i=0;("." to run)> var MAX_ITERATIONS=10;("." to run)> for (i=0; i < MAX_ITERATIONS; i++)("." to run)> {("." to run)>    printf("The number is %d\n",i);("." to run)> }("." to run)> .The number is 0The number is 1The number is 2The number is 3The number is 4The number is 5The number is 6The number is 7The number is 8The number is 9While LoopWhile loops again are very similar to other languages, we loop "while" a condition is met. For example:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." to run)> printf("Loop has ended and Counter is %d\n",counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Counter is 10Loop has ended and Counter is 11So what do we notice here? Something has actually gone wrong - counter will technically be 11 once the loop completes... Why is this?Well, if we have a loop like this, where the 'while' condition that will end the loop may be set based on some other condition(s) existing (such as the counter has reached 10) - we must ensure that we  terminate this iteration of the loop when the condition is met - otherwise the rest of the code will be followed which may not be desirable. In other words, like in other languages, we will only ever check the loop condition once we are ready to perform the next iteration, so any other code after we set "isTen" to be true, will still be executed as we can see it was above.We can avoid this by adding a break into our loop once we know we have set the condition - this will stop the rest of the logic being processed in this iteration (and as such, counter will not be incremented). So lets try that again:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>            break;("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." 
to run)> printf("Loop has ended and Counter is %d\n", counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Loop has ended and Counter is 10Much better!Methods to Obtain and Manipulate DataGet MethodThe get method allows you to get simple properties from an object, for example a quota from a user. The syntax is fairly simple:var myVariable=get('property');An example of where you may wish to use this, is when you are getting a bunch of information about a user (such as quota information when in a shares context):var users=list();for(k=0; k < users.length; k++){     user=users[k];     run('select ' + user);     var username=get('name');     var usage=get('usage');     var quota=get('quota');...Which you can then use to your advantage - to print or manipulate infomation (you could change a user's information with a set method, based on the information returned from the get method). The set method is explained next.Set MethodThe set method can be used in a simple manner, similar to get. The syntax for set is:set('property','value'); // where value is a string, if it was a number, you don't need quotesFor example, we could set the quota on a share as follows (first observing the initial value):fishy10:shares default/test-geoff> script("." to run)> var currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> set('quota','30G');("." to run)> run('commit');("." to run)> currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> .Current Quota is: 0Current Quota is: 32212254720This shows us using both the get and set methods as can be used in scripts, of course when only setting an individual share, the above is overkill - it would be much easier to set it manually at the cli using 'set quota=3G' and then 'commit'.List MethodThe list method can be very powerful, especially in more complex scripts which iterate over large amounts of data and manipulate it if so desired. The general way you will use list is as follows:var myVar=list();Which will make "myVar" an array, containing all the objects in the relevant context (this could be a list of users, shares, projects, etc). You can then gather or manipulate data very easily.We could list all the shares and mountpoints in a given project for example:fishy10:shares another-project> script("." to run)> var shares=list();("." to run)> for (i=0; i < shares.length; i++)("." to run)> {("." to run)>    run('select ' + shares[i]);("." to run)>    var mountpoint=get('mountpoint');("." to run)>    printf("Share %s discovered, has mountpoint %s\n",shares[i],mountpoint);("." to run)>    run('done');("." to run)> }("." to run)> .Share and-another discovered, has mountpoint /export/another-project/and-anotherShare another-share discovered, has mountpoint /export/another-project/another-shareShare bob discovered, has mountpoint /export/another-projectShare more-shares-for-all discovered, has mountpoint /export/another-project/more-shares-for-allShare yep discovered, has mountpoint /export/another-project/yepWriting More Complex and Re-Usable CodeFunctionsThe best way to be able to write more complex code is to use functions to split up repeatable or reusable sections of your code. 
This also makes your more complex code easier to read and understand for other programmers.We write functions as follows:function functionName(variable1,variable2,...,variableN){}For example, we could have a function that takes a project name as input, and lists shares for that project (assuming we're already in the 'project' context - context is important!):function getShares(proj){        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                printf("Discovered share: %s\n",shares[i]);        }        run('done'); // exit selected project}Commenting your CodeLike any other language, a large part of making it readable and understandable is to comment it. You can use the same comment style as in C and Java amongst other languages.In other words, sngle line comments use://at the beginning of the comment.Multi line comments use:/*at the beginning, and:*/ at the end.For example, here we will use both:fishy10:> script("." to run)> // This is a test comment("." to run)> printf("doing some work...\n");("." to run)> /* This is a multi-line("." to run)> comment which I will span across("." to run)> three lines in total */("." to run)> printf("doing some more work...\n");("." to run)> .doing some work...doing some more work...Your comments do not have to be on their own, they can begin (particularly with single line comments this is handy) at the end of a statement, for examplevar projects=list(); // The variable projects is an array containing all projects on the system.Try and Catch StatementsYou may be used to using try and catch statements in other languages, and they can (and should) be utilised in your code to catch expected or unexpected error conditions, that you do NOT wish to stop your code from executing (if you do not catch these errors, your script will exit!):try{  // do some work}catch(err) // Catch any error that could occur{ // do something here under the error condition}For example, you may wish to only execute some code if a context can be reached. 
If you can't perform certain actions under certain circumstances, that may be perfectly acceptable.For example if you want to test a condition that only makes sense when looking at a SMB/NFS share, but does not make sense when you hit an iscsi or FC LUN, you don't want to stop all processing of other shares you may not have covered yet.For example we may wish to obtain quota information on all shares for all users on a share (but this makes no sense for a LUN):function getShareQuota(shar) // Get quota for each user of this share{        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","----");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}Running Scripts Remotely over SSHAs you have likely noticed, writing and running scripts for all but the simplest jobs directly on the appliance is not going to be a lot of fun.There's a couple of choices on what you can do here:Create scripts on a remote system and run them over sshCreate scripts, wrapping them in workflow code, so they are stored on the appliance and can be triggered under certain circumstances (like a threshold being reached)We'll cover the first one here, and then cover workflows later on (as these are for the most part just scripts with some wrapper information around them).Creating a SSH Public/Private SSH Key PairLog on to your handy Solaris box (You wouldn't be using any other OS, right? :P) and use ssh-keygen to create a pair of ssh keys. I'm storing this separate to my normal key:[geoff@lightning ~] ssh-keygen -t rsa -b 1024Generating public/private rsa key pair.Enter file in which to save the key (/export/home/geoff/.ssh/id_rsa): /export/home/geoff/.ssh/nas_key_rsaEnter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /export/home/geoff/.ssh/nas_key_rsa.Your public key has been saved in /export/home/geoff/.ssh/nas_key_rsa.pub.The key fingerprint is:7f:3d:53:f0:2a:5e:8b:2d:94:2a:55:77:66:5c:9b:14 geoff@lightningInstalling the Public Key on the ApplianceOn your Solaris host, observe the public key:[geoff@lightning ~] cat .ssh/nas_key_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc= geoff@lightningNow, copy and paste everything after "ssh-rsa" and before "user@hostname" - in this case, geoff@lightning. 
That is, this bit:AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc=Logon to your appliance and get into the preferences -> keys area for this user (root):[geoff@lightning ~] ssh [email protected]: Last login: Mon Dec  6 17:13:28 2010 from 192.168.0.2fishy10:> configuration usersfishy10:configuration users> select rootfishy10:configuration users root> preferences fishy10:configuration users root preferences> keysOR do it all in one hit:fishy10:> configuration users select root preferences keysNow, we create a new public key that will be accepted for this user and set the type to RSA:fishy10:configuration users root preferences keys> createfishy10:configuration users root preferences key (uncommitted)> set type=RSASet the key itself using the string copied previously (between ssh-rsa and user@host), and set the key ensuring you put double quotes around it (eg. set key="<key>"):fishy10:configuration users root preferences key (uncommitted)> set key="AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc="Now set the comment for this key (do not use spaces):fishy10:configuration users root preferences key (uncommitted)> set comment="LightningRSAKey" Commit the new key:fishy10:configuration users root preferences key (uncommitted)> commitVerify the key is there:fishy10:configuration users root preferences keys> lsKeys:NAME     MODIFIED              TYPE   COMMENT                                  key-000  2010-10-25 20:56:42   RSA    cycloneRSAKey                           key-001  2010-12-6 17:44:53    RSA    LightningRSAKey                         As you can see, we now have my new key, and a previous key I have created on this appliance.Running your Script over SSH from a Remote SystemHere I have created a basic test script, and saved it as test.ecma3:[geoff@lightning ~] cat test.ecma3 script// This is a test script, By Geoff Ongley 2010.printf("Testing script remotely over ssh\n");.Now, we can run this script remotely with our keyless login:[geoff@lightning ~] ssh -i .ssh/nas_key_rsa root@fishy10 < test.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Testing script remotely over sshPutting it Together - An Example Completed Quota Gathering ScriptSo now we have a lot of the basics to creating a script, let us do something useful, like, find out how much every user is using, on every share on the system (you will recognise some of the code from my previous examples): script/************************************** Quick and Dirty Quota Check script ** Written By Geoff Ongley            ** 25 October 2010                    **************************************/function getUserQuota(usr){        run('select ' + usr);        var username=get('name');        var usage=get('usage');        var quota=get('quota');        var usage_g=usage / 1073741824; // convert bytes to gigabytes        var quota_g=quota / 1073741824; // as above        var quota_percent=0        if (quota > 0)        {                quota_percent=(usage / quota)*(100/1);        }        printf("    %20s        %8.2f           %8.2f           %d%%\n",username,usage_g,quota_g,quota_percent);        run('done'); // done with this selected user}function getShareQuota(shar){        //printf("DEBUG: selecting share 
%s\n", shar);        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","--------");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}function getShares(proj){        //printf("DEBUG: selecting project %s\n",proj);        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                share=shares[j];                getShareQuota(share);        }        run('done'); // exit selected project}function getProjects(){        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                var project=projects[i];                getShares(project);        }        run('done'); // exit context for all projects}getProjects();.Which can be run as follows, and will print information like this:[geoff@lightning ~/FISHWORKS_SCRIPTS] ssh -i ~/.ssh/nas_key_rsa root@fishy10 < get_quota_utilisation.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Project: another-project  SHARE: and-another                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                 geoffro            0.05            0.00        0%                   Billy            0.10            0.00        0%                    root            0.00            0.00        0%            testing-user            0.05            0.00        0%  SHARE: another-share                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                 geoffro            0.05            0.49        9%            testing-user            0.05            0.02        249%                   Billy            0.10            0.29        33%  SHARE: bob                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%  SHARE: more-shares-for-all                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                   Billy            0.10            0.00        0%            testing-user            0.05            0.00        0%                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%                 geoffro            0.05            0.00        0%  SHARE: yep                Username           Usage(G)       Quota(G)    
Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                   Billy            0.10            0.01        999%            testing-user            0.05            0.49        9%                 geoffro            0.05            0.00        0%Project: default  SHARE: Test-LUN    SKIPPING Test-LUN - This is NOT a NFS or CIFs share, not looking for users  SHARE: test-geoff                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                 geoffro            0.05            0.00        0%                    root            3.18           10.00        31%                    uucp            0.00            0.00        0%                  nobody            0.59            0.49        119%^CKilled by signal 2.Creating a WorkflowWorkflows are scripts that we store on the appliance, and can have the script execute either on request (even from the BUI), or on an event such as a threshold being met.Workflow BasicsA workflow allows you to create a simple process that can be executed either via the BUI interface interactively, or by an alert being raised (for some threshold being reached, for example).The basics parameters you will have to set for your "workflow object" (notice you're creating a variable, that embodies ECMAScript) are as follows (parameters is optional):name: A name for this workflowdescription: A Description for the workflowparameters: A set of input parameters (useful when you need user input to execute the workflow)execute: The code, the script itself to execute, which will be function (parameters)With parameters, you can specify things like this (slightly modified sample taken from the System Administration Guide):          ...parameters:        variableParam1:         {                             label: 'Name of Share',                             type: 'String'                  },                  variableParam2                  {                             label: 'Share Size',                             type: 'size'                  },execute: ....};  Note the commas separating the sections of name, parameters, execute, and so on. This is important!Also - there is plenty of properties you can set on the parameters for your workflow, these are described in the Sun ZFS Storage System Administration Guide.Creating a Basic Workflow from a Basic ScriptTo make a basic script into a basic workflow, you need to wrap the following around your script to create a 'workflow' object:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function() {// (basic script goes here, minus the "script" at the beginning, and "." at the end)}};However, it appears (at least in my experience to date) that the workflow object may only be happy with one function in the execute parameter - either that or I'm doing something wrong. 
As far as I can tell, after execute: you should only have a basic one function context like so:execute: function(){}To deal with this, and to give an example similar to our script earlier, I have created another simple quota check, to show the same basic functionality, but in a workflow format:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function () {        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                run('select ' + projects[i]);                shares=list('filesystem');                printf("Project: %s\n", projects[i]);                for(j=0; j < shares.length; j++)                {                        run('select ' +shares[j]);                        try                        {                                run('users');                                printf("  SHARE: %s\n", shares[j]);                                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","-------");                                users=list();                                for(k=0; k < users.length; k++)                                {                                        run('select ' + users[k]);                                        username=get('name');                                        usage=get('usage');                                        quota=get('quota');                                        usage_g=usage / 1073741824; // convert bytes to gigabytes                                        quota_g=quota / 1073741824; // as above                                        quota_percent=0                                        if (quota > 0)                                        {                                                quota_percent=(usage / quota)*(100/1);                                        }                                        printf("    %20s        %8.2f   %8.2f   %d%%\n",username,usage_g,quota_g,quota_percent);                                        run('done');                                }                                run('done'); // exit user context                        }                        catch(err)                        {                        //      printf("    %s is a LUN, Not looking for users\n", shares[j]);                        }                        run('done'); // exit selected share context                }                run('done'); // exit project context        }        }};SummaryThe Sun ZFS Storage 7000 Appliance offers lots of different and interesting features to Sun/Oracle customers, including the world renowned Analytics. Hopefully the above will help you to think of new creative things you could be doing by taking advantage of one of the other neat features, the internal scripting engine!Some references are below to help you continue learning more, I'll update this post as I do the same! 
    Enjoy...

    More information on ECMAScript 3
    A complete reference to ECMAScript 3, which will help you learn more of the details you may be interested in, can be found here: http://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf

    More Information on Administering the Sun ZFS Storage 7000
    The Sun ZFS Storage 7000 System Administration guide can be a useful reference point, and can be found here: http://wikis.sun.com/download/attachments/186238602/2010_Q3_2_ADMIN.pdf

    Read the article

  • Is there any difference between null and 0 when assigning to pointers in unsafe code?

    - by Eloff
    This may seem odd, but in C, (size_t)(void*)0 == 0 is not guaranteed by the language spec. Compilers are allowed to use any value they want for null (although they almost always use 0). In C#, you can assign null or (T*)0 to a pointer in unsafe code. Is there any difference? Is (long)(void*)0 == 0 guaranteed or not? Put another way, is IntPtr.Zero.ToInt64() == 0? MSDN has this to say about IntPtr.Zero: "The value of this field is not equivalent to null." Well, if you want to be compatible with C code, that makes a lot of sense - it'd be worthless for interop if it didn't convert to a C null pointer. But I want to know whether IntPtr.Zero.ToInt64() == 0, which may be possible even if internally IntPtr.Zero is some other value (the CLR may or may not convert null to 0 in the cast operation). Not a duplicate of this question.
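    For concreteness, here is a minimal sketch of the comparison being asked about, assuming a project compiled with the /unsafe option; it only prints the values in question and is not offered as an authoritative answer:

        using System;

        class NullVsZero
        {
            unsafe static void Main()
            {
                void* p = null;       // null literal assigned to a pointer
                void* q = (void*)0;   // explicit zero cast to a pointer

                Console.WriteLine((long)p);               // numeric value of the null pointer
                Console.WriteLine((long)q);               // numeric value of (void*)0
                Console.WriteLine(IntPtr.Zero.ToInt64()); // the value the MSDN quote above refers to
                Console.WriteLine(p == q);                // pointer equality between the two
            }
        }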

    Read the article

  • Create Jinja2 macros that put content in separate places

    - by Brian M. Hunt
    I want to create a table of contents and endnotes in a Jinja2 template. How can one accomplish these tasks? For example, I want to have a template as follows: {% block toc %} {# ... the ToC goes here ... #} {% endblock %} {% include "some other file with content.jnj" %} {% block endnotes %} {# ... the endnotes go here ... #} {% endblock %} Where the some other file with content.jnj has content like this: {% section "One" %} Title information for Section One (may be quite long); goes in Table of Contents ... Content of section One {% section "Two" %} Title information of Section Two (also may be quite long) <a href="#" id="en1">EndNote 1</a> <script type="text/javascript">...(may be reasonably long) </script> {# ... Everything up to here is included in the EndNote #} Where I say "may be quite/reasonably long" I mean to say that it can't reasonably be put into quotes as an argument to a macro or global function. I'm wondering if there's a pattern that may accommodate this, within the framework of Jinja2. My initial thought is to create an extension, so that one can have a block for sections and end-notes, like so: {% section "One" %} Title information goes here. {% endsection %} {% endnote "one" %} <a href="#">...</a> <script> ... </script> {% endendnote %} Then have global functions (that pass in the Jinja2 Environment): {{ table_of_contents() }} {% include ... %} {{ endnotes() }} However, while this will work for endnotes, I'd presume it requires a second pass by something for the table of contents. Thank you for reading. I'd be much obliged for your thoughts and input. Brian

    Read the article

  • CGContextFillEllipseInRect: invalid context

    - by hakewake
    Hi all, when I launch my app, I keep getting this CGContextFillEllipseInRect: invalid context error. My app is simply to make the circle 'pinchable'. The debugger shows me this:

    [Session started at 2010-05-23 18:21:25 +0800.]
    2010-05-23 18:21:27.140 Erase[8045:207] I'm being redrawn.
    Sun May 23 18:21:27 Sidwyn-Kohs-MacBook-Pro.local Erase[8045] <Error>: CGContextFillEllipseInRect: invalid context
    2010-05-23 18:21:27.144 Erase[8045:207] New value of counter is 1
    2010-05-23 18:21:27.144 Erase[8045:207] I'm being redrawn.
    Sun May 23 18:21:27 Sidwyn-Kohs-MacBook-Pro.local Erase[8045] <Error>: CGContextFillEllipseInRect: invalid context
    2010-05-23 18:21:27.144 Erase[8045:207] New value of counter is 2
    2010-05-23 18:21:27.144 Erase[8045:207] I'm being redrawn.
    Sun May 23 18:21:27 Sidwyn-Kohs-MacBook-Pro.local Erase[8045] <Error>: CGContextFillEllipseInRect: invalid context
    2010-05-23 18:21:27.144 Erase[8045:207] New value of counter is 3
    2010-05-23 18:21:27.145 Erase[8045:207] I'm being redrawn.
    Sun May 23 18:21:27 Sidwyn-Kohs-MacBook-Pro.local Erase[8045] <Error>: CGContextFillEllipseInRect: invalid context
    2010-05-23 18:21:27.145 Erase[8045:207] New value of counter is 4
    2010-05-23 18:21:27.148 Erase[8045:207] I'm being redrawn.
    Sun May 23 18:21:27 Sidwyn-Kohs-MacBook-Pro.local Erase[8045] <Error>: CGContextFillEllipseInRect: invalid context
    2010-05-23 18:21:27.149 Erase[8045:207] New value of counter is 5
    2010-05-23 18:21:27.150 Erase[8045:207] I'm being redrawn.
    Sun May 23 18:21:27 Sidwyn-Kohs-MacBook-Pro.local Erase[8045] <Error>: CGContextFillEllipseInRect: invalid context
    2010-05-23 18:21:27.150 Erase[8045:207] New value of counter is 6

    My implementation file is:

    //
    //  ImageView.m
    //  Erase
    //

    #import "ImageView.h"
    #import "EraseViewController.h"

    @implementation ImageView

    -(void)setNewRect:(CGRect)anotherRect{
        newRect = anotherRect;
    }

    - (id)initWithFrame:(CGRect)frame {
        if (self = [super initWithFrame:frame]) {
            newRect = CGRectMake(0,0,0,0);
            ref = UIGraphicsGetCurrentContext();
        }
        return self;
    }

    - (void)drawRect:(CGRect)rect {
        static int counter = 0;
        CGRect veryNewRect = CGRectMake(30.0, 210.0, 60.0, 60.0);
        NSLog(@"I'm being redrawn.");
        if (counter == 0){
            CGContextFillEllipseInRect(ref, veryNewRect);
        }
        else{
            CGContextFillEllipseInRect(ref, rect);
        }
        counter++;
        NSLog(@"New value of counter is %d", counter);
    }

    - (void)setNeedsDisplay{
        [super setNeedsDisplay];
        [self drawRect:newRect];
    }

    - (void)dealloc {
        [super dealloc];
    }

    @end

    I've got two questions: 1) Why is it updating counter 6 times? I removed the line [super setNeedsDisplay]; but it becomes 4 times. 2) What is that invalid context error? Thanks guys.

    Read the article

  • Different Flavors of Leases Back On

    - by Theresa Hickman
    Given the continued interest regarding the proposed changes to Lease Accounting, I decided to write another entry on this controversial topic with colorful commentary from our resident accounting expert, Seamus Moran. Background (A History Lesson) Back in 1976, the FASB issued FAS 13, “Accounting for Leases” that permitted leases to be either an operating lease or capital (finance) lease. In substance, operating leases are a form of off-balance sheet financing. According to Seamus, operating leases date back to the launch of the Boeing 707 in the 1950s.  Because the aircraft was so much more expensive than previous aircrafts, the industry came up with the operating lease concept to accommodate these jet liners that dominated air transport.  How it worked was the bank would buy the plane and lease it to the airline.  Because the bank never controlled or flew the plane, they never placed the asset on their balance sheet, and because the airline never owned the plane, they didn’t place it on their balance sheet either. They simply treated the monthly lease payments as rental expenses on the P&L.   August 2010 Original Lease Accounting Changes In August 2010, FASB and IASB decided to overhaul lease accounting as part of their joint commitment “to insure that investors and other users of financial statements are provided useful, transparent, and complete information about leasing transactions in the financial statements.”  Some say that the current lease accounting standards are broken because it keeps assets off the balance sheet, hidden from investors’ view. The original proposal abolished operating leases and only permitted capital leases where all leases would be recorded on the balance sheet as assets and liabilities. The asset side would reflect the right to use the asset for the leased term, and the liability side would reflect the obligation to make lease payments.   Why Companies Were Freaking Out According to the SEC, the financial impact of the aforementioned lease changes was estimated to add more than $1.3 trillion of operating lease obligations to corporate balance sheets. Many companies in various industries, especially retail, are concerned because the changes are significant and will impact existing leases with no grandfather clause for existing operating leases. Of course, the banks and airlines I mentioned earlier really hate this because neither wants to report the airplane (now costing around $60 M) as an asset. Regular companies were concerned that they would have to report routine short term leases of real estate or equipment as fixed assets, even though they were really just longer term rentals.  One company we spoke to leased roadside billboards, and really did not consider them to be fixed assets in any way. Obviously, these changes would have had a profound and lasting effect on a company’s financial and real estate strategies and significantly impact its financial statements.  Financial statements would show higher depreciation and interest expense with significantly higher total assets and debt. In terms of financial metrics, they’re negatively impacted. It would raise a company’s debt-to-capital ratio to reflect the higher debt compared to equity, it would negatively impact their return-on-assets because now companies will appear more asset intensive, and it will decrease EPS, lowering shareholder ROI. Feb. 2011 Recent Update The comment period on leases closed in December 2010. 
The FASB and the IASB have met several times since then and published their initial responses to the input they received from the various interested parties. They are "redeliberating" the principles involved in Lease Accounting. Some of the issues they are looking at include:

- The core definition of a lease. This will articulate principles on what is a lease and what is "not-a-lease." One theory or supposition is that they might define a lease as the transfer of certain but not all major ownership attributes for a certain period of time. So a year's lease of an aircraft might be a "lease," but a year's lease of half a floor in an office building would be "not-a-lease." The ownership attributes transferred from the core owner to the user are different; the airline must maintain, paint, and do whatever it needs to do on the aircraft. However, the office renter will have strictly limited rights in respect to the rented space.
- The differences between a lease contract and a service contract. Even if they call them "leases" for the purpose of commercial law, a service contract might not be accounted for as a lease.
- The accounting to be done by the lessee. They would define when the bank or landlord would retain the asset on their balance sheet, and perhaps by implication, when the lessor would not need to include the asset on theirs. So if the finance house keeps the airplane or office on their balance sheet, the tenant doesn't need to. I'm not sure that I can draw the opposite conclusion where the finance house doesn't report but the tenant must.
- The difference, if any, between a financing lease and other leases, and the implications to the accounting.
- The present value calculation when renewable terms exist. They have reduced the circumstances in which one must look at the renewable terms of a lease in calculating the present value. In most circumstances, you will use the lease term rather than the potential renewable term.

Their latest discussion was this past week, but its contents were not available at the time of writing this entry. For more details, the results of the discussions are posted on both the FASB and the IASB websites.

Implied Software Changes

Whatever the final rules turn out to be, all ERP systems, such as Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards, and Oracle Hyperion, will need to change their software to accommodate the new rules. The following lists some changes that might have to be made to accounting software, depending on what the final standards will be in June 2011:

- Lease tracking may require modifications, with tracking of additional lease details that might require a centralized repository to maintain
- Accounting may need to be modified, as there are many changes to how capital leases and the new "other than finance" leases are accounted for on both the lessee and lessor side. For example, valuation, amortization, and disclosure will be considerably different, requiring different types of data to be captured
- Companies may need to modify their chart of accounts depending on how they want to track leases, which could then impact financial reporting and consolidation
- Business processes may require changes, which could then impact internal controls
- Software applications may need to perform more advanced computations on leases
- Reports and KPIs may need to reflect new operating metrics

Hold Onto Your Seats

Before you redo all your lease agreements and call your software vendors asking when the changes to the software will be made, remember that the rules are not finalized yet and, from appearances, will not reflect the proposals in the exposure draft. Not only are there objections to putting the operating lease assets on anyone's balance sheet, there are lots of objections to subjectivity and the data required for the valuation. According to Seamus, there is huge opposition from New York bankers, the airlines, the EU, the Communist Party of China (since it impacts their exporting business), and Republicans (hearing complaints from small and large businesses). Even if everyone can agree on the proposed changes, 2013 might be the earliest that companies would need to change how they report leases. The Boards will finish their deliberations in April, May or June 2011. As we've seen with other Exposure Drafts, if the changes are minor and the principles met the General Acceptance consensus criteria, the Standard could be finalized at that time. However, if substantial changes are made, a fresh exposure draft, comment period, and review period might be involved, too. Seamus added an interesting perspective. Even if the proposed changes do pass, don't you think our customers, such as Boeing, GE Capital, United Airlines, etc., will be clever enough to come up with a new kind of financing arrangement that complies with the new accounting? How about the large retail customers, such as Best Buy and Macerich? Don't you think they might simply cut deals around retail locations with new contracts that prevent their leases from being capital leases? Instead of blindly adapting the software to meet the principles outlined in the final standard, our software needs to accommodate how businesses will respond to the new rules. We cannot know our customers' responses until the rules are finalized. Oracle is aware of the potential changes and is staying abreast of the developments through our domain expertise staff, our relationship with customers, our market awareness, and, of course, our relationships with the Big 4. This is part of our normal process with respect to worldwide regulatory compliance. Oracle products have been IFRS and GAAP compliant for years and we will continue to maintain those standards going forward.

    Read the article

  • Adding interactions to admin pages generated by the admin generator

    - by Stick it to THE MAN
    I am using Symfony 1.2.9 (with Propel ORM) to create a website. I have started using the admin generator to implement the admin functionality. I have come across a slight 'problem', however. My models are related (e.g. one table may have several 1:N relations and N:N relations). I have not found a way to address this satisfactorily yet. As a tactical solution (for list views), I have decided to simply show the parent object, and then add interactions to show the related objects. I'll use a Blog model to illustrate this. Here are the relationships for a blog model: an N:M relationship with Blogroll (models a blog roll) and a 1:N relationship with Blogpost (models a post submitted to a blog). I had originally intended on displaying the (paged) blogpost list for a blog, when it was selected, using AJAX, but I am struggling enough with the admin generator as it is, so I have shelved that idea - unless someone is kind enough to shed some light on how to do this. Instead, what I am now doing (as a tactical/interim solution) is adding interactions to the list view which allow a user to: view a list of the blog roll for the blog on that row, view a list of the posts for the blog on that row, and add a post for the blog on that row. In all of the above, I have written actions that will basically forward the request to the appropriate action (admin generated). However, I need to pass some parameters (like the blog id etc.), so that the correct blog roll or blog post list etc. is returned. I am sure there is a better way of doing what I want to do, but in case there isn't, here are my questions: How may I obtain the object that relates to a specific row (of the clicked link) in the list view (e.g. the blog object in this example)? Once I have the object, I may choose to extract various fields: id etc. How can I pass these arguments to the admin generated action? Regarding the second question, my guess is that this may be the way to do it:

    public function executeMyAddedBlogRollInteractionLink(sfWebRequest $request)
    {
        // get the object *somehow* (I'm guessing this may work)
        $object = $this->getRoute()->getObject();

        // retrieve the required parameters from the object, and build a query string
        $query_str = $object->getId();

        // forward the request to the generated code (action to display blogroll list in this case)
        $this->forward('backendmodulename', "getblogrolllistaction?params=$query_str");
    }

    This feels like a bit of a hack, but I'm not sure how else to go about it. I'm also not too keen on sending params (which may include user_id etc.) via a GET; even a POST is not that much safer, since it is fairly straightforward to see what requests a browser is making. If there is a better way than what I suggest above to implement this kind of administration that is required for objects with 1 or more M:N relationships, I will be very glad to hear the "recommended" way of going about it. I remember reading about marking certain actions as internal, i.e. callable from only within the app. I wonder if that would be useful in this instance?

    Read the article

  • Using one LINQ statement with different parameters

    - by Brettski
    I have a pretty complex LINQ statement I need to access from different methods. Each of these methods may need to see the resulting data with different parameters. For one method it may be a project code, for another it may be a language. The statement is pretty much the same; it's just the where part which changes. I have not been able to figure out how to use different where clauses without duplicating the entire LINQ statement, and that just isn't DRY enough for me. For example (greatly simplified):

    var r = from c in customer where c.name == "some name"
    // or it may be
    var r = from c in customer where c.customerId == 8

    Is there a way to have both of these in the same statement so I can use one or the other based on what I am doing? I tried an if statement to use one of the where clauses or the other, and that didn't go over very well.
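    One way to keep the shared query in a single place is to pass the varying where clause in as a predicate; a rough sketch (the Customer class and field names are assumptions based on the question):

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerQueries
    {
        // The complex, shared part of the query lives here once; callers pass in only the filter.
        public static IQueryable<Customer> FindCustomers(
            IQueryable<Customer> customers,
            Expression<Func<Customer, bool>> predicate)
        {
            return customers.Where(predicate)
                            .OrderBy(c => c.Name); // stand-in for the rest of the complex statement
        }
    }

    // Each method then supplies only the condition that varies, for example:
    // var byName = CustomerQueries.FindCustomers(db.Customers, c => c.Name == "some name");
    // var byId   = CustomerQueries.FindCustomers(db.Customers, c => c.CustomerId == 8);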

    Read the article

  • Do we affect multiple users in ASP.NET when we set the Thread CurrentCulture/CurentUICulture?

    - by Nikolay
    When we set the CurrentCulture and/or CurrentUICulture, we do this on the current thread, like this:

    Thread.CurrentThread.CurrentCulture = new CultureInfo("en-GB");
    Thread.CurrentThread.CurrentUICulture = new CultureInfo("en-GB");

    Does this mean we could affect the culture settings of multiple users of our web application, as their requests may reuse threads from the pool? I am working on an ASP.NET MVC application where each user may have their own culture setting specified in his/her account data. When the user logs in, the culture setting is retrieved from the database and has to be set as the current culture. My worry is that setting the current culture on the current thread may affect another user request reusing this thread. I am even more concerned reading this: ASP.NET not only uses a thread pool, but may switch threads during request processing.
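    For context, one commonly used approach is to set the culture at the start of every request rather than once at login, so whichever pooled thread ends up serving the request gets the right value; a sketch (the session key and default culture are assumptions):

    using System;
    using System.Globalization;
    using System.Threading;
    using System.Web;

    public class MvcApplication : HttpApplication
    {
        protected void Application_AcquireRequestState(object sender, EventArgs e)
        {
            string cultureName = "en-GB"; // fallback default
            var session = HttpContext.Current.Session;
            if (session != null && session["CultureName"] != null)
                cultureName = (string)session["CultureName"];

            var culture = new CultureInfo(cultureName);
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = culture;
        }
    }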

    Read the article

  • Does anyone know of Telerik deals?

    - by Seagull
    This may be the wrong forum to ask... But since there are so many Telerik fans here, I was wondering if anyone is aware of discount coupons or upcoming deals the company may have? I ask because VS2010 is coming, and I suspect they may therefore have deals released at a similar timeframe. I am a bit 'cheap' because I am just starting a one-man business...

    Read the article

  • ActiveRecord Logic Challenge - Smart Ways to Use AR Timestamp

    - by keruilin
    My question is somewhat specific to my app's issue, but the answer should be instructive in terms of use cases for association logic and the record timestamp. I have an NBA pick 'em game where I want to award badges for picking x number of games in a row correctly -- 10, 20, 30. Here are the models, attributes, and associations in play:

    User
      id

    Pick
      id
      result # (values can be 'W', 'L', 'T', or nil. nil means hasn't resolved yet.)
      resolved # (values can be true, false, or nil.)
      game_time
      created_at
      *Note: There are cases where a pick's result field and resolved field will always be nil. Perhaps the game was cancelled.

    Badge
      id

    Award
      id
      user_id
      badge_id
      created_at

    User has many awards. User has many picks. Pick belongs to user. Badge has many awards. Award belongs to user. Award belongs to badge.

    One of the important rules here to capture in the code is that while a user can be awarded multiple streak badges (e.g., a user can win multiple 10-streak badges), the user CAN'T be awarded another badge for consecutive winning picks that were previously granted an award badge. One way to think of this is that all the dates of the winning picks must come after the date that the streak badge was awarded. For example, let's pretend that a user made 13 winning picks from May 5 to May 8, with the 10th winning pick occurring on May 7, and the last 3 on May 8. The user would be awarded a 10-streak badge on May 7. Now if the user makes another winning pick on May 9, the code must recognize that the user only has a streak of 4 winning picks, not 14, because the user already received an award for the first 10. Now let's assume that the user makes 6 more winning picks. In this case, the code must recognize that all winning picks since May 5 are eligible for a 20-streak badge award, and make the award.

    Another important rule is that when looking at a winning streak, we don't care about the game time, but rather when the pick was made (created_at). For example, let's say that Team A plays Team B on Sat. And Team C plays Team D on Sun. If the user picks Team C to beat Team D on Thurs, and Team A to beat Team C on Fri, and Team A wins on Sat, but Team C loses on Sun, then the user has a losing streak of 1. So when must the streak-check kick in? As soon as a pick is a win. If it's a loss or tie, no point in checking. One more note: if the pick is not resolved (false) and the result is nil, that means the game was postponed and must be factored out.

    With all that said, what is the most efficient, effective and lean way to determine whether a user has a 10-, 20- or 30-win streak?

    Read the article

  • time difference on heroku server

    - by railsnew
    There seems to be a time difference on the Heroku server.

    >> Customer.last.id
    => 584
    >> Customer.last.created_at
    => Thu, 06 May 2010 01:43:20 UTC +00:00
    >> Time.zone
    => #<ActiveSupport::TimeZone:0x2b1dec47e5c0 @utc_offset=0, @tzinfo=#<TZInfo::DataTimezone: Etc/UTC>, @name="UTC">
    >> Time.now
    => Wed May 05 19:05:15 -0700 2010
    >> Time.now.zone
    => "PDT"

    Notice that the current time is May 05 19:05; however, the created_at date for the last record is May 06 01:43. This does not make any sense. What can be causing this, and how would I go about fixing it?

    Read the article

  • PHP/MYSQL Year Month table for news archive

    - by ee12csvt
    Hi all, I am creating a news archive for my site and want to create an overview page from the following DB table

    id       - Unique identifier
    newsDate - in a format XXXX-XX-XX
    title    - News Item title
    details  - News item
    photo    - News Item Photo
    caption  - News Item Photo caption
    update   - Timestamp for record

    The news on the site is current but I hope to add some data from years gone by over the next few months and years. What I want to do is create a new line for each year and highlight the month which corresponds to a record in the DB table, similar to that below.

    2002 JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
    2004 JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
    2005 JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
    2008 JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC

    Any help or advice would be much appreciated
    Cheers

    Read the article

  • ADO.NET Multiple simultaneous reads from an open database.

    - by Deverill
    Answer not needed - my logic was wrong and this question is invalid. Charles helped me see where I went off track. Thanks. I have a utility that moves data from one source to another. In the process of writing the record I check to see if it exists and do an update/insert as necessary. The difficulty I have is that as I'm writing the main record info, there is a 2nd table for "custom data" that I have to check to see if it exists and do an update/insert for that as well. Example: I may be loading a pencil sharpener that may or may not exist. While I'm writing it into the destination it has characteristics such as style, color, etc., and each of them may or may not exist. As written, I seem to need to have 2 DataReaders open, one for the sharpener and one to check for and update color. I am new to ADO.NET, but not to programming, and it's more complicated than I listed, but for sanity's sake I didn't put all the details. My question is: what am I missing? You can't have 2 readers open at the same time on a connection, yet I can't close the first if I'm stepping through all products. It seems inefficient to have 2 connections, readers, etc. for this. Is there a feature of ADO.NET DBs that I'm missing? How would you do it? Thanks!
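    One feature that is easy to miss here is SQL Server's Multiple Active Result Sets (MARS), which lets a second command run while a reader is still open on the same connection; a rough sketch, with the connection string and table/column names invented for illustration:

    using System;
    using System.Data.SqlClient;

    class MarsSketch
    {
        static void Main()
        {
            // MultipleActiveResultSets=True is what allows the nested command below (SQL Server 2005+).
            var connStr = "Data Source=.;Initial Catalog=Catalog;Integrated Security=True;MultipleActiveResultSets=True";
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (var products = new SqlCommand("SELECT Id, Name FROM Product", conn).ExecuteReader())
                {
                    while (products.Read())
                    {
                        // Second command on the same open connection, while the first reader is still open.
                        using (var lookup = new SqlCommand(
                            "SELECT COUNT(*) FROM ProductColor WHERE ProductId = @id", conn))
                        {
                            lookup.Parameters.AddWithValue("@id", products.GetInt32(0));
                            bool hasColor = (int)lookup.ExecuteScalar() > 0;
                            Console.WriteLine("{0}: color exists = {1}", products.GetString(1), hasColor);
                        }
                    }
                }
            }
        }
    }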

    Read the article

  • How to model a mutually exclusive relationship in sql server

    - by littlechris
    Hi, I have to add functionality to an existing application and I've run into a data situation that I'm not sure how to model. I am being restricted to the creation of new tables and code. If I need to alter the existing structure I think my client may reject the proposal... although if it's the only way to get it right, this is what I will have to do. I have an Item table that can be linked to any number of tables, and these tables may increase over time. The Item can only be linked to one other table, but the record in the other table may have many items linked to it. Examples of the tables/entities being linked to are "Person", "Vehicle", "Building", "Office". These are all separate tables. Examples of Items are "Pen", "Stapler", "Cushion", "Tyre", "A4 Paper", "Plastic Bag", "Poster", "Decoration". For instance, a "Poster" may be allocated to a "Person" or "Office" or "Building". In the future, if they add a "Conference Room" table, it may also be added to that. My initial thoughts are:

    Item { ID, Name }
    LinkedItem { ItemID, LinkedToTableName, LinkedToID }

    The LinkedToTableName field will then allow me to identify the correct table to link to in my code. I'm not overly happy with this solution, but I can't quite think of anything else. Please help! :) Thanks!

    Read the article

  • read the currency format

    - by user485783
    Perhaps for some people this is easy, but I want to learn. There are 2 currency formats. The first currency format is 1,123,123.12, which may appear as $1,123,123.12 or 1,123,123.12€. The second currency format is 1.123.123,12, which may appear as $1.123.123,12 or 1.123.123,12€. So the difference is the placement of dots and commas. The value comes in as $this->value('one of the currency formats inserted here'), e.g. $this->value('$1,123,123.12'); or $this->value('1.123.123,12€');. What I want to know is the code for:

    if (first currency format) { use blah .. blah .. }
    elseif (second currency format) { use blah .. blah .. }
    else { // unsupported format }

    So how should the code identify whether the input is in the first currency format or the second currency format? Thanks for your pointers and ideas.

    UPDATED: I apologize for the mistake in giving examples. When I tried to test the code I was a little confused because it seemed not to work, so I changed my previous parts, so that $value['amount'] may use the first currency format 1,123,123.12 (which may appear as $1,123,123.12 or 1,123,123.12€) or the second currency format 1.123.123,12 (which may appear as $1.123.123,12 or 1.123.123,12€). Then $value['amount'] will be identified first, with code like the following conditional:

    class curr_format {
        private bla...bla..1
        private bla...bla..2
        var etc..

        public function curr_format ($bla..., $and_bla..) {
            // then make the conditional here
            if (first currency format) { // use blah .. blah ..
            } elseif (second currency format) { // use blah .. blah ..
            } else { // unsupported format
            }
            // another codes..
        }
    }

    At the end, it would be used like:

    $identify = new curr_format();
    echo $identify->curr_format($value['amount'], $else_statement);
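    The question is asked in PHP, but the detection idea itself is language-agnostic: whichever of '.' and ',' appears last in the string is the decimal separator. A rough sketch of that idea, written here in C# purely for illustration (the class and method names are mine):

    using System;

    class CurrencyFormatDetector
    {
        // "dot-decimal" for the 1,123,123.12 style, "comma-decimal" for the 1.123.123,12 style.
        static string Detect(string value)
        {
            string trimmed = value.Trim().TrimStart('$').TrimEnd('€');

            int lastDot = trimmed.LastIndexOf('.');
            int lastComma = trimmed.LastIndexOf(',');

            if (lastDot > lastComma) return "dot-decimal";    // first format
            if (lastComma > lastDot) return "comma-decimal";  // second format
            return "unsupported format";                      // no separator at all - ambiguous
        }

        static void Main()
        {
            Console.WriteLine(Detect("$1,123,123.12")); // dot-decimal
            Console.WriteLine(Detect("1.123.123,12€")); // comma-decimal
        }
    }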

    Read the article

  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and determine from the output whether or not a given test passed. I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:

    vim-unit:
    - purports "To provide vim scripts with a simple unit testing framework and tools"
    - first and only version (v0.1) was released in 2004
    - documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished"

    unit-test.vim:
    - This one also seems pretty experimental, and may not be particularly reliable. May have been abandoned or back-shelved:
    - last commit was in 2009-11 (6 months ago)
    - No tagged revisions have been created (i.e. no releases)

    So information from people who are using one of those two existing modules, and/or links to other, more clearly usable, options, is very welcome.

    Read the article

  • How to Format time in Yahoo PIpes to get the minutes past..??

    - by Nok Imchen
    I fetched the tweets from the Twitter API using Yahoo Pipes. Now, Twitter gives the time in this format: "Thu May 27 19:56:21 +0000 2010". I want to know how many minutes have passed since "Thu May 27 19:56:21 +0000 2010". How can I format "Thu May 27 19:56:21 +0000 2010" in order to get the minutes passed? In other words, I want to subtract "Thu May 27 19:56:21 +0000 2010" from "now" and then convert the result into minutes. Please help me...
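    The question is about doing this inside Yahoo Pipes, but just to make the underlying arithmetic concrete, here is the same calculation sketched in C# (the re-ordering trick and the assumption that Twitter's offset is always +0000 are mine):

    using System;
    using System.Globalization;

    class MinutesSince
    {
        static void Main()
        {
            string raw = "Thu May 27 19:56:21 +0000 2010";

            // Re-order the pieces into "May 27 2010 19:56:21" so ParseExact doesn't have to
            // deal with the "+0000" token (Twitter reports its timestamps in UTC).
            string[] p = raw.Split(' ');
            string rebuilt = string.Format("{0} {1} {2} {3}", p[1], p[2], p[5], p[3]);

            DateTime postedUtc = DateTime.ParseExact(rebuilt, "MMM dd yyyy HH:mm:ss",
                CultureInfo.InvariantCulture,
                DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);

            double minutesAgo = (DateTime.UtcNow - postedUtc).TotalMinutes;
            Console.WriteLine("Posted {0:F0} minutes ago", minutesAgo);
        }
    }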

    Read the article

  • How can i run code on the client side from a browser?

    - by acidzombie24
    With LLVM and Silverlight this may be possible now (or it may be possible with Flash). I'd like the user to select a file and then do the following things:

    1) Hash it with MD5 and SHA1
    2) If it is an archive, check if an exe is in it
    3) If it is an archive, check if it is password protected

    The first is to see if the user has uploaded it already (today, yesterday, last month), the 2nd to prevent viruses. The 3rd I should be fine without, but if I decide to not allow protected archives I can warn before the user uploads it. How may I do this through the browser?
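    For reference, step 1 on its own is straightforward in the full .NET framework (sketch below, file name invented); whether MD5 is available in the reduced Silverlight class library is a separate thing to check before relying on it in the browser:

    using System;
    using System.IO;
    using System.Security.Cryptography;

    class HashDemo
    {
        static void Main()
        {
            using (var stream = File.OpenRead("upload.zip"))
            using (var md5 = MD5.Create())
            using (var sha1 = SHA1.Create())
            {
                string md5Hex = BitConverter.ToString(md5.ComputeHash(stream)).Replace("-", "");
                stream.Position = 0; // rewind before hashing a second time
                string sha1Hex = BitConverter.ToString(sha1.ComputeHash(stream)).Replace("-", "");

                Console.WriteLine("MD5:  {0}", md5Hex);
                Console.WriteLine("SHA1: {0}", sha1Hex);
            }
        }
    }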

    Read the article

  • Simple regular expression for decimal numbers?

    - by finch
    I know this may be the simplest question ever asked on Stack Overflow, but what is the regular expression for a decimal with a precision of 2?

    Valid examples: 123.12, 2, 56754, 92929292929292.12, 0.21, 3.1
    Invalid examples: 12.1232, 2.23332, e666.76

    Sorry for the lame question, but for the life of me I haven't been able to find anyone that can help! The decimal part may be optional, and integers may also be included.
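    One pattern that fits the listed examples (digits, then an optional decimal point with at most two digits) is ^\d+(\.\d{1,2})?$; a quick C# check, offered as a sketch rather than the definitive answer:

    using System;
    using System.Text.RegularExpressions;

    class DecimalRegexDemo
    {
        static void Main()
        {
            var pattern = new Regex(@"^\d+(\.\d{1,2})?$");

            string[] samples = { "123.12", "2", "56754", "0.21", "3.1", "12.1232", "2.23332", "e666.76" };
            foreach (var s in samples)
                Console.WriteLine("{0} -> {1}", s, pattern.IsMatch(s)); // first five true, last three false
        }
    }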

    Read the article

  • access dropdown control from the master page to the content page using asp.net

    - by Isha Jain
    I have a master page and content pages. In the master page I have a textbox and a dropdown. The values in the dropdown may vary according to the content page, e.g. for one content page the dropdown may contain BranchName, City, Address, and for another content page under the same master page the dropdown may have values like ContactNumber, EmailID, etc. So please help me: how can I bind that dropdown from my content page? Thanks.
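    One common pattern (the names below are assumptions, not from the question) is to expose the dropdown through a public property on the master page and bind it from each content page:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    // Master page code-behind: expose the control declared in the .master markup.
    public partial class SiteMaster : MasterPage
    {
        public DropDownList FilterDropDown
        {
            get { return ddlFilter; } // ddlFilter is the DropDownList defined in Site.master
        }
    }

    // Content page code-behind; assumes <%@ MasterType VirtualPath="~/Site.master" %> is declared
    // in the .aspx so that Master is strongly typed.
    public partial class BranchPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                Master.FilterDropDown.Items.Clear();
                Master.FilterDropDown.Items.Add("BranchName");
                Master.FilterDropDown.Items.Add("City");
                Master.FilterDropDown.Items.Add("Address");
            }
        }
    }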

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >