Search Results

Search found 68614 results on 2745 pages for 'full set arguments'.


  • Fast check if an object will be successfully instantiated in PHP?

    - by Gremo
    How can I check if an object will be successfully instantiated with the given argument, without actually creating the instance? Actually I'm only checking (didn't test this code, but should work fine...) the number of required parameters, ignoring types: // Filter definition and arguments as per configuration $filter = $container->getDefinition($serviceId); $args = $activeFilters[$filterName]; // Check number of required arguments vs arguments in config $constructor = $reflector->getConstructor(); $numRequired = $constructor->getNumberOfRequiredParameters(); $numSpecified = is_array($args) ? count($args) : 1; if($numRequired < $numSpecified) { throw new InvalidFilterDefinitionException( $serviceId, $numRequired, $numSpecified ); }
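
    One way to make the check self-contained (a hedged sketch; the class name would come from the service definition, e.g. $filter->getClass() in Symfony's container, and the helper name below is illustrative):

        <?php
        // Returns true when $args could satisfy the constructor, without instantiating.
        function constructorAcceptsArgs($className, array $args)
        {
            $reflector   = new ReflectionClass($className);
            $constructor = $reflector->getConstructor();
            if ($constructor === null) {
                return count($args) === 0;   // no constructor: only zero arguments fit
            }
            $required = $constructor->getNumberOfRequiredParameters();
            $total    = $constructor->getNumberOfParameters();
            // enough values for the required parameters, and no more than it accepts
            return count($args) >= $required && count($args) <= $total;
        }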

    Read the article

  • Entity framework 4 many-to-many insertion?

    - by Saxman
    Hi all, I'm not very familiar with the many-to-many insertion process using Entity Framework 4, POCO. I have a blog with 3 tables: Post, Comment, and Tag. A Post can have many Tags and a Tag can be in many Posts. Here are the Post and Tag models: public class Tag { public int Id { get; set; } [Required] [StringLength(25, ErrorMessage = "Tag name can't exceed 25 characters.")] public string Name { get; set; } public virtual ICollection<Post> Posts { get; set; } } public class Post { public int Id { get; set; } [Required] [StringLength(512, ErrorMessage = "Title can't exceed 512 characters")] public string Title { get; set; } [Required] [AllowHtml] public string Content { get; set; } public string FriendlyUrl { get; set; } public DateTime PostedDate { get; set; } public bool IsActive { get; set; } public virtual ICollection<Comment> Comments { get; set; } public virtual ICollection<Tag> Tags { get; set; } } Now when I'm adding a new post, I'm not sure what would be the right way to do. I'm thinking that I'll have a textbox where I can select multiple tags for that post (this part is already done), in my controller, I will check to see if the tag is already exists or not, if not, then I will insert the new tag. But I'm not even sure based on the models that I've created for EF, will they create a PostsTags table, or they are creating just a Tags and a Posts table and links between the two? How would I insert the new Post and set the tags to that post? Is it just newPost.Tags = Tags (where Tags are the one that got selected, do I even need to check to see if they already exists?), and then something like _post.Add(newPost);? Thanks.
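
    If the database is generated from this model, EF creates a join table for the Post-Tag many-to-many relationship by convention (plus Posts and Tags tables); you never map the join table yourself. A hedged sketch of the insert, assuming a DbContext-style context with Posts and Tags sets, a System.Linq using, and a list of tag names from the textbox (names here are illustrative):

        using (var db = new BlogContext())
        {
            var post = new Post
            {
                Title = title,
                Content = content,
                PostedDate = DateTime.UtcNow,
                IsActive = true,
                Tags = new List<Tag>()          // initialise the navigation collection
            };

            foreach (var name in selectedTagNames)
            {
                // reuse an existing tag if there is one, otherwise create it
                var tag = db.Tags.FirstOrDefault(t => t.Name == name)
                          ?? new Tag { Name = name };
                post.Tags.Add(tag);
            }

            db.Posts.Add(post);
            db.SaveChanges();                   // EF fills the join table here
        }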

    Read the article

  • Null reference but it's not?

    - by Clint Davis
    This is related to my previous question but it is a different problem. I have two classes: Server and Database. Public Class Server Private _name As String Public Property Name() As String Get Return _name End Get Set(ByVal value As String) _name = value End Set End Property Private _databases As List(Of Database) Public Property Databases() As List(Of Database) Get Return _databases End Get Set(ByVal value As List(Of Database)) _databases = value End Set End Property Public Sub LoadTables() Dim db As New Database(Me) db.Name = "test" Databases.Add(db) End Sub End Class Public Class Database Private _server As Server Private _name As String Public Property Name() As String Get Return _name End Get Set(ByVal value As String) _name = value End Set End Property Public Property Server() As Server Get Return _server End Get Set(ByVal value As Server) _server = value End Set End Property Public Sub New(ByVal ser As Server) Server = ser End Sub End Class Fairly simple. I use like this: Dim s As New Server s.Name = "Test" s.LoadTables() The problem is in the LoadTables in the Server class. When it hits Databases.Add(db) it gives me a NullReference error (Object reference not set). I don't understand how it is getting that, all the objects are set. Any ideas? Thanks.
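
    The null reference is almost certainly the Databases list itself: the backing field _databases is never given a List instance, so LoadTables ends up calling .Add on Nothing. A small sketch of one way to fix it, in the Server class:

        ' initialise the backing field when it is declared...
        Private _databases As New List(Of Database)

        ' ...or create the list in a constructor instead:
        Public Sub New()
            _databases = New List(Of Database)
        End Sub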

    Read the article

  • capistrano initial deployment

    - by Richard G
    I'm trying to set up Capistrano to deploy to an AWS box. This is the first time I've tried to set this up, so please bear with me. Could someone take a look at this and let me know if you can solve this error? The output below is the deploy.rb file, and it's output when it runs. set :application, "apparel1" set :repository, "git://github.com/rgilling/GroceryRun.git" set :scm, :git set :user, "ubuntu" set :scm_passphrase, "pre5ence" # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none` ssh_options[:keys] = ["/Users/rgilling/Documents/Projects/Apparel1/abesakey.pem"] ssh_options[:forward_agent] = true set :location, "ec2-107-22-27-42.compute-1.amazonaws.com" role :web, location # Your HTTP server, Apache/etc role :app, location # This may be the same as your `Web` server role :db, location, :primary => true # This is where Rails migrations will run set :deploy_to, "/var/www/#{application}" set :deploy_via, :remote_cache set :use_sudo, true # if you want to clean up old releases on each deploy uncomment this: # after "deploy:restart", "deploy:cleanup" # if you're still using the script/reaper helper you will need # these http://github.com/rails/irs_process_scripts # If you are using Passenger mod_rails uncomment this: namespace :deploy do task :start do ; end task :stop do ; end task :restart, :roles => :app, :except => { :no_release => true } do run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}" end end Then the execution results in this permission error. I think I"ve set up the SSH etc. correctly... updating the cached checkout on all servers executing locally: "git ls-remote git://github.com/rgilling/GroceryRun.git HEAD" command finished in 1294ms * executing "if [ -d /var/www/apparel1/shared/cached-copy ]; then cd /var/www/apparel1/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard f35dc5868b52649eea86816d536d5db8c915856e && git clean -q -d -x -f; else git clone -q git://github.com/rgilling/GroceryRun.git /var/www/apparel1/shared/cached-copy && cd /var/www/apparel1/shared/cached-copy && git checkout -q -b deploy f35dc5868b52649eea86816d536d5db8c915856e; fi" servers: ["ec2-107-22-27-42.compute-1.amazonaws.com"] [ec2-107-22-27-42.compute-1.amazonaws.com] executing command ** **[ec2-107-22-27-42.compute-1.amazonaws.com :: err] error: cannot open .git/FETCH_HEAD: Permission denied**
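
    The failure is on the remote side: the git process running as the deploy user cannot write to the cached checkout under /var/www/apparel1, so this is a permissions problem rather than an SSH one. A hedged fix is to give the ubuntu user ownership of the deploy path on the EC2 host before deploying again (paths and user taken from the deploy.rb above):

        # run once on ec2-107-22-27-42.compute-1.amazonaws.com
        sudo mkdir -p /var/www/apparel1
        sudo chown -R ubuntu:ubuntu /var/www/apparel1
        # then, from the workstation: cap deploy:setup && cap deploy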

    Read the article

  • How to show tab character while using expandtab setting?

    - by Plugawy
    In my .gvimrc I have following lines: set listchars=tab:\.\ ,trail:- set softtabstop=2 set shiftwidth=2 set tabstop=2 set expandtab When I change last line to set noexpandtab the indents can be seen and marked with . Is there a way to make vim treat expanded tabs like "normal" tab so that list option works as expected?
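
    With expandtab on there are no real tab characters left for the tab: item to mark, so listchars cannot show them. One hedged option, assuming a newer Vim that supports the lead: item in 'listchars' (added in the 8.2 patch series), is to mark leading spaces instead:

        set list
        set listchars=lead:.,trail:-
        " older Vims can approximate this by highlighting leading spaces:
        " highlight link LeadingSpace Search
        " match LeadingSpace /^ \+/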

    Read the article

  • C++: Trouble with dependent types in templates

    - by Rosarch
    I'm having trouble with templates and dependent types: namespace Utils { void PrintLine(const string& line, int tabLevel = 0); string getTabs(int tabLevel); template<class result_t, class Predicate> set<result_t> findAll_if(typename set<result_t>::iterator begin, set<result_t>::iterator end, Predicate pred) // warning C4346 { set<result_t> result; return findAll_if_rec(begin, end, pred, result); } } namespace detail { template<class result_t, class Predicate> set<result_t> findAll_if_rec(set<result_t>::iterator begin, set<result_t>::iterator end, Predicate pred, set<result_t> result) { typename set<result_t>::iterator nextResultElem = find_if(begin, end, pred); if (nextResultElem == end) { return result; } result.add(*nextResultElem); return findAll_if_rec(++nextResultElem, end, pred, result); } } Compiler complaints, from the location noted above: warning C4346: 'std::set<result_t>::iterator' : dependent name is not a type. prefix with 'typename' to indicate a type error C2061: syntax error : identifier 'iterator' What am I doing wrong?
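
    The warning says it directly: every dependent type needs the typename keyword, so the second iterator parameter (and the ones in findAll_if_rec) must be prefixed too. A corrected sketch of the two signatures (note also that std::set has insert(), not add()):

        template <class result_t, class Predicate>
        std::set<result_t> findAll_if(typename std::set<result_t>::iterator begin,
                                      typename std::set<result_t>::iterator end,
                                      Predicate pred);

        template <class result_t, class Predicate>
        std::set<result_t> findAll_if_rec(typename std::set<result_t>::iterator begin,
                                          typename std::set<result_t>::iterator end,
                                          Predicate pred,
                                          std::set<result_t> result);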

    Read the article

  • Getting req.params in order in Express JS

    - by Adam Terlson
    In Express, is there a way to get the arguments passed from the matching route in the order they are defined in the route? I want to be able to apply all the params from the route to another function. The catch is that those parameters are not known up front, so I cannot refer to each parameter by name explicitly. app.get(':first/:second/:third', function (req) { output.apply(this, req.mysteryOrderedArrayOfParams); // Does this exist? }); function output() { for(var i = 0; i < arguments.length; i++) { console.log(arguments[i]); } } Call on GET: "/foo/bar/baz" Desired Output (in this order): foo bar baz
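
    req.params is keyed by name rather than position, so one hedged approach is to keep the route pattern in a variable and derive the order from it (plain JavaScript, no undocumented Express internals assumed):

        var pattern = '/:first/:second/:third';
        app.get(pattern, function (req, res) {
          var ordered = pattern.match(/:(\w+)/g).map(function (p) {
            return req.params[p.slice(1)];   // strip the leading ':'
          });
          output.apply(this, ordered);       // foo, bar, baz for GET /foo/bar/baz
          res.end();
        });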

    Read the article

  • Extract substructure from a text file using bash or python

    - by Werner
    Hi, I have a huge text file, which follows the structure:

        SET
        TAG1
        ...
        ...
        SET
        ...
        SET
        TAG2
        ...
        ...
        SET
        ...
        ...

    I would like to extract for a specific TAG (i.e. TAG54) its individual "substructure", which would be:

        SET
        TAG54
        ...
        ...
        SET

    Each substructure, for a given TAG_i, always contains: first line: SET, second line: TAG_i (in this case TAG54), an arbitrary number of lines, last line: SET. I wonder what would be the best way to do this, whether in bash or python, so for a given TAG, one can "extract" this substructure. Thanks
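
    A hedged Python sketch (file path and tag are passed on the command line; it assumes the markers are exactly "SET" on their own lines, as described, and reads the whole file into memory -- a streaming version is straightforward if that is too much for a huge file):

        #!/usr/bin/env python
        import sys

        def extract(path, tag):
            """Return the block starting with 'SET' followed by `tag`, up to the next 'SET'."""
            lines = open(path).read().splitlines()
            for i in range(len(lines) - 1):
                if lines[i] == "SET" and lines[i + 1] == tag:
                    end = i + 2
                    while end < len(lines) and lines[end] != "SET":
                        end += 1
                    return "\n".join(lines[i:end + 1])
            return None

        if __name__ == "__main__":
            block = extract(sys.argv[1], sys.argv[2])   # e.g. extract.py data.txt TAG54
            if block is not None:
                print(block)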

    Read the article

  • func_get_args detect context

    - by Steve
    I have a script where it accepts a varying number of arguments. I want to use func_get_args to perform operations on said arguments. If I have one function like this: function Something() { foreach(func_get_args($this) as $functions) { // Do something } // Return.. } I want to be able to call this function in, for example, another function to add/save entries. The add/save function would have arguments 'title', 'description' etc.. I basically want to know if there is a way to detect the context of a function call. Can I pass something to func_get_args that will let it know that its called in a certain function? So if I do: function Save($title, $desc) { $vars = $this->Something(); } I want $vars to contain $title and $desc after modifying them.
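
    func_get_args() takes no parameters (passing $this to it is an error) and only sees the arguments of the function it is called in, so Something() cannot read Save()'s $title and $desc unless they are forwarded. A hedged sketch of the two usual workarounds, forwarding the values and (optionally) inspecting the call stack just to learn the caller's name:

        <?php
        class Entry
        {
            public function Save($title, $desc)
            {
                // forward the values explicitly; Something() cannot see them otherwise
                $vars = $this->Something($title, $desc);
            }

            public function Something()
            {
                // optional: which method called us? (constant exists since PHP 5.3.6)
                $trace  = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS);
                $caller = isset($trace[1]['function']) ? $trace[1]['function'] : null;

                $args = func_get_args();   // the arguments Something() itself received
                foreach ($args as $arg) {
                    // ... operate on each forwarded value ...
                }
                return $args;
            }
        }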

    Read the article

  • Error in SQL. Can not find it.

    - by kmsboy
    Error in SQL. Can not find it. DECLARE @year VARCHAR (4), @month VARCHAR (2), @day VARCHAR (2), @weekday VARCHAR (2), @hour VARCHAR (2), @archivePath VARCHAR (128), @archiveName VARCHAR (128), @archiveFullName VARCHAR (128) SET @year = CAST(DATEPART(yyyy, GETDATE()) AS VARCHAR) SET @month = CAST(DATEPART(mm, GETDATE()) AS VARCHAR) SET @day = CAST(DATEPART(dd, GETDATE()) AS VARCHAR) SET @weekday = CAST(DATEPART (dw, GETDATE()) AS VARCHAR) SET @hour = CAST(DATEPART (hh, GETDATE()) AS VARCHAR) SET @archivePath = 'd:\1c_new\backupdb\' SET @archiveName = 'TransactionLog_' + @year + '_' + @month + '_' + @day + '_' + @hour + '.bak' SET @archiveFullName = @archivePath + @archiveName BACKUP LOG [xxx] TO DISK = @archiveFullName WITH INIT , NOUNLOAD , NAME = N'?????????? ??? ??????????', SKIP , STATS = 10, DESCRIPTION = N'?????????? ??? ??????????', NOFORMAT
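
    Without the exact error message it is hard to say which statement fails; a hedged first step is to print what the script actually builds before blaming the BACKUP itself, and to check the target folder exists on the server:

        -- show the assembled values; note DATEPART is not zero-padded (May gives '5', not '05')
        PRINT @year + '-' + @month + '-' + @day + ' hour ' + @hour;
        PRINT @archiveFullName;
        -- also confirm d:\1c_new\backupdb\ exists on the machine running SQL Server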

    Read the article

  • mysqldb python escaping ? or %s?

    - by asldkncvas
    Dear Everyone, I am currently using mysqldb. What is the correct way to escape strings in mysqldb arguments? Note that E = lambda x: x.encode('utf-8') 1) so my connection is set with charset='utf8'. These are the errors I am getting for these arguments: w1, w2 = u'??', u'??' 1) self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2))) ret = self.cur.execute("SELECT dist FROM distance WHERE w1=? AND w2=?", (E(w1), E(w2)) ) File "build/bdist.linux-i686/egg/MySQLdb/cursors.py", line 158, in execute TypeError: not all arguments converted during string formatting 2) self.cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (E(w1), E(w2))) This works fine, but when w1 or w2 has \ inside, then the escaping obviously failed. I personally know that %s is not a good method to pass in arguments due to injection attacks etc.
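
    MySQLdb uses the %s placeholder (paramstyle "format"); the ? style belongs to other drivers such as sqlite3, which is why variant 1 fails. Passed as a separate tuple, the values are escaped by the driver, backslashes included, and with charset='utf8' plus use_unicode you can hand it unicode objects directly, so the E() encode step is unnecessary. A hedged sketch (connection details are placeholders):

        # -*- coding: utf-8 -*-
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                               db="mydb", charset="utf8", use_unicode=True)
        cur = conn.cursor()

        w1, w2 = u"...", u"..."   # unicode values, no manual .encode() needed
        cur.execute("SELECT dist FROM distance WHERE w1=%s AND w2=%s", (w1, w2))
        row = cur.fetchone()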

    Read the article

  • SmartAssembly Support: How to change the maps folder

    - by Bart Read
    If you've set up SmartAssembly to store error reports in a SQL Server database, you'll also have specified a folder for the map files that are used to de-obfuscate error reports (see Figure 1). Whilst you can change the database easily enough you can't change the map folder path via the UI - if you click on it, it'll just open the folder in Explorer - but never fear, you can change it manually and fortunately it's not that difficult. (If you want to get to these settings click the Tools > Options link on the left-hand side of the SmartAssembly main window.)   Figure 1. Error reports database settings in SmartAssembly. The folder path is actually stored in the database, so you just need to open up SQL Server Management Studio, connect to the SQL Server where your error reports database is stored, then open a new query on the SmartAssembly database by right-clicking on it in the Object Explorer, then clicking New Query (see figure 2).     Figure 2. Opening a new query against the SmartAssembly error reports database in SQL Server. Now execute the following SQL query in the new query window: SELECT * FROM dbo.Information You should find that you get a result set rather like that shown in figure 3. You can see that the map folder path is stored in the MapFolderNetworkPath column.   Figure 3. Contents of the dbo.Information table, showing the map folder path I set in SmartAssembly. All I need to do to change this is execute the following SQL: UPDATE dbo.Information SET MapFolderNetworkPath = '\\UNCPATHTONEWFOLDER' WHERE MapFolderNetworkPath = '\\dev-ltbart\SAMaps' This will change the map folder path to whatever I supply in the SET clause. Once you've done this, you can verify the change by executing the following again: SELECT * FROM dbo.Information You should find the result set contains the new path you've set.

    Read the article

  • Additional options in MDL

    - by Jane Zhang
        The Metadata Loader(MDL) enables you to populate a new repository as well as transfer, update, or restore a backup of existing repository metadata. It consists of two utilities: metadata export and metadata import. The export utility extracts metadata objects from a repository and writes the information into a file. The import utility reads the metadata information from an exported file and inserts the metadata objects into a repository.      While the Design Client provides an intuitive UI that helps you perform the most commonly used export and import tasks, OMBPlus scripting enables you to specify some additional options, and manage a control file that allows you to perform more specialized export and import tasks. Is it possible to utilize these options in MDL from Design Client? This article will tell you how to achieve it.      A property file named mdl.properties is used to configure the additional options. It stores options in name/value pairs. This file can be created and placed under the directory <owb installation path>/owb/bin/admin/. Below we will introduce the options that can be specified in the mdl.properties file. 1. DEFAULTDIRECTORY     When we open a Metadata Export/Import dialog in Design Client, a default directory is provided for MDL file and log file. For MDL Export, the default directory is <owb installation path>/owb/bin/. As for MDL Import, the default directory is <owb installation path>/owb/mdl/. It may not be the one you would want to use as a default. You can specify the option DEFAULTDIRECTORY in the mdl.properties file to set your own default directory for MDL Export/Import, for example, DEFAULTDIRECOTRY=/tmp/     In this example, the default directory is set to /tmp/. Be sure the value ends with a file separator since it represents a directory. In Windows, the file separator is “\”. In linux, the file separator is “/”. 2. MDLTRACEFILE     Sometimes we would like to trace the whole process of MDL Export/Import, and get detailed information about operations to help developers or supports troubleshooting. To turn on MDL trace, set the option MDLTRACEFILE in the mdl.properties file. MDLTRACEFILE=/tmp/mdl.trc    The right side of the equals sign is to specify the name of the file for MDL trace information to be written. If no path is specified, the file will be placed under directory <owb installation path>/owb/bin/admin/. However, the trace file may be large if the MDL file contains a large number of metadata objects, so please use this option sparingly. 3. CONTROLFILE       We can use a control file to specify how objects are imported or exported. We can set an option called CONTROLFILE in the mdl.properties file, so the control file can also be utilized in Design Client, for example, CONTROLFILE=/tmp/mdl_control_file.ctl     The control file stores options in name/value pairs. When using control file, be sure the file exists, otherwise an exception java.lang.Exception: CNV0002-0031(ERROR): Cannot find specified file will be thrown out during MDL Export/Import.      Next we will introduce some options specified in control file. ZIPFILEFORMAT     By default, MDL exports objects into a zip format file. This zip file has an .mdl extension and contains two files. For example, you export the repository metadata into a file called projects.mdl. When you unzip this MDL file, you obtain two files. The file projects.mdx contains the repository objects. The file mdlcatalog.xml contains internal information about the MDL XML file. 
Another choice is to combine these two files into one unzip text format file when doing MDL exporting.    In OMBPlus command related to MDL, there is an option called FILE_FORMAT which is used to specify the file format for the exported file. Its acceptable values are ZIP or TEXT. When the value TEXT is selected, the exported file is in text format, for example, OMBEXPORT MDL_FILE '/tmp/options_file_format_test.mdl' FILE_FORMAT TEXT FROM PROJECT 'MY_PROJECT'    How to achieve this via Design Client when doing an MDL exporting? Here we have another option called ZIPFILEFORMAT which has the same function as the FILE_FORMAT. The difference is the acceptable values for ZIPFILEFORMAT are Y or N. When the value is set to N, the exported file is in text format, otherwise it is in zip file format. LOGMESSAGELEVEL     Whenever you export or import repository metadata, MDL writes diagnostic and statistical information to a log file. Their are 3 types of status messages: Informational, Warning and Error. By default, the log file includes all types of message. Sometimes, user may only care about one type of messages, for example, they would like only error messages written to the log file. In order to achieve this, we can set an option called LOGMESSAGELEVEL in control file. The acceptable values for LOGMESSAGELEVEL are ALL, WARNING and ERROR. ALL: If the option LOGMESSAGELEVEL is set to ALL, all types of messages (Informational, Warning and Error) will be written into the log file. WARNING: If the option LOGMESSAGELEVEL is set to WARNING, only warning messages will be written into log file. ERROR: If the option LOGMESSAGELEVEL is set to ERROR, only error messages will be written into log file. UPDATEPROJECTATTRIBUTES, UPDATEMODULEATTRIBUTES      These two options are used to decide whether updating the attributes of projects/modules. The options work when projects/modules being imported already exist in repository and we use update metadata mode or replace metadata mode to do the MDL import. The acceptable values for these two options are Y or N. If the value is set to Y, the attributes of projects/modules will be updated, otherwise not.      Next, let’s give an example to see how these options take effect in MDL. 1. First of all, create the property file mdl.properties under the directory <owb installation path>/owb/bin/admin/. 2. Specify the options in the mdl.properties file, see the following screenshot. 3. Create the control file mdl_control_file.ctl under the directory /tmp/. Set the following options in control file. 4. Log into the OWB Design Client. 5. Create an Oracle module named ORA_MOD_1 under the project MY_PROJECT, then export the project MY_PROJECT into file my_project.mdl. 6. Check the trace file mdl.trc under the directory /tmp/. In this file, we can see very detail information for the above export task. 7. Check the exported MDL file. The file my_project.mdl is in text format. Opening the file, you can see the content of the file directly. It concats the file my_project.mdx and mdlcatalog.xml. 8. Modify the project MY_PROJECT and Oracle module ORA_MOD_1, add descriptions for them separately. Delete the location created in step 5. 9. Import the MDL file my_project.mdl. From the Metadata Import dialog, we can see the default directory for MDL file and log file has been changed to /tmp/. Here we use update metadata mode, match by names to do the importing. 10. After importing, check the description of the project MY_PROJECT, we can see the description is still there. 
But the description of the Oracle module ORA_MOD_1 has gone. That is because we set the option UPDATEPROJECTATTRIBUTES to N, and set the option UPDATEMODULEATTRIBUTES to Y. 11. Check the log file; it only contains warning messages because the log message level is set to WARNING. For more details about the 3 types of status messages, see Oracle® Warehouse Builder Installation and Administration Guide 11g Release 2.
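
    Pulling the walkthrough together, the two files could look like this (values as used in the example above; adjust the paths to your environment):

        mdl.properties (in <owb installation path>/owb/bin/admin/):

            DEFAULTDIRECTORY=/tmp/
            MDLTRACEFILE=/tmp/mdl.trc
            CONTROLFILE=/tmp/mdl_control_file.ctl

        /tmp/mdl_control_file.ctl:

            ZIPFILEFORMAT=N
            LOGMESSAGELEVEL=WARNING
            UPDATEPROJECTATTRIBUTES=N
            UPDATEMODULEATTRIBUTES=Y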

    Read the article

  • Scripting with the Sun ZFS Storage 7000 Appliance

    - by Geoff Ongley
    The Sun ZFS Storage 7000 appliance has a user friendly and easy to understand graphical web based interface we call the "BUI" or "Browser User Interface".This interface is very useful for many tasks, but in some cases a script (or workflow) may be more appropriate, such as:Repetitive tasksTasks which work on (or obtain information about) a large number of shares or usersTasks which are triggered by an alert threshold (workflows)Tasks where you want a only very basic input, but a consistent output (workflows)The appliance scripting language is based on ECMAscript 3 (close to javascript). I'm not going to cover ECMAscript 3 in great depth (I'm far from an expert here), but I would like to show you some neat things you can do with the appliance, to get you started based on what I have found from my own playing around.I'm making the assumption you have some sort of programming background, and understand variables, arrays, functions to some extent - but of course if something is not clear, please let me know so I can fix it up or clarify it.Variable Declarations and ArraysVariablesECMAScript is a dynamically and weakly typed language. If you don't know what that means, google is your friend - but at a high level it means we can just declare variables with no specific type and on the fly.For example, I can declare a variable and use it straight away in the middle of my code, for example:projects=list();Which makes projects an array of values that are returned from the list(); function (which is usable in most contexts). With this kind of variable, I can do things like:projects.length (this property on array tells you how many objects are in it, good for for loops etc). Alternatively, I could say:projects=3;and now projects is just a simple number.Should we declare variables like this so loosely? In my opinion, the answer is no - I feel it is a better practice to declare variables you are going to use, before you use them - and given them an initial value. You can do so as follows:var myVariable=0;To demonstrate the ability to just randomly assign and change the type of variables, you can create a simple script at the cli as follows (bold for input):fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("Number of projects is: %d\n",projects.length);("." to run)> projects=152;("." to run)> printf("Value of the projects variable as an integer is now: %d\n",projects);("." to run)> .Number of projects is: 7Value of the projects variable as an integer is now: 152You can also confirm this behaviour by checking the typeof variable we are dealing with:fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> projects=152;("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> .var projects is of type objectvar projects is of type numberArraysSo you likely noticed that we have already touched on arrays, as the list(); (in the shares context) stored an array into the 'projects' variable.But what if you want to declare your own array? Easy! This is very similar to Java and other languages, we just instantiate a brand new "Array" object using the keyword new:var myArray = new Array();will create an array called "myArray".A quick example:fishy10:> script("." to run)> testArray = new Array();("." to run)> testArray[0]="This";("." 
to run)> testArray[1]="is";("." to run)> testArray[2]="just";("." to run)> testArray[3]="a";("." to run)> testArray[4]="test";("." to run)> for (i=0; i < testArray.length; i++)("." to run)> {("." to run)>    printf("Array element %d is %s\n",i,testArray[i]);("." to run)> }("." to run)> .Array element 0 is ThisArray element 1 is isArray element 2 is justArray element 3 is aArray element 4 is testWorking With LoopsFor LoopFor loops are very similar to those you will see in C, java and several other languages. One of the key differences here is, as you were made aware earlier, we can be a bit more sloppy with our variable declarations.The general way you would likely use a for loop is as follows:for (variable; test-case; modifier for variable){}For example, you may wish to declare a variable i as 0; and a MAX_ITERATIONS variable to determine how many times this loop should repeat:var i=0;var MAX_ITERATIONS=10;And then, use this variable to be tested against some case existing (has i reached MAX_ITERATIONS? - if not, increment i using i++);for (i=0; i < MAX_ITERATIONS; i++){ // some work to do}So lets run something like this on the appliance:fishy10:> script("." to run)> var i=0;("." to run)> var MAX_ITERATIONS=10;("." to run)> for (i=0; i < MAX_ITERATIONS; i++)("." to run)> {("." to run)>    printf("The number is %d\n",i);("." to run)> }("." to run)> .The number is 0The number is 1The number is 2The number is 3The number is 4The number is 5The number is 6The number is 7The number is 8The number is 9While LoopWhile loops again are very similar to other languages, we loop "while" a condition is met. For example:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." to run)> printf("Loop has ended and Counter is %d\n",counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Counter is 10Loop has ended and Counter is 11So what do we notice here? Something has actually gone wrong - counter will technically be 11 once the loop completes... Why is this?Well, if we have a loop like this, where the 'while' condition that will end the loop may be set based on some other condition(s) existing (such as the counter has reached 10) - we must ensure that we  terminate this iteration of the loop when the condition is met - otherwise the rest of the code will be followed which may not be desirable. In other words, like in other languages, we will only ever check the loop condition once we are ready to perform the next iteration, so any other code after we set "isTen" to be true, will still be executed as we can see it was above.We can avoid this by adding a break into our loop once we know we have set the condition - this will stop the rest of the logic being processed in this iteration (and as such, counter will not be incremented). So lets try that again:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>            break;("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." 
to run)> printf("Loop has ended and Counter is %d\n", counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Loop has ended and Counter is 10Much better!Methods to Obtain and Manipulate DataGet MethodThe get method allows you to get simple properties from an object, for example a quota from a user. The syntax is fairly simple:var myVariable=get('property');An example of where you may wish to use this, is when you are getting a bunch of information about a user (such as quota information when in a shares context):var users=list();for(k=0; k < users.length; k++){     user=users[k];     run('select ' + user);     var username=get('name');     var usage=get('usage');     var quota=get('quota');...Which you can then use to your advantage - to print or manipulate infomation (you could change a user's information with a set method, based on the information returned from the get method). The set method is explained next.Set MethodThe set method can be used in a simple manner, similar to get. The syntax for set is:set('property','value'); // where value is a string, if it was a number, you don't need quotesFor example, we could set the quota on a share as follows (first observing the initial value):fishy10:shares default/test-geoff> script("." to run)> var currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> set('quota','30G');("." to run)> run('commit');("." to run)> currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> .Current Quota is: 0Current Quota is: 32212254720This shows us using both the get and set methods as can be used in scripts, of course when only setting an individual share, the above is overkill - it would be much easier to set it manually at the cli using 'set quota=3G' and then 'commit'.List MethodThe list method can be very powerful, especially in more complex scripts which iterate over large amounts of data and manipulate it if so desired. The general way you will use list is as follows:var myVar=list();Which will make "myVar" an array, containing all the objects in the relevant context (this could be a list of users, shares, projects, etc). You can then gather or manipulate data very easily.We could list all the shares and mountpoints in a given project for example:fishy10:shares another-project> script("." to run)> var shares=list();("." to run)> for (i=0; i < shares.length; i++)("." to run)> {("." to run)>    run('select ' + shares[i]);("." to run)>    var mountpoint=get('mountpoint');("." to run)>    printf("Share %s discovered, has mountpoint %s\n",shares[i],mountpoint);("." to run)>    run('done');("." to run)> }("." to run)> .Share and-another discovered, has mountpoint /export/another-project/and-anotherShare another-share discovered, has mountpoint /export/another-project/another-shareShare bob discovered, has mountpoint /export/another-projectShare more-shares-for-all discovered, has mountpoint /export/another-project/more-shares-for-allShare yep discovered, has mountpoint /export/another-project/yepWriting More Complex and Re-Usable CodeFunctionsThe best way to be able to write more complex code is to use functions to split up repeatable or reusable sections of your code. 
This also makes your more complex code easier to read and understand for other programmers.We write functions as follows:function functionName(variable1,variable2,...,variableN){}For example, we could have a function that takes a project name as input, and lists shares for that project (assuming we're already in the 'project' context - context is important!):function getShares(proj){        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                printf("Discovered share: %s\n",shares[i]);        }        run('done'); // exit selected project}Commenting your CodeLike any other language, a large part of making it readable and understandable is to comment it. You can use the same comment style as in C and Java amongst other languages.In other words, sngle line comments use://at the beginning of the comment.Multi line comments use:/*at the beginning, and:*/ at the end.For example, here we will use both:fishy10:> script("." to run)> // This is a test comment("." to run)> printf("doing some work...\n");("." to run)> /* This is a multi-line("." to run)> comment which I will span across("." to run)> three lines in total */("." to run)> printf("doing some more work...\n");("." to run)> .doing some work...doing some more work...Your comments do not have to be on their own, they can begin (particularly with single line comments this is handy) at the end of a statement, for examplevar projects=list(); // The variable projects is an array containing all projects on the system.Try and Catch StatementsYou may be used to using try and catch statements in other languages, and they can (and should) be utilised in your code to catch expected or unexpected error conditions, that you do NOT wish to stop your code from executing (if you do not catch these errors, your script will exit!):try{  // do some work}catch(err) // Catch any error that could occur{ // do something here under the error condition}For example, you may wish to only execute some code if a context can be reached. 
If you can't perform certain actions under certain circumstances, that may be perfectly acceptable.For example if you want to test a condition that only makes sense when looking at a SMB/NFS share, but does not make sense when you hit an iscsi or FC LUN, you don't want to stop all processing of other shares you may not have covered yet.For example we may wish to obtain quota information on all shares for all users on a share (but this makes no sense for a LUN):function getShareQuota(shar) // Get quota for each user of this share{        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","----");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}Running Scripts Remotely over SSHAs you have likely noticed, writing and running scripts for all but the simplest jobs directly on the appliance is not going to be a lot of fun.There's a couple of choices on what you can do here:Create scripts on a remote system and run them over sshCreate scripts, wrapping them in workflow code, so they are stored on the appliance and can be triggered under certain circumstances (like a threshold being reached)We'll cover the first one here, and then cover workflows later on (as these are for the most part just scripts with some wrapper information around them).Creating a SSH Public/Private SSH Key PairLog on to your handy Solaris box (You wouldn't be using any other OS, right? :P) and use ssh-keygen to create a pair of ssh keys. I'm storing this separate to my normal key:[geoff@lightning ~] ssh-keygen -t rsa -b 1024Generating public/private rsa key pair.Enter file in which to save the key (/export/home/geoff/.ssh/id_rsa): /export/home/geoff/.ssh/nas_key_rsaEnter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /export/home/geoff/.ssh/nas_key_rsa.Your public key has been saved in /export/home/geoff/.ssh/nas_key_rsa.pub.The key fingerprint is:7f:3d:53:f0:2a:5e:8b:2d:94:2a:55:77:66:5c:9b:14 geoff@lightningInstalling the Public Key on the ApplianceOn your Solaris host, observe the public key:[geoff@lightning ~] cat .ssh/nas_key_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc= geoff@lightningNow, copy and paste everything after "ssh-rsa" and before "user@hostname" - in this case, geoff@lightning. 
That is, this bit:AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc=Logon to your appliance and get into the preferences -> keys area for this user (root):[geoff@lightning ~] ssh [email protected]: Last login: Mon Dec  6 17:13:28 2010 from 192.168.0.2fishy10:> configuration usersfishy10:configuration users> select rootfishy10:configuration users root> preferences fishy10:configuration users root preferences> keysOR do it all in one hit:fishy10:> configuration users select root preferences keysNow, we create a new public key that will be accepted for this user and set the type to RSA:fishy10:configuration users root preferences keys> createfishy10:configuration users root preferences key (uncommitted)> set type=RSASet the key itself using the string copied previously (between ssh-rsa and user@host), and set the key ensuring you put double quotes around it (eg. set key="<key>"):fishy10:configuration users root preferences key (uncommitted)> set key="AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc="Now set the comment for this key (do not use spaces):fishy10:configuration users root preferences key (uncommitted)> set comment="LightningRSAKey" Commit the new key:fishy10:configuration users root preferences key (uncommitted)> commitVerify the key is there:fishy10:configuration users root preferences keys> lsKeys:NAME     MODIFIED              TYPE   COMMENT                                  key-000  2010-10-25 20:56:42   RSA    cycloneRSAKey                           key-001  2010-12-6 17:44:53    RSA    LightningRSAKey                         As you can see, we now have my new key, and a previous key I have created on this appliance.Running your Script over SSH from a Remote SystemHere I have created a basic test script, and saved it as test.ecma3:[geoff@lightning ~] cat test.ecma3 script// This is a test script, By Geoff Ongley 2010.printf("Testing script remotely over ssh\n");.Now, we can run this script remotely with our keyless login:[geoff@lightning ~] ssh -i .ssh/nas_key_rsa root@fishy10 < test.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Testing script remotely over sshPutting it Together - An Example Completed Quota Gathering ScriptSo now we have a lot of the basics to creating a script, let us do something useful, like, find out how much every user is using, on every share on the system (you will recognise some of the code from my previous examples): script/************************************** Quick and Dirty Quota Check script ** Written By Geoff Ongley            ** 25 October 2010                    **************************************/function getUserQuota(usr){        run('select ' + usr);        var username=get('name');        var usage=get('usage');        var quota=get('quota');        var usage_g=usage / 1073741824; // convert bytes to gigabytes        var quota_g=quota / 1073741824; // as above        var quota_percent=0        if (quota > 0)        {                quota_percent=(usage / quota)*(100/1);        }        printf("    %20s        %8.2f           %8.2f           %d%%\n",username,usage_g,quota_g,quota_percent);        run('done'); // done with this selected user}function getShareQuota(shar){        //printf("DEBUG: selecting share 
%s\n", shar);        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","--------");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}function getShares(proj){        //printf("DEBUG: selecting project %s\n",proj);        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                share=shares[j];                getShareQuota(share);        }        run('done'); // exit selected project}function getProjects(){        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                var project=projects[i];                getShares(project);        }        run('done'); // exit context for all projects}getProjects();.Which can be run as follows, and will print information like this:[geoff@lightning ~/FISHWORKS_SCRIPTS] ssh -i ~/.ssh/nas_key_rsa root@fishy10 < get_quota_utilisation.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Project: another-project  SHARE: and-another                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                 geoffro            0.05            0.00        0%                   Billy            0.10            0.00        0%                    root            0.00            0.00        0%            testing-user            0.05            0.00        0%  SHARE: another-share                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                 geoffro            0.05            0.49        9%            testing-user            0.05            0.02        249%                   Billy            0.10            0.29        33%  SHARE: bob                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%  SHARE: more-shares-for-all                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                   Billy            0.10            0.00        0%            testing-user            0.05            0.00        0%                  nobody            0.00            0.00        0%                    root            0.00            0.00        0%                 geoffro            0.05            0.00        0%  SHARE: yep                Username           Usage(G)       Quota(G)    
Quota(%)                --------           --------       --------    --------                    root            0.00            0.00        0%                  nobody            0.00            0.00        0%                   Billy            0.10            0.01        999%            testing-user            0.05            0.49        9%                 geoffro            0.05            0.00        0%Project: default  SHARE: Test-LUN    SKIPPING Test-LUN - This is NOT a NFS or CIFs share, not looking for users  SHARE: test-geoff                Username           Usage(G)       Quota(G)    Quota(%)                --------           --------       --------    --------                 geoffro            0.05            0.00        0%                    root            3.18           10.00        31%                    uucp            0.00            0.00        0%                  nobody            0.59            0.49        119%^CKilled by signal 2.Creating a WorkflowWorkflows are scripts that we store on the appliance, and can have the script execute either on request (even from the BUI), or on an event such as a threshold being met.Workflow BasicsA workflow allows you to create a simple process that can be executed either via the BUI interface interactively, or by an alert being raised (for some threshold being reached, for example).The basics parameters you will have to set for your "workflow object" (notice you're creating a variable, that embodies ECMAScript) are as follows (parameters is optional):name: A name for this workflowdescription: A Description for the workflowparameters: A set of input parameters (useful when you need user input to execute the workflow)execute: The code, the script itself to execute, which will be function (parameters)With parameters, you can specify things like this (slightly modified sample taken from the System Administration Guide):          ...parameters:        variableParam1:         {                             label: 'Name of Share',                             type: 'String'                  },                  variableParam2                  {                             label: 'Share Size',                             type: 'size'                  },execute: ....};  Note the commas separating the sections of name, parameters, execute, and so on. This is important!Also - there is plenty of properties you can set on the parameters for your workflow, these are described in the Sun ZFS Storage System Administration Guide.Creating a Basic Workflow from a Basic ScriptTo make a basic script into a basic workflow, you need to wrap the following around your script to create a 'workflow' object:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function() {// (basic script goes here, minus the "script" at the beginning, and "." at the end)}};However, it appears (at least in my experience to date) that the workflow object may only be happy with one function in the execute parameter - either that or I'm doing something wrong. 
As far as I can tell, after execute: you should only have a basic one function context like so:execute: function(){}To deal with this, and to give an example similar to our script earlier, I have created another simple quota check, to show the same basic functionality, but in a workflow format:var workflow = {name: 'Get User Quotas',description: 'Displays Quota Utilisation for each user on each share',execute: function () {        run('cd /');        run('shares');        projects=list();                for (i=0; i < projects.length; i++)        {                run('select ' + projects[i]);                shares=list('filesystem');                printf("Project: %s\n", projects[i]);                for(j=0; j < shares.length; j++)                {                        run('select ' +shares[j]);                        try                        {                                run('users');                                printf("  SHARE: %s\n", shares[j]);                                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","-------");                                users=list();                                for(k=0; k < users.length; k++)                                {                                        run('select ' + users[k]);                                        username=get('name');                                        usage=get('usage');                                        quota=get('quota');                                        usage_g=usage / 1073741824; // convert bytes to gigabytes                                        quota_g=quota / 1073741824; // as above                                        quota_percent=0                                        if (quota > 0)                                        {                                                quota_percent=(usage / quota)*(100/1);                                        }                                        printf("    %20s        %8.2f   %8.2f   %d%%\n",username,usage_g,quota_g,quota_percent);                                        run('done');                                }                                run('done'); // exit user context                        }                        catch(err)                        {                        //      printf("    %s is a LUN, Not looking for users\n", shares[j]);                        }                        run('done'); // exit selected share context                }                run('done'); // exit project context        }        }};SummaryThe Sun ZFS Storage 7000 Appliance offers lots of different and interesting features to Sun/Oracle customers, including the world renowned Analytics. Hopefully the above will help you to think of new creative things you could be doing by taking advantage of one of the other neat features, the internal scripting engine!Some references are below to help you continue learning more, I'll update this post as I do the same! 
Enjoy...

More information on ECMAScript 3: A complete reference to ECMAScript 3, which will help you learn more of the details you may be interested in, can be found here: http://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf

More Information on Administering the Sun ZFS Storage 7000: The Sun ZFS Storage 7000 System Administration guide can be a useful reference point, and can be found here: http://wikis.sun.com/download/attachments/186238602/2010_Q3_2_ADMIN.pdf

    Read the article

  • Desktop Fun: Music Icon Packs

    - by Asian Angel
    If you really love music and want to liven up your desktop then get ready to create a desktop concert with our Music Icon Packs collection. Note: To customize the icon setup on your Windows 7 & Vista systems see our article here. Using Windows XP? We have you covered here. Sneak Preview For our desktop example we decided to go with a touch of anime musical fun. The icons used are from the Guitar Icons set shown below. Note: Wallpaper can be found here. An up close look at the icons that we used… Notes icon set 1 *.png format only Download Notes icon set 2 *.png format only Download Notes icon set 3 *.png format only Download Notes icon set 4 *.png format only Download Big Band Set 1 *.ico format only Download Big Band Set 2 *.ico format only Download Acoustic Guitars *.ico and .png format Download Acoustic Guitar *.ico format only Download Guitar Icons *.ico format only Download Guiter Skulll *.ico format only Download Dented Music *.ico format only Download Music Icons *.ico format only Download Ipod Mini *.ico format only Download MP3 Players Icons *.ico format only Download MusicPhones icon *.ico and .png format Download Wanting more great icon sets to look through? Be certain to visit our Desktop Fun section for more icon goodness!

    Read the article

  • codesniffer command not being recognized after several installs and upgrades

    - by numerical25
    I've tried to install codesniffer using pear but my mac is not recognizing the phpcs command. pear config Configuration (channel pear.php.net): ===================================== Auto-discover new Channels auto_discover 1 Default Channel default_channel pear.php.net HTTP Proxy Server Address http_proxy <not set> PEAR server [DEPRECATED] master_server pear.php.net Default Channel Mirror preferred_mirror pear.php.net Remote Configuration File remote_config <not set> PEAR executables directory bin_dir /usr/local/pear/bin PEAR documentation directory doc_dir /usr/local/pear/docs PHP extension directory ext_dir /opt/local/lib/php/extensions/no-debug-non-zts-20090626 PEAR directory php_dir /usr/local/pear/share/pear PEAR Installer cache directory cache_dir /private/tmp/pear/cache PEAR configuration file cfg_dir /usr/local/pear/cfg directory PEAR data directory data_dir /usr/local/pear/data PEAR Installer download download_dir /tmp/pear/install directory PHP CLI/CGI binary php_bin /opt/local/bin/php php.ini location php_ini /opt/local/etc/php5/php.ini-development --program-prefix passed to php_prefix <not set> PHP's ./configure --program-suffix passed to php_suffix <not set> PHP's ./configure PEAR Installer temp directory temp_dir /tmp/pear/install PEAR test directory test_dir /usr/local/pear/tests PEAR www files directory www_dir /usr/local/pear/www Cache TimeToLive cache_ttl 3600 Preferred Package State preferred_state stable Unix file mask umask 22 Debug Log Level verbose 1 PEAR password (for password <not set> maintainers) Signature Handling Program sig_bin /usr/local/bin/gpg Signature Key Directory sig_keydir /opt/local/etc/pearkeys Signature Key Id sig_keyid <not set> Package Signature Type sig_type gpg PEAR username (for username <not set> maintainers) User Configuration File Filename /Users/anthonygordon/.pearrc System Configuration File Filename /opt/local/etc/pear.conf i checked php_bin and the php executable is there. when i run phpcs i get command not found Ive tried to upgrade pear, uninstall reinstall code sniffer, everything. when i run installs list i get Pear List Package Version State Archive_Tar 1.3.10 stable Console_Getopt 1.3.1 stable PEAR 1.9.4 stable PHP_CodeSniffer 1.4.0 stable Structures_Graph 1.0.4 stable XML_Util 1.2.1 stable
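
    PHP_CodeSniffer itself installed fine; the shell simply cannot find phpcs because the PEAR executables directory shown in the config above (bin_dir = /usr/local/pear/bin) is not on the PATH. A hedged fix for bash on OS X (adjust the profile file to your shell):

        echo 'export PATH=/usr/local/pear/bin:$PATH' >> ~/.bash_profile
        source ~/.bash_profile
        phpcs --version    # should now resolve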

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in Asp.Net James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chalk full of sessions, presentations, workshops, conversations and, of course, questions.  One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL server, if it was the same as EF proper and if there was a way to override whatever the default was.  So we built a set of examples and figured out that the answer is simple: it depends.  (Download Samples) Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot.  If you delete a team, we need a couple of things to happen: The team, games and mascot will be deleted, and The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team. So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project. The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach First up let’s take a look at the DB first approach.  In the database, we defined 4 tables: Teams, Players, Mascots, and Games.  We also defined 4 foreign keys as follows: Players.Team_Id (NULL) –> Teams.Id Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE) Games.HomeTeam_Id (NOT NULL) –> Teams.Id Games.AwayTeam_Id (NOT NULL) –> Teams.Id Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team’s mascot when the team is deleted.  While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server.  Specifying a ON DELETE CASCADE on these foreign keys would cause a circular reference error: The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE – MSDN When we create an entity data model from the above database, we get the following:   In order to get the Games to be deleted when the Team is deleted, we need to specify End1 OnDelete action of Cascade for the HomeGames and AwayGames associations.   Now, we have an Entity Data Model that accomplishes what we set out to do.  One caveat here is that Entity Framework will only properly handle the cascading delete when the the players and games for the team have been loaded into memory.  For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.   Building The Same Sample with EF Code First Next, we're going to build up the model with the code first approach.  
EF Code First is defined on the Ado.Net team blog as such: Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database. Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship.  More details can be found on MSDN: If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship. If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null. The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First). Our DbContext consists of 4 DbSets: public DbSet<Team> Teams { get; set; } public DbSet<Player> Players { get; set; } public DbSet<Mascot> Mascots { get; set; } public DbSet<Game> Games { get; set; } When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted.  This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API. Data Annotations: public class Mascot { public int Id { get; set; } public string Name { get; set; } [Required] public virtual Team Team { get; set; } } Fluent API: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Mascot>().HasRequired(m => m.Team); } The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null.  No additional configuration is required, however all the Player entities must be loaded into memory for the cascading to work properly. The Game –> Team relationship causes some grief in our Code First example.  If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set On Cascade Delete for the HomeTeam and AwayTeam foreign keys when creating the database tables.  As we saw in the database first example, this causes a circular reference error and throws the following SqlException: Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. To solve this problem, we need to disable the default cascade delete behaviour using the fluent API: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Mascot>().HasRequired(m => m.Team); modelBuilder.Entity<Team>() .HasMany(t => t.HomeGames) .WithRequired(g => g.HomeTeam) .WillCascadeOnDelete(false); modelBuilder.Entity<Team>() .HasMany(t => t.AwayGames) .WithRequired(g => g.AwayTeam) .WillCascadeOnDelete(false); base.OnModelCreating(modelBuilder); } Unfortunately, this means we need to manually manage the cascade delete behaviour.  When a Team is deleted, we need to manually delete all the home and away Games for that Team. 
    foreach (Game awayGame in jets.AwayGames.ToArray())
    {
        entities.Games.Remove(awayGame);
    }
    foreach (Game homeGame in jets.HomeGames.ToArray())
    {
        entities.Games.Remove(homeGame);
    }
    entities.Teams.Remove(jets);
    entities.SaveChanges();

    Overriding the Defaults – When and How To As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API.  This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity.  More information is available on MSDN.   Going Further These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:

    - Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
    - Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
    - You can use Code Based migrations to manage database upgrades as your model continues to evolve (MSDN)

    Next Steps There's no time like the present to start the learning, so here's what you need to do:

    - Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
    - Build yourself a project to try these concepts out (or download the sample project)
    - Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).

    Good luck! About the Authors David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta where he builds commercial software products for the energy industry.  Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community giving presentations both locally and at conferences. David also serves as the President of the Calgary .NET User Group. James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.
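    As a quick illustration of the database initializer point above, here is a minimal sketch; LeagueContext is an assumed name for a DbContext like the one used in this post, and DropCreateDatabaseIfModelChanges is one of the built-in IDatabaseInitializer implementations.

        using System.Data.Entity;

        // Drop and recreate the database whenever the Code First model changes.
        // The initializer runs lazily, the first time the context is actually used.
        Database.SetInitializer(new DropCreateDatabaseIfModelChanges<LeagueContext>());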

    Read the article

  • What are the packages/libraries I should install before compiling Python from source?

    - by Lennart Regebro
    Once in a while I need to install a new Ubuntu (I use it both for desktops and servers) and I always forget a couple of libraries I should have installed before compiling, meaning I have to recompile, and it's getting annoying. So now I want to make a complete list of all library packages to install before compiling Python (and preferably how optional they are). This is the list I compiled with the help below and by digging in setup.py. It is complete for Ubuntu 10.04 and 11.04 at least:

    - build-essential (obviously)
    - libz-dev (also pretty common and essential)
    - libreadline-dev (or the Python prompt is crap)
    - libncursesw5-dev
    - libssl-dev
    - libgdbm-dev
    - libsqlite3-dev
    - libbz2-dev

    More optional:

    - tk-dev
    - libdb-dev

    Ubuntu has no packages for v1.8.5 of the Berkeley database, nor (for obvious reasons) the Sun audio hardware, so the bsddb185 and sunaudiodev modules will still not be built on Ubuntu, but all other modules are built with the above packages installed. Python 2.5 and Python 2.6 also need to have LDFLAGS set on Ubuntu 11.04 and later, to handle the new multi-arch layout:

        export LDFLAGS="-L/usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH)"

    For Python 2.6 and 2.7 you also need to explicitly enable SSL after running the ./configure script and before running make. In Modules/Setup there are lines like this:

        #SSL=/usr/local/ssl
        #_ssl _ssl.c \
        #       -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
        #       -L$(SSL)/lib -lssl -lcrypto

    Uncomment these lines and change the SSL variable to /usr:

        SSL=/usr
        _ssl _ssl.c \
                -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
                -L$(SSL)/lib -lssl -lcrypto

    Python 2.6 also needs Modules/_ssl.c modified to be used with OpenSSL 1.0, which is used in Ubuntu 11.10. At around line 300 you'll find this:

        else if (proto_version == PY_SSL_VERSION_SSL3)
            self->ctx = SSL_CTX_new(SSLv3_method()); /* Set up context */
        else if (proto_version == PY_SSL_VERSION_SSL2)
            self->ctx = SSL_CTX_new(SSLv2_method()); /* Set up context */
        else if (proto_version == PY_SSL_VERSION_SSL23)
            self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */

    Change that into:

        else if (proto_version == PY_SSL_VERSION_SSL3)
            self->ctx = SSL_CTX_new(SSLv3_method()); /* Set up context */
    #ifndef OPENSSL_NO_SSL2
        else if (proto_version == PY_SSL_VERSION_SSL2)
            self->ctx = SSL_CTX_new(SSLv2_method()); /* Set up context */
    #endif
        else if (proto_version == PY_SSL_VERSION_SSL23)
            self->ctx = SSL_CTX_new(SSLv23_method()); /* Set up context */

    This disables SSLv2 support, which apparently is gone in OpenSSL 1.0.
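    For reference, a typical end-to-end build on a fresh machine might look like the sketch below. It assumes Ubuntu 11.04 or later, an unpacked Python 2.7 source tree, and an installation prefix of /opt/python2.7 (the prefix is my choice, not part of the question); adjust the package list for older releases.

        # Install the build dependencies listed above
        sudo apt-get install build-essential libz-dev libreadline-dev libncursesw5-dev \
            libssl-dev libgdbm-dev libsqlite3-dev libbz2-dev tk-dev libdb-dev

        # Needed on Ubuntu 11.04+ because of the multi-arch library layout
        export LDFLAGS="-L/usr/lib/$(dpkg-architecture -qDEB_HOST_MULTIARCH)"

        # Configure, build and install without clobbering the system python
        ./configure --prefix=/opt/python2.7
        make
        sudo make altinstall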

    Read the article

  • Code contracts and inheritance

    - by DigiMortal
    In my last posting about code contracts I showed you how to force code contracts onto classes through interfaces. In this posting I will go a step further and show you how code contracts work in the case of inherited classes. First, let's take a look at my interface and its code contracts.

    [ContractClass(typeof(ProductContracts))]
    public interface IProduct
    {
        int Id { get; set; }
        string Name { get; set; }
        decimal Weight { get; set; }
        decimal Price { get; set; }
    }

    [ContractClassFor(typeof(IProduct))]
    internal sealed class ProductContracts : IProduct
    {
        private ProductContracts() { }

        int IProduct.Id
        {
            get { return default(int); }
            set { Contract.Requires(value > 0); }
        }

        string IProduct.Name
        {
            get { return default(string); }
            set
            {
                Contract.Requires(!string.IsNullOrWhiteSpace(value));
                Contract.Requires(value.Length <= 25);
            }
        }

        decimal IProduct.Weight
        {
            get { return default(decimal); }
            set
            {
                Contract.Requires(value > 3);
                Contract.Requires(value < 100);
            }
        }

        decimal IProduct.Price
        {
            get { return default(decimal); }
            set
            {
                Contract.Requires(value > 0);
                Contract.Requires(value < 100);
            }
        }
    }

    And here is the product class that inherits the IProduct interface.

    public class Product : IProduct
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public virtual decimal Weight { get; set; }
        public decimal Price { get; set; }
    }

    If we run this code and violate the code contract set on Id, we will get a ContractException.

    public class Program
    {
        static void Main(string[] args)
        {
            var product = new Product();
            product.Id = -100;
        }
    }

    Now let's make Product an abstract class and define a new class called Food that adds one more contract to the Weight property.

    public class Food : Product
    {
        public override decimal Weight
        {
            get { return base.Weight; }
            set
            {
                Contract.Requires(value > 1);
                Contract.Requires(value < 10);

                base.Weight = value;
            }
        }
    }

    Now we should have the following rules in place for Food:

    - weight must be greater than 1,
    - weight must be greater than 3,
    - weight must be less than 100,
    - weight must be less than 10.

    The interesting part is what happens when we try to violate the lower and upper limits of Food's weight. To see what happens, let's try to violate rules #2 and #4. Just comment out one of the last lines in the following method to test the other assignment.

    public class Program
    {
        static void Main(string[] args)
        {
            var food = new Food();
            food.Weight = 12;
            food.Weight = 2;
        }
    }

    And here are the results as pictures showing where the exceptions are thrown. Click on the images to see them at original size. Violation of lower limit. Violation of upper limit. As you can see, for both violations we get a ContractException as expected. Code contract inheritance is a powerful and at the same time dangerous feature. Although you can always narrow down the conditions that come from more general classes, it is possible to define impossible or conflicting contracts at different points in the inheritance hierarchy.

    Read the article

  • Languages like Tcl that have configurable syntax?

    - by boost
    I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl, namely add functionality in a way other than just adding functions. For example in Clipper/(x)Harbour there are commands #command, #translate, #xcommand and #xtranslate that allow things like this: #xcommand REPEAT; => DO WHILE .T. #xcommand UNTIL <cond>; => IF (<cond>); ;EXIT; ;ENDIF; ;ENDDO LOCAL n := 1 REPEAT n := n + 1 UNTIL n > 100 Similarly, in Tcl I'm doing proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} { set fromDate [clock scan $dat1] set toDate [clock scan $dat2] if {$slice eq "day"} then {set incrementor [expr 24 * 60]} if {$slice eq "hour"} then {set incrementor 60} set method DateRange puts "Scanning from [clock format $fromDate -format "%c"] to [clock format $toDate -format "%c"] by $slice" for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} { # ... } } process_range for "client" from "2013-10-18 00:00" to "2013-10-20 23:59" by day Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks.

    Read the article

  • How to configure Logitech Marble trackball

    - by user27189
    You can configure it using xinput. I tested this in 11.10 and it works very nicely. This selection is from the Ubuntu wiki. Avoid using HAL for this release because it has known issues. Edit $HOME/bin/trackball.sh using this command:

        gedit $HOME/bin/trackball.sh

    Then paste this into the file:

        #!/bin/bash
        dev="Logitech USB Trackball"
        we="Evdev Wheel Emulation"
        xinput set-int-prop "$dev" "$we Button" 8 8
        xinput set-int-prop "$dev" "$we" 8 1
        # xinput set-int-prop "$dev" "$we" 8 1
        # xinput set-int-prop "$dev" "$we Button" 8 9
        # xinput set-int-prop "$dev" "$we X Axis" 8 6 7
        # xinput set-int-prop "$dev" "$we Y Axis" 8 4 5
        # xinput set-int-prop "$dev" "Drag Lock Buttons" 8 8

    Make sure trackball.sh begins with #!/bin/bash. Make the script executable by running:

        chmod +x $HOME/bin/trackball.sh

    Add the following lines to $HOME/.bashrc (create the file if it is empty or missing) using gedit $HOME/.bashrc:

        xmodmap $HOME/.Xmodmap > /dev/null 2>&1
        $HOME/bin/trackball.sh

    Edit $HOME/.Xmodmap using gedit $HOME/.Xmodmap and add this line:

        pointer = 1 8 3 4 5 6 7 9

    Log out and back in and voilà!

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 using an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300") running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately. rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "Off":

        0: dell-wifi: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes
        1: dell-bluetooth: Bluetooth
                Soft blocked: yes
                Hard blocked: yes
        2: phy0: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes

    From some tests, I conclude WLAN is only activated when both dell-wifi and phy0 are unblocked by soft- and hardware. But I can only unblock dell-wifi after the hardware switch is set to "on". Procedure right from boot with the hardware switch set to "Off":

    - Soft-unblocking phy0 works as expected. Could be done by a start-up script.
    - sudo rfkill unblock 0: nothing happens. Soft block of dell-wifi not removed.
    - Set the hardware switch to "on": phy0 gets its hard block removed. Still no WLAN.
    - sudo rfkill unblock 0: both the soft and hard lock of dell-wifi are removed. WLAN is now active and works.
    - sudo rfkill block 0: only adds the soft block as expected. WLAN goes off again.

    So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script, which is a bit inconvenient. Does someone know a better solution? Maybe a daemon could help that listens to rfkill events to unblock dell-wifi after I have set the hardware switch to "on"? (sounds like another workaround) When booting with the hardware switch set to "On", nothing is blocked, neither hard nor soft.
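    On the daemon idea, here is a rough, untested sketch. It assumes the device indices from the rfkill list above (0 = dell-wifi, 2 = phy0) and uses rfkill event, which prints a line whenever any switch changes state; the exact wording of those lines can differ between rfkill versions, so the pattern match is only a starting point.

        #!/bin/bash
        # Watch rfkill events; when phy0's hard block clears (hardware switch flipped on),
        # lift the remaining soft block on dell-wifi. Must run as root to be allowed to unblock.
        rfkill event | while read -r line; do
            case "$line" in
                *"idx 2"*"hard 0"*)
                    rfkill unblock 0
                    ;;
            esac
        done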

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace, using the SLOW sql (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer:

        -- Open a new session in SQL*Plus
        -- Make sure you are using an updated PLAN_TABLE
        -- (this can be done by dropping it and recreating it by running SQL> @?/rdbms/admin/utlxplan.sql)
        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

        -- Open a second session in SQL*Plus
        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it.  Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.

    Read the article

  • Where we should put validation for domain model

    - by adisembiring
    I'm still looking for the best practice for domain model validation. Is it good to put the validation in the constructor of the domain model? My domain model validation example is as follows:

    public class Order
    {
        private readonly List<OrderLine> _lineItems;
        public virtual Customer Customer { get; private set; }
        public virtual DateTime OrderDate { get; private set; }
        public virtual decimal OrderTotal { get; private set; }

        public Order(Customer customer)
        {
            if (customer == null)
                throw new ArgumentException("Customer name must be defined");

            Customer = customer;
            OrderDate = DateTime.Now;
            _lineItems = new List<OrderLine>();
        }

        public void AddOrderLine //....

        public IEnumerable<OrderLine> OrderLines
        {
            get { return _lineItems; }
        }
    }

    public class OrderLine
    {
        public virtual Order Order { get; set; }
        public virtual Product Product { get; set; }
        public virtual int Quantity { get; set; }
        public virtual decimal UnitPrice { get; set; }

        public OrderLine(Order order, int quantity, Product product)
        {
            if (order == null)
                throw new ArgumentException("Order name must be defined");
            if (quantity <= 0)
                throw new ArgumentException("Quantity must be greater than zero");
            if (product == null)
                throw new ArgumentException("Product name must be defined");

            Order = order;
            Quantity = quantity;
            Product = product;
        }
    }

    Thanks for all of your suggestions.

    Read the article

  • Is there a setting in Exchange Server 2007 that we can set to make these headers propogate and be received by a POP/IMAP client?

    - by Ruruboy
    When using the EWS Managed API to send email via Exchange Server 2007, I noticed that MAPI clients like MS Outlook display all custom headers. But when I use POP3/IMAP clients like MS Outlook Express, these custom headers do not display in the opened message. Is there a setting in Exchange Server 2007 that we can set to make these custom headers propagate and be received by a POP/IMAP client? Also, why do the custom headers in the example below show up in lower case in MAPI clients like MS Outlook? Surprisingly, if we use the SmtpClient class to send email, these headers arrive with their case preserved. Example of headers received by a MAPI client like MS Outlook via Exchange Server 2007:

        Received: from EXMAILVS1.blabla.com ([192.168.191.136]) by cashtp02.blabla.com ([XXX.XXX.XX.XXX]) with mapi; Mon, 20 Dec 2010 12:17:05 -0800
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: asfsdf <[email protected]>
        To: asdsdf <[email protected]>
        Date: Mon, 20 Dec 2010 12:17:04 -0800
        Subject: Please send me this header
        Thread-Topic: Please send me this header
        Thread-Index: AQHLoILek7g5cFgHQU6lHHfiKkdUMg==
        Message-ID: <[email protected]>
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-Exchange-Organization-SCL: -1
        X-MS-TNEF-Correlator: <[email protected]>
        customheader1: hello ali
        customheader2: hello Jace
        MIME-Version: 1.0
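    For context, here is a minimal sketch of how such headers are typically attached with the EWS Managed API; this is an assumption about the sending code, since the question does not show it, and the server URL, credentials and recipient address are placeholders. Custom internet headers are written as extended properties in the InternetHeaders property set:

        // Assumes a reference to Microsoft.Exchange.WebServices.Data (EWS Managed API).
        var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
        service.Credentials = new WebCredentials("user", "password");
        service.Url = new Uri("https://exchange.example.com/EWS/Exchange.asmx");

        var message = new EmailMessage(service);
        message.Subject = "Please send me this header";
        message.ToRecipients.Add("recipient@example.com");

        // Each custom header becomes an extended property in the InternetHeaders property set.
        var customHeader1 = new ExtendedPropertyDefinition(
            DefaultExtendedPropertySet.InternetHeaders, "customheader1", MapiPropertyType.String);
        message.SetExtendedProperty(customHeader1, "hello ali");

        message.SendAndSaveCopy();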

    Read the article
