Search Results

Search found 11961 results on 479 pages for 'boot arguments'.

  • How to inject a "runtime" dependency like a logged in user which is not available at application boot time?

    - by Fabian
    I'm just not getting this: I use Gin in my Java GWT app to do DI. The login screen is integrated into the full application window. After the user has logged in, I want to inject the user object into other classes that I create, such as GUI presenters, so I believe I have some sort of runtime dependency. How do I do that? One solution I can think of is something like: class Presenter { @Inject Presenter(LoggedInUserFactory userFactory) { User user = userFactory.getLoggedInUser(); } } class LoggedInUserFactoryImpl { public static User user; User getLoggedInUser() { return user; } } So, when the user has successfully logged in and I have the object, I set the static property in LoggedInUserFactory; but this will only work if the Presenter is created after the user has logged in, which is not the case. Or should I use a global static registry? I just don't like the idea of having static dependencies in my classes. Any input is greatly appreciated.
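
    A hedged sketch of one more option (the wiring below is illustrative, not from the question): Gin, like Guice, can inject a Provider<User>, so the lookup happens lazily when the presenter actually needs the user rather than at construction time. That works even when the Presenter is built before login, as long as nothing calls get() until after login:

      import com.google.inject.Inject;
      import com.google.inject.Provider;

      class Presenter {
          private final Provider<User> userProvider;

          @Inject
          Presenter(Provider<User> userProvider) {
              // Nothing is resolved yet; the provider is only a handle.
              this.userProvider = userProvider;
          }

          void onUserAction() {
              User user = userProvider.get(); // resolved now, after login
              // ... use the user ...
          }
      }

    The binding behind the provider could still read from a holder that the login flow populates; the difference is that the static lookup is no longer baked into every consumer.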

    Read the article

  • Question about qsort in C++

    - by davit-datuashvili
    I have the following code in C++: #include <iostream> using namespace std; void qsort5(int a[],int n){ int i; int j; if (n<=1) return; for (i=1;i<n;i++) j=0; if (a[i]<a[0]) swap(++j,i,a); swap(0,j,a); qsort5(a,j); qsort(a+j+1,n-j-1); } int main() { return 0; } void swap(int i,int j,int a[]) { int t=a[i]; a[i]=a[j]; a[j]=t; } and I get these errors: 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(std::basic_string<_Elem,_Traits,_Alloc> &,std::basic_string<_Elem,_Traits,_Alloc> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\xstring(2203) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(std::pair<_Ty1,_Ty2> &,std::pair<_Ty1,_Ty2> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(76) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(13) : error C2780: 'void std::swap(_Ty &,_Ty &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(16) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(std::basic_string<_Elem,_Traits,_Alloc> &,std::basic_string<_Elem,_Traits,_Alloc> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\xstring(2203) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(std::pair<_Ty1,_Ty2> &,std::pair<_Ty1,_Ty2> &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(76) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(14) : error C2780: 'void std::swap(_Ty &,_Ty &)' : expects 2 arguments - 3 provided 1> c:\program files\microsoft visual studio 9.0\vc\include\utility(16) : see declaration of 'std::swap' 1>c:\users\dato\documents\visual studio 2008\projects\qsort5\qsort5\qsort5.cpp(16) : error C2661: 'qsort' : no overloaded function takes 2 arguments 1>Build log was saved at "file://c:\Users\dato\Documents\Visual Studio 2008\Projects\qsort5\qsort5\Debug\BuildLog.htm" Please help.
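
    For reference, a hedged sketch of a fix (assuming the intent is the classic first-element-pivot quicksort): the three-argument swap has no declaration visible at the call site, so overload resolution only finds the two-argument std::swap pulled in by using namespace std; the statement j=0 belongs before the loop rather than as the loop body; and the recursive call spells qsort (the C library function) instead of qsort5:

      #include <iostream>

      // Declared before use, and named so it cannot collide with std::swap.
      void swap3(int i, int j, int a[]) {
          int t = a[i];
          a[i] = a[j];
          a[j] = t;
      }

      void qsort5(int a[], int n) {
          if (n <= 1) return;
          int j = 0;                        // initialize once, before the loop
          for (int i = 1; i < n; i++) {     // braces: the if runs every pass
              if (a[i] < a[0])
                  swap3(++j, i, a);
          }
          swap3(0, j, a);
          qsort5(a, j);
          qsort5(a + j + 1, n - j - 1);     // was qsort(...), the C library call
      }

      int main() {
          int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
          qsort5(a, 8);
          for (int i = 0; i < 8; i++) std::cout << a[i] << ' ';
          std::cout << '\n';
          return 0;
      }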

    Read the article

  • How do I view the CMake command line statement that Qt Creator executes?

    - by Evan
    I'm attempting to debug a command-line CMake failure. The same CMake file works in Qt Creator, with the arguments shown in the Qt Creator window matching what I have entered on the command line. This makes me think Qt Creator is adding some extra arguments, which makes sense since the generator drop-down has several options that specify architecture and CMake version. Is there a way to get the exact CMake command that Qt Creator executed to produce the desired result, specifically the arguments passed to the CMake executable? I found one post that talks about inspecting the CMakeCache files to do some forensics, but that only proves there are differences; it doesn't quickly show me which arguments to change.
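
    Not a direct answer, but a hedged forensic shortcut while comparing the two builds (the build-directory names below are made up): every cache variable's final value lands in CMakeCache.txt, so diffing the two caches at least narrows the search to the variables that actually differ:

      # bash: compare a Qt Creator build tree against a command-line build tree.
      # Sorting first keeps the diff stable regardless of entry order.
      diff <(sort qtcreator-build/CMakeCache.txt) <(sort cli-build/CMakeCache.txt)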

    Read the article

  • Trouble creating calendar in Google API via Coldfusion

    - by KingErroneous
    I am trying to create a calendar using the Google API, but it just returns the list of calendars in my account, exactly as if I had sent a GET request. Here is my code: <cfxml variable="locals.xml"> <cfoutput> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:gd="http://schemas.google.com/g/2005" xmlns:gCal="http://schemas.google.com/gCal/2005"> <title type="text">#arguments.argTitle#</title> <summary type="text">#arguments.argSummary#</summary> <cfif len(arguments.argTimezone)><gCal:timezone value="#arguments.argTimezone#"></gCal:timezone></cfif> <gCal:hidden value="false"></gCal:hidden> <gCal:accesslevel value="owner" /> <gCal:color value="#arguments.argColor#"></gCal:color> <gd:where rel='' label='' valueString='Oakland'></gd:where> </entry> </cfoutput> </cfxml> <cfhttp url="#variables.baseURL#/default/owncalendars/full" method="post" redirect="false" multiparttype="related" charset="utf-8"> <cfhttpparam type="header" name="Authorization" value="GoogleLogin auth=#getAuth(variables.serviceName)#"> <cfhttpparam type="header" name="Content-Type" value="application/atom+xml"> <cfhttpparam type="header" name="GData-Version" value="2"> <cfhttpparam type="body" value="#trim(locals.xml)#"> </cfhttp> Any help would be appreciated.

    Read the article

  • Specifying different initial values for fields in inherited models (django)

    - by Shawn Chin
    Question: What is the recommended way to specify an initial value for fields if one uses model inheritance and each child model needs different default values when rendering a ModelForm? Take, for example, the following models, where CompileCommand and TestCommand both need different initial values when rendered as a ModelForm. # ------ models.py class ShellCommand(models.Model): command = models.CharField(_("command"), max_length=100) arguments = models.CharField(_("arguments"), max_length=100) class CompileCommand(ShellCommand): # ... default command should be "make" class TestCommand(ShellCommand): # ... default: command = "make", arguments = "test" I am aware that one can use the initial={...} argument when instantiating the form; however, I would rather store the initial values within the context of the model (or at least within the associated ModelForm). My current approach What I'm doing at the moment is storing an initial-value dict within Meta and checking for it in my views. # ----- forms.py class CompileCommandForm(forms.ModelForm): class Meta: model = CompileCommand initial_values = {"command":"make"} class TestCommandForm(forms.ModelForm): class Meta: model = TestCommand initial_values = {"command":"make", "arguments":"test"} # ------ in views FORM_LOOKUP = { "compile": CompileCommandForm, "test": TestCommandForm } CmdForm = FORM_LOOKUP.get(command_type, None) # ... initial = getattr(CmdForm, "initial_values", {}) form = CmdForm(initial=initial) This feels too much like a hack. I am eager for a more generic / better way to achieve this. Suggestions appreciated. Other attempts I have toyed around with overriding the constructor for the submodels: class CompileCommand(ShellCommand): def __init__(self, *args, **kwargs): kwargs.setdefault('command', "make") super(CompileCommand, self).__init__(*args, **kwargs) and this works when I try to create an object from the shell: >>> c = CompileCommand(name="xyz") >>> c.save() <CompileCommand: 123> >>> c.command 'make' However, this does not set the default value when the associated ModelForm is rendered, which unfortunately is what I'm trying to achieve. Update 2 (looks promising) I now have the following in forms.py, which allows me to set Meta.default_initial_values without needing extra code in views. class ModelFormWithDefaults(forms.ModelForm): def __init__(self, *args, **kwargs): if hasattr(self.Meta, "default_initial_values"): kwargs.setdefault("initial", self.Meta.default_initial_values) super(ModelFormWithDefaults, self).__init__(*args, **kwargs) class TestCommandForm(ModelFormWithDefaults): class Meta: model = TestCommand default_initial_values = {"command":"make", "arguments":"test"}

    Read the article

  • What's a good way to provide additional decoration/metadata for Python function parameters?

    - by Will Dean
    We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment. We'd like to write fairly simple functions in Python which take a few arguments - these would be things like times, temperatures and positions. Different functions would take different arguments, and the main application would contain a user interface (something like a property grid) which allows the users to provide values for the Python function arguments. So, for example, function1 might take a time and a temperature, and function2 might take a position and a couple of times. We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find the list of functions in a module and (using inspect.getargspec) to get the list of arguments to each function. However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, its 'type' (a high-level type - time, temperature, etc., not a language-level type) and perhaps a 'friendly name' or description. So, the question is: what are good 'pythonic' ways of adding this sort of information to a function? The two possibilities I have thought of are: use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec); or invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata. Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
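
    A third pythonic candidate, sketched under the assumption that plain function attributes are acceptable (the param_meta decorator below is invented, not a standard convention): a decorator can attach a metadata dict directly to the function object, which keeps the descriptions next to the code and is trivial for the UI builder to read back:

      import inspect

      def param_meta(**meta):
          """Attach per-parameter metadata to the function object."""
          def decorate(func):
              func.param_meta = meta
              return func
          return decorate

      @param_meta(
          duration={"type": "time", "label": "Soak time"},
          temp={"type": "temperature", "label": "Target temperature"},
      )
      def function1(duration, temp):
          pass

      # The UI builder pairs the argument list with the metadata
      # (getargspec matches the question; newer Python has inspect.signature):
      for name in inspect.getargspec(function1).args:
          print(name, function1.param_meta.get(name, {}))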

    Read the article

  • Why is the order of evaluation of function arguments unspecified in C++?

    - by kunj2aan
    The standard leaves the order of evaluation of arguments open, with this line: "The order of evaluation of arguments is unspecified." What does "Better code can be generated in the absence of restrictions on expression evaluation order" imply? What would be the drawback of requiring all compilers to evaluate function arguments left to right, for example? What kinds of optimizations do compilers perform because of this unspecified behavior?
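
    A hedged illustration of why the latitude is observable at all: when the argument expressions share state, different evaluation orders give different results, and the standard lets each compiler pick whichever order suits its code generator (register pressure, pipelining, calling convention):

      #include <iostream>

      int next(int &counter) {
          return ++counter;
      }

      int pair_sum(int a, int b) {
          return a * 10 + b;
      }

      int main() {
          int counter = 0;
          // Unspecified: the two next(counter) calls may run in either
          // order, so this line may print 12 or 21.
          std::cout << pair_sum(next(counter), next(counter)) << '\n';
          return 0;
      }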

    Read the article

  • Exclude private properties from print_r of an object?

    - by Hailwood
    Basically I am using CodeIgniter, and the CodeIgniter base class is huge; when I print_r some of my objects they have the base class embedded inside them. This makes it a pain to get the information I actually wanted (the rest of the properties). So, I am wondering if there is a way I can hide or remove the base class object? I have tried clone $object; unset($object->ci); print_r($object); but of course the ci property is private. The actual function I am using for dumping is: /** * Outputs the given variables with formatting and location. Huge props * out to Phil Sturgeon for this one (http://philsturgeon.co.uk/blog/2010/09/power-dump-php-applications). * To use, pass in any number of variables as arguments. * Optionally pass in "true" as the final argument to kill the script after the dump. * * @return void */ function dump() { list($callee) = debug_backtrace(); $arguments = func_get_args(); $total_arguments = count($arguments); if (end($arguments) === true) $total_arguments--; echo '<fieldset style="background: #fefefe !important; border:2px red solid; padding:5px">'; echo '<legend style="background:lightgrey; padding:5px;">' . $callee['file'] . ' @ line: ' . $callee['line'] . '</legend><pre>'; $i = 0; foreach ($arguments as $argument) { //if the last argument is true we don't want to display it. if ($i == ($total_arguments) && $argument === true) break; echo '<br/><strong>Debug #' . (++$i) . ' of ' . $total_arguments . '</strong>: '; if ((is_array($argument) || is_object($argument)) && count($argument)) { print_r($argument); } else { var_dump($argument); } } echo '</pre>' . PHP_EOL; echo '</fieldset>' . PHP_EOL; //if the very last argument is "true" then die if (end($arguments) === true) die('Killing Script'); }
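
    A hedged sketch of one workaround (plain PHP, nothing CodeIgniter-specific; the function name is invented): casting an object to an array exposes every property, with private and protected keys carrying NUL-byte prefixes, so the non-public ones can be filtered out before printing:

      function print_r_public($object) {
          // An (array) cast exposes private props as "\0ClassName\0prop" and
          // protected ones as "\0*\0prop"; keep only keys without NUL bytes.
          $public = array();
          foreach ((array) $object as $key => $value) {
              if (strpos($key, "\0") === false) {
                  $public[$key] = $value;
              }
          }
          print_r($public); // the private ci base object no longer appears
      }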

    Read the article

  • Is there non-stretching space in LaTeX? Pictures in a good-looking grid.

    - by drasto
    I've created a LaTeX macro to typeset guitar chord diagrams (using the picture environment). Now I want diagrams of different chords to appear in a good-looking grid when typeset next to each other, as the picture shows: The picture. (On the picture: labeled "First", the bad layout of the diagrams; labeled "Second", the correct layout when there is an equal number of diagrams per line.) I'm using \hspace to make some skips between diagrams; otherwise they would be too near to each other. As you can see in the second case, when LaTeX arranges the pictures so that there is the same number of them in each line, it works. However, if there are fewer pictures in the last line, they become "shifted" to the right. I don't want this. I guess this is because LaTeX makes the spaces between diagrams in a full line a little longer so the line exactly fits the page width. How do I tell LaTeX not to resize the spaces created by \hspace? Or is there any other way? I guess I cannot use tables because I don't know how many diagrams will fit in one line... This is the current state of the code: \newcommand{\spaceForChord}{1.7cm} \newcommand{\chordChart}[1]{% %calculate dimensions xdim and ydim according to settings \begin{picture}(xdim, ydim){% %draw the diagram inside the defined area }% \hspace*{\spaceForChord}% \hspace*{-\xdim}% }% % end preamble and begin document \begin{document} First:\\* \\* \chordChart{...some arguments to change the diagram's look...} \chordChart{...some arguments to change the diagram's look...} \chordChart{...some arguments to change the diagram's look...} \chordChart{...some arguments to change the diagram's look...} \chordChart{...some arguments to change the diagram's look...} %...the above line is repeated 12 more times to produce the result shown in the picture \end{document} Thanks for any help.
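
    A hedged sketch of one likely cause and fix: \hspace itself is rigid, but the interword glue between the boxes stretches when TeX justifies the paragraph. Wrapping each diagram in a fixed-width \makebox and switching justification off keeps every cell the same width (the \chordCell name is invented here):

      {\raggedright % stop TeX from stretching spaces to justify each line
      \newcommand{\chordCell}[1]{\makebox[\spaceForChord][l]{#1}}% fixed-width cell
      \chordCell{\chordChart{...}}%
      \chordCell{\chordChart{...}}%
      \chordCell{\chordChart{...}}\par}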

    Read the article

  • How to implement tail calls in a custom VM

    - by DeadMG
    How can I implement tail calls in a custom virtual machine? I know that I need to pop off the original function's local stack, then its arguments, then push on the new arguments. But if I pop off the function's local stack, how am I supposed to push on the new arguments? They've just been popped off the stack.
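
    The usual trick is to evaluate the new arguments first, on top of the stack, and only then slide them down over the caller's frame; nothing is lost because the copy happens before the frame is reclaimed. A hedged C sketch, assuming an invented frame layout of [args | locals | temps] on an upward-growing operand stack:

      #include <string.h>

      typedef double Value;

      typedef struct {
          Value *sp;  /* top of the operand stack (grows upward) */
          Value *fp;  /* base of the current frame */
          int    pc;  /* instruction pointer */
      } VM;

      /* Tail call: the callee's arguments were just evaluated and sit on
         top of the stack. Copy them down onto the frame base, drop the
         old locals, and jump without pushing a return address. */
      static void tail_call(VM *vm, int new_argc, int target_pc) {
          Value *new_args = vm->sp - new_argc;
          memmove(vm->fp, new_args, new_argc * sizeof(Value));
          vm->sp = vm->fp + new_argc;  /* the old frame is gone */
          vm->pc = target_pc;          /* reuse the caller's frame slot */
      }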

    Read the article

  • Executing a process on Windows Server 2003 and IIS6 from code - permissions error

    - by kurupt_89
    I have a problem executing a process from our testing server. On my localhost, using Windows XP and IIS 5.1, I changed the machine.config file to have the line - I then changed the login for IIS to log on as the local system account and allowed the service to interact with the desktop. This fixed my problem executing a process from code on XP. When using the same method on Windows Server 2003 (using IIS6 isolation mode), the process does not get executed. Here is the code that executes the process (I have tested the inputs to IECapt on the command line and an image is generated): public static void GenerateImageToDisk(string ieCaptPath, string url, string path, int delay) { url = FixUrl(url); ieCaptPath = FixPath(ieCaptPath); string arguments = @"--url=""{0}"" --out=""{1}"" --min-width=0 --delay={2}"; arguments = string.Format(arguments, url, path, delay); ProcessStartInfo ieCaptProcessStartInfo = new ProcessStartInfo(ieCaptPath + "IECapt.exe"); ieCaptProcessStartInfo.RedirectStandardOutput = true; ieCaptProcessStartInfo.WindowStyle = ProcessWindowStyle.Hidden; ieCaptProcessStartInfo.UseShellExecute = false; ieCaptProcessStartInfo.Arguments = arguments; ieCaptProcessStartInfo.WorkingDirectory = ieCaptPath; Process ieCaptProcess = Process.Start(ieCaptProcessStartInfo); ieCaptProcess.WaitForExit(600000); ieCaptProcess.Close(); }

    Read the article

  • Interacting with dialog/whiptail at the early-boot rcX.d stage?

    - by nm
    Hi buddies, I'm developing an Ubuntu-based system, and I have one script in charge of the console GUI setup. It runs before the other (rcX.d) scripts start. Currently I have installed this script in rc2.d so it starts earlier than the others. But when it runs on a real machine, I can't input any keystrokes into "dialog --inputbox" or whiptail from the shell script. Oddly, it runs fine in my virtual machines (VirtualBox and VMware)! Can anybody help, or point me to any clues for overcoming this? Thanks
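
    One hedged thing to try, based on the usual cause of this symptom: scripts launched from rcX.d have no controlling terminal, so dialog never receives keystrokes. Explicitly wiring the script's stdio to the console often fixes it (the device may be /dev/tty1 instead on some setups):

      #!/bin/sh
      # rcX.d scripts run without a usable tty; attach ours to the console
      # so dialog can draw its UI and read keystrokes.
      exec < /dev/console > /dev/console
      dialog --inputbox "Hostname:" 8 40 2> /tmp/answer  # dialog writes the result on stderr
      HOSTNAME=$(cat /tmp/answer)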

    Read the article

  • Understanding "this" keyword

    - by Raffaele
    In this commit there is a change I cannot explain: deferred.done.apply( deferred, arguments ).fail.apply( deferred, arguments ); becomes deferred.done( arguments ).fail( arguments ); AFAIK, when you invoke a function as a member of some object, like obj.func(), inside the function this is bound to obj, so there would be no point in invoking the function through apply() just to bind this to obj. According to the comments, though, this was required because of an earlier $.Callbacks.add implementation. My doubt is not about jQuery, but about the JavaScript language itself: when you invoke a function like obj.func(), how can it be that inside func() the this keyword is not bound to obj?
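
    A short illustration of the language rule in isolation: this is bound at call time by how the function is called, not by where it was defined, so a method value detached from its object loses the binding:

      var obj = {
          func: function () { return this; }
      };

      obj.func() === obj;      // true: called through obj, so this === obj

      var f = obj.func;        // the same function value, detached
      f() === obj;             // false: this is the global object
                               // (or undefined in strict mode)

      setTimeout(obj.func, 0); // callbacks detach the same way:
                               // inside, this is not obj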

    Read the article

  • Need help writing an Ant target

    - by magic1234
    I am new to writing Ant targets. I have to pass 3 arguments from the command line to a Java program. All of them can be optional and can contain spaces. How can I do that with an Ant target? I tried using <arg line="..."/>, but it delimits the arguments by spaces, so if my first argument has a space, the word after the space is taken as the second argument. I also tried using <arg value="..."/>; it accepts spaces but makes the arguments mandatory. In my Java code I have to set default values for these arguments if any of them is missing. How can I do that?
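
    A hedged sketch of one common pattern (target, property, and class names invented): Ant properties are immutable, so a <property> inside the target only takes effect when the caller has not already supplied -Darg1=... on the command line, which gives you the defaults on the Ant side; <arg value> then passes each argument as a single token, spaces and all:

      <target name="run">
        <property name="arg1" value="default one"/>
        <property name="arg2" value="default two"/>
        <property name="arg3" value="default three"/>
        <java classname="com.example.Main"> <!-- classpath omitted for brevity -->
          <arg value="${arg1}"/>
          <arg value="${arg2}"/>
          <arg value="${arg3}"/>
        </java>
      </target>

    Invoked as ant run -Darg1="first argument with spaces", the other two arguments fall back to their defaults.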

    Read the article

  • Use of .apply() with 'new' operator. Is this possible?

    - by Premasagar
    In JavaScript, I want to create an object instance (via the new operator), but pass an arbitrary number of arguments to the constructor. Is this possible? What I want to do is something like this (but the code below does not work): function Something(){ // init stuff } function createSomething(){ return new Something.apply(null, arguments); } var s = createSomething(a,b,c); // 's' is an instance of Something The Answer From the responses here, it became clear that there's no built-in way to call .apply() with the new operator. However, people suggested a number of really interesting solutions to the problem. My preferred solution was this one from Matthew Crumley (I've modified it to pass the arguments object): var createSomething = (function() { function F(args) { return Something.apply(this, args); } F.prototype = Something.prototype; return function() { return new F(arguments); } })();

    Read the article

  • Fedora 17 keeps using the Fedora 16 kernel

    - by MTilsted
    I ran preupgrade to upgrade my Fedora 16 (x64) to Fedora 17, and it seemed to work fine. So I got the new GIMP 2.8, GCC 4.7.0 and so on. But the system keeps using the old kernel from fc16. uname -a gives me: Linux localhost.localdomain 3.3.6-3.fc16.x86_64 #1 SMP Wed May 16 21:43:01 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux The system downloaded the new kernel, so I have /boot/vmlinuz-3.3.7-1.fc17.x86_64 /boot/System.map-3.3.7-1.fc17.x86_64 /boot/initramfs-3.3.7-1.fc17.x86_64.img /boot/config-3.3.7-1.fc17.x86_64 But the system keeps booting the old fc16 kernel. If I look at my /boot/grub2/grub.cfg file, it looks like this: # # DO NOT EDIT THIS FILE # # It is automatically generated by grub2-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } set timeout=5 ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Fedora (3.3.6-3.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt2)' search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07 echo 'Loading Fedora (3.3.6-3.fc16.x86_64)' linux /vmlinuz-3.3.6-3.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8 echo 'Loading initial ramdisk ...' initrd /initramfs-3.3.6-3.fc16.x86_64.img } menuentry 'Fedora (3.3.5-2.fc16.x86_64)' --class fedora --class gnu-linux --class gnu --class os { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set root='(hd0,gpt2)' search --no-floppy --fs-uuid --set=root 3521a578-5829-4fb4-a485-8c097df77d07 echo 'Loading Fedora (3.3.5-2.fc16.x86_64)' linux /vmlinuz-3.3.5-2.fc16.x86_64 root=UUID=57459a16-97a0-46a4-8e71-cc3ec0ca4a3e ro KEYTABLE=dvorak rd.lvm=0 rd.dm=0 quiet SYSFONT=latarcyrheb-sun16 rhgb rd.md.uuid=60956781:734d95ba:424311e2:796702a7 rd.luks=0 LANG=en_US.UTF-8 echo 'Loading initial ramdisk ...' initrd /initramfs-3.3.5-2.fc16.x86_64.img } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### ### BEGIN /etc/grub.d/90_persistent ### ### END /etc/grub.d/90_persistent ### Anyone got a clue why it still only references the fc16 kernel, and how I can fix it? My system is using RAID1 on 2 disks, but /boot is not on RAID.
    The mount for /boot is: /dev/sda2 on /boot type ext2 (rw,relatime,seclabel,user_xattr,acl,barrier=1) And / (the only other filesystem I have) is mounted as /dev/md0 on / type ext4 (rw,relatime,seclabel,user_xattr,acl,barrier=1,data=ordered)

    Read the article

  • Windows 7 / Ubuntu Dual-Boot GRUB Problem

    - by Tek
    I'd like to first say ahead of time that I'm running a RAID-0 setup. First of all, I'm glad Ubuntu 9.10 installed flawlessly and detected my RAID-0 setup just fine. The issue I'm having now is that I already had Windows 7 installed and had made a small 12GB partition for Linux/swap. I grabbed EasyBCD 2.0 to edit the W7 bootloader and configured it to dual-boot with GRUB2, because before that it didn't even show the option for Ubuntu. The bootloader points to a file made in the Windows directory by EasyBCD called "C:\NST\AutoNeoGrub0.mbr", which is what I'm guessing GRUB is booting from. After that I got the option for booting Ubuntu. The problem is that it's sending me to the GRUB prompt (probably because it's pointing to \NST\AutoNeoGrub0.mbr?). At first I didn't know what to do, but I researched and found I have to type GRUB commands to manually boot into Ubuntu Linux. Ex: grub> root (hd0,4) grub> kernel /boot/vmlinuz-2.6... root=/dev/disk/by-uuid/24624-2424... grub> initrd /boot/initrd.img-2.6... grub> boot After all that, Ubuntu boots just fine, but how do I fix it permanently? Do I need to edit the bootloader manually (since EasyBCD "autoconfigures")? Some insight on this would rock! Also, it sucks to type the actual UUID since it's REALLY long. I tried getting the name of the drive via fdisk -l, but since it's RAID-0 I'm guessing I can't do that. How can I get a shorter name for the drive, like /dev/sda, /dev/sdb, etc.? I've also tried to update to the latest GRUB and I got this: Creating config file /etc/default/grub with new version Generating core.img error: cannot seek `/dev/sdc' error: cannot seek `/dev/sdc' grub-probe: error: no mapping exists for `nvidia_dbedfcca5' Auto-detection of a filesystem module failed. Please specify the module with the option `--modules' explicitly. dpkg: error processing grub-pc (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of grub2: grub2 depends on grub-pc; however: Package grub-pc is not configured yet. dpkg: error processing grub2 (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. E: Sub-process /usr/bin/dpkg returned an error code (1) I've also tried: b@dnb:~$ sudo update-grub error: cannot seek `/dev/sdc' error: cannot seek `/dev/sdc' Generating grub.cfg ... Found linux image: /boot/vmlinuz-2.6.31-14-generic Found initrd image: /boot/initrd.img-2.6.31-14-generic error: cannot seek `/dev/sdc' grub-probe: error: no mapping exists for `nvidia_dbedfcca5' error: cannot seek `/dev/sdc' grub-probe: error: no mapping exists for `nvidia_dbedfcca5' Found memtest86+ image: /boot/memtest86+.bin Found Windows 7 (loader) on /dev/mapper/nvidia_dbedfcca1 error: cannot seek `/dev/sdc' grub-probe: error: no mapping exists for `nvidia_dbedfcca1' done To no avail. Any idea what I can do to fix this mess? :( Edit: This is my disk configuration. b@dnb:~$ sudo df -l Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/nvidia_dbedfcca5 12302232 2744788 8932520 24% / udev 1030288 268 1030020 1% /dev none 1030288 964 1029324 1% /dev/shm none 1030288 92 1030196 1% /var/run none 1030288 0 1030288 0% /var/lock none 1030288 0 1030288 0% /lib/init/rw /dev/sr0 706532 706532 0 100% /media/cdrom0 Note: /dev/mapper/nvidia_dbedfcca5 is my Linux boot partition

    Read the article

  • Can't re-mount existing RAID10 on Ubuntu

    - by Zoran
    I saw similar questions, but didn't find a solution to my problem. After a power cut, one of my RAID10 arrays (there were 4 disks) appears to be malfunctioning. I made the array active, but cannot mount it; it is always the same error: mount: you must specify the filesystem type So, here is what I have when I type mdadm --detail /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Tue Sep 1 11:00:40 2009 Raid Level : raid10 Array Size : 1465148928 (1397.27 GiB 1500.31 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 4 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Jun 11 09:54:27 2012 State : clean, degraded Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : 1a02e789:c34377a1:2e29483d:f114274d Events : 0.166 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 0 0 1 removed 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde In /etc/mdadm/mdadm.conf I have # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # instruct the monitoring daemon where to send mail alerts MAILADDR root # definitions of existing MD arrays ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d # This file was auto-generated... So, my question is: how can I mount the md0 array (md1 has been mounted without problems) in order to preserve the existing data?
    One more thing: the fdisk -l command gives the following result: Disk /dev/sdb: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/sdb1 * 1 88217 708603021 83 Linux /dev/sdb2 88218 91201 23968980 5 Extended /dev/sdb5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdc: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x0008f8ae Device Boot Start End Blocks Id System /dev/sdc1 1 88217 708603021 83 Linux /dev/sdc2 88218 91201 23968980 5 Extended /dev/sdc5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdd: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x4be1abdb Device Boot Start End Blocks Id System Disk /dev/sde: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa4d5632e Device Boot Start End Blocks Id System Disk /dev/sdf: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/sdg: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/md1: 750.1 GB, 750156251136 bytes 2 heads, 4 sectors/track, 183143616 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite) Disk /dev/md0: 1500.3 GB, 1500312502272 bytes 255 heads, 63 sectors/track, 182402 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/md0p1 * 1 88217 708603021 83 Linux /dev/md0p2 88218 91201 23968980 5 Extended /dev/md0p5 ? 121767 155317 269488144 20 Unknown And one more thing: when using the mdadm --examine command, here is the result: mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d devices=/dev/sdf ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde md0 has 3 devices which are active. Can someone instruct me how to solve this issue? If possible, I would like to avoid removing the faulty HDD. Please advise.

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e.
    the root file system; does not contain any content not related to the zone; and can only be mounted by one Solaris instance at a time. Method Overview Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!") Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.) Install the zone on one system, onto shared storage. Boot the zone. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.) Shutdown the zone. Detach the zone from the original system. Attach the zone to its new "home" system. Boot the zone. The zone can be used normally, and even migrated back, or to a different system. Details The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone. The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk. Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest, and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
    Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

    Read the article

  • Creating a dynamic proxy generator with C# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism For the latest code go to http://rapidioc.codeplex.com/ Before getting too involved in generating the proxy, I thought it would be worthwhile going through the intended design; this is important, as the next step is to start creating the constructors for the proxy. Each proxy derives from a specified type The proxy has a corresponding constructor for each of the base type constructors The proxy has overrides for all methods and properties marked as Virtual on the base type For each overridden method, there is also a private method whose sole job is to call the base method. For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method. The following class diagram shows the main classes and interfaces involved in the interception process. I'll go through each of them to explain their place in the overall proxy. IProxy Interface The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy to be cast as an IProxy so that Interceptors can be added simply by calling its AddInterceptor method. This is done internally within the proxy building process, so the consumer of the API doesn't need knowledge of this. IInterceptor Interface The IInterceptor interface has one method: Handle. The Handle method accepts an IMethodInvocation parameter, which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the Handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section, MethodInvocation. IMethodInvocation Interface & MethodInvocation class The MethodInvocation will contain one main method and multiple helper properties. Continue Method The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple Interceptors, the next Interceptor's Handle method is called. Once all Interceptors' Handle methods have been called, the Continue method then calls the base class method. Properties The MethodInvocation will contain multiple helper properties, including at least the following: Method Name (Read Only) Method Arguments (Read and Write) Method Argument Types (Read Only) Method Result (Read and Write) – this property remains null if the method return type is void Target Object (Read Only) Return Type (Read Only) DefaultInterceptor class The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code: DefaultInterceptor namespace Rapid.DynamicProxy.Interception { /// <summary> /// Default interceptor for the proxy. /// </summary> /// <typeparam name="TBase">The base type.</typeparam> public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class { /// <summary> /// Handles the specified method invocation. /// </summary> /// <param name="methodInvocation">The method invocation.</param> public void Handle(IMethodInvocation<TBase> methodInvocation) { methodInvocation.Continue(); } } } This is automatically created in the proxy and is the first interceptor that each method override calls. Its sole function is to ensure that if no interceptors have been added, the base method is still called.
    Custom Interceptor Example A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Continue() call. This means that any overridden methods within the User class will run within a transaction scope. MyInterceptor public class MyInterceptor : IInterceptor<User<int, IRepository>> { public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation) { if (methodInvocation.Name == "set_FirstName") { Logger.Log("First name setting to: " + methodInvocation.Arguments[0]); } using (TransactionScope scope = new TransactionScope()) { methodInvocation.Continue(); scope.Complete(); // commit; without this the transaction rolls back } if (methodInvocation.Name == "set_FirstName") { Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]); } } } Overridden Method Example To show a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look something like the following (this is not real code but will look similar): set_FirstName public override void set_FirstName(string value) { set_FirstNameBaseMethodDelegate callBase = new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod); object[] arguments = new object[] { value }; IMethodInvocation<User<IRepository>> methodInvocation = new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors); this.Interceptors[0].Handle(methodInvocation); } As you can see, a delegate instance is created which calls a private method on the class; the private method calls the base method and would look like the following: calls base setter private void set_FirstNameProxyGetBaseMethod(string value) { base.set_FirstName(value); } The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data passed in when created, not illustrated here, including method info, return types, argument types etc.). The DefaultInterceptor's Handle method is called with the methodInvocation instance as its parameter. Obviously methods can have return values, ref and out parameters, etc.; in these cases the generated method override body will be slightly different from the above. I'll go into more detail on these aspects as we build them. Conclusion I hope this has been useful. I can't guarantee that the proxy will look exactly like the above, but at the moment this is pretty much what I intend to do. It's always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what's going on. Cheers, Sean.

    Read the article

  • Customize Team Build 2010 – Part 16: Specify the relative reference path

    In the series the following parts have been published: Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application Part 16: Specify the relative reference path. As I have already blogged about, it is not intuitive how to specify the paths where the build server has to look for references that are stored in Source Control. It is common practice to store 3rd-party libraries in Source Control, so they are available to everyone, everyone uses the same version of the libraries, and updating a library can be done centrally. In Team Build 2010 these paths are specified as a parameter for MSBuild. What we will do in this post is build the values for this parameter based on the values in an argument. You are by now pretty aware of how to customize the build template, so let's do the modifications another way. Instead of opening the xaml file in the workflow designer, we open it in the XML editor. You can open it in the XML editor either by selecting the Open with menu (see the context menu) or by choosing the View code option. To add this functionality we need to: Specify a new argument Add the argument to the metadata Build the absolute paths for the references and add these paths to the MSBuild arguments 1. Specify a new argument At the top of the document, locate the Members (which are the arguments) of the XAML and add the following line <x:Property Name="ReferencePaths" Type="InArgument(s:String[])" /> 2. Add the argument to the metadata Then locate the line <mtbw:ProcessParameterMetadataCollection> and paste the following line <mtbw:ProcessParameterMetadata Category="Misc" Description="The list of reference paths, relative to the root path in the Workspace mapping." DisplayName="Reference paths" ParameterName="ReferencePaths" /> 3. Build the absolute paths for the references and add these paths to the MSBuild arguments Now locate the place where the assignments are done to the variables used in the agent.
And add the following lines after the last Assign activity         <Sequence DisplayName="Initialize ReferencePath" sap:VirtualizedContainerService.HintSize="464,428">           <Sequence.Variables>             <Variable x:TypeArguments="x:String" Name="ReferencePathsArgument">               <Variable.Default>                 <Literal x:TypeArguments="x:String" Value="" />               </Variable.Default>             </Variable>           </Sequence.Variables>           <sap:WorkflowViewStateService.ViewState>             <scg:Dictionary x:TypeArguments="x:String, x:Object">               <x:Boolean x:Key="IsExpanded">True</x:Boolean>             </scg:Dictionary>           </sap:WorkflowViewStateService.ViewState>           <ForEach x:TypeArguments="x:String" DisplayName="Iterate through the paths" sap:VirtualizedContainerService.HintSize="287,206" mtbwt:BuildTrackingParticipant.Importance="Low" Values="[ReferencePaths]">             <ActivityAction x:TypeArguments="x:String">               <ActivityAction.Argument>                 <DelegateInArgument x:TypeArguments="x:String" Name="path" />               </ActivityAction.Argument>               <Assign x:TypeArguments="x:String" DisplayName="Build ReferencePath argument" sap:VirtualizedContainerService.HintSize="257,100" mtbwt:BuildTrackingParticipant.Importance="Low"  To="[ReferencePathsArgument]" Value="[If(String.IsNullOrEmpty(ReferencePathsArgument), &quot;&quot;, ReferencePathsArgument + &quot;;&quot;) + IO.Path.Combine(SourcesDirectory, path)]" />             </ActivityAction>           </ForEach>           <Assign DisplayName="Append the reference paths to the MSBuild Arguments" sap:VirtualizedContainerService.HintSize="287,58">             <Assign.To>               <OutArgument x:TypeArguments="x:String">[MSBuildArguments]</OutArgument>             </Assign.To>             <Assign.Value>               <InArgument x:TypeArguments="x:String">[String.Format("{0} /p:ReferencePath=""{1}""", MSBuildArguments, ReferencePathsArgument)]</InArgument>             </Assign.Value>           </Assign>         </Sequence> Now you can use the template to specify the paths relative to SourcesDirectory. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article
