Search Results

Search found 90770 results on 3631 pages for 'first time'. This is page 6.

  • Need leading zero for batch script using %time% variable

    - by Ira
    Hi, I came across a bug in my DOS script that uses date and time data for file naming. The problem is that I end up with a gap in the name because the %time% variable doesn't automatically provide a leading zero for hours < 10. Running echo %time% gives back: ' 9:29:17.88'. Does anyone know of a way to conditionally pad a leading zero to fix this? More info: my filename set command is: set logfile=C:\Temp\robolog_%date:~-4%%date:~4,2%%date:~7,2%_%time:~0,2%%time:~3,2%%time:~6,2%.log which ends up being: C:\Temp\robolog_20100602_ 93208.log (for 9:32 in the morning). This question is related to this one. Thanks
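
    A minimal sketch (not from the original post; the variable names are made up) of one common way to pad the hour in a batch file: grab the hour substring and, if it starts with a space, replace that space with a zero before building the file name.
        set hh=%time:~0,2%
        if "%hh:~0,1%"==" " set hh=0%hh:~1,1%
        set logfile=C:\Temp\robolog_%date:~-4%%date:~4,2%%date:~7,2%_%hh%%time:~3,2%%time:~6,2%.log
        echo %logfile%
    An equivalent trick is set hh=%time: =0%, which replaces every space in %time% with a zero before the substrings are taken.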

    Read the article

  • Config Time Service on Server 2008 DC using Group Policy Only

    - by Ed Fries
    I want to configure the Time Service using only GP in a Server 2008 R2 domain. I have created a GP as follows:
        Computer Config, Policies, Administrative Templates, System, Windows Time Policy:
            Global Configuration Settings - Enabled w/ default settings.
        Computer Config, Policies, Administrative Templates, System, Windows Time Policy, Time Providers:
            Configure Windows NTP Client - Enabled w/ default settings.
            Enable Windows NTP Client - Enabled w/ default settings.
            Enable Windows NTP Server - Enabled w/ default settings.
    The policy is linked, enforced, and applied to the Domain Controllers OU. The GP modeling results show the policy is in effect on the DC (single-DC domain) and the DC is recognized as the PDC emulator. I have run gpupdate /force and logged off/on. The issue is that the DC still shows the time source as internal. I understand I can force this at the command line using w32tm to set the peer, but I would like to understand what is missing in the GP. The default NTP Client GP setting includes time.windows.com,0x9 as the source, but it does not appear to be taking effect.
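
    For comparison, a hedged sketch of the command-line check and the manual equivalent that the poster alludes to (standard w32tm usage on 2008 R2, not part of the GPO); once the policy applies, it should leave the PDC emulator in the same end state:
        w32tm /query /source
        w32tm /query /configuration
        w32tm /config /manualpeerlist:"time.windows.com,0x9" /syncfromflags:manual /reliable:yes /update
        w32tm /resync /rediscover
    If /query /source still reports the internal clock after gpupdate /force, comparing /query /configuration against the GPO values usually shows which setting is not landing.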

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case):
        starttime - start of the time window in which you can contact the provider.
        endtime - end of the time window in which you can contact the provider.
        region_id - region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc.
    starttime and endtime are time without time zone columns but, "indirectly", their values are in the time zone of the region in which the provider resides. For example:
        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00
    Often I need to get the list of providers whose time range contains the current server time (taking the time zone conversion into account). The problem is that the time zones aren't "constant", i.e. they may change during summer time. However, this change is specific to each region and is not always carried out at the same moment: EGT <-> EGST, ART <-> ARST, etc. The questions are:
        1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that could serve here?
        2. Is there a better approach to solving this problem?
    Thanks in advance.
    UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I have these records:
        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)
    If I execute the query in January, with this information:
        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time = 02:00:00
    I should get the following results: idproviders = 1
    If I execute the query in June, with this information:
        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone has)
        Server time = 02:00:00
    I should get the following results: idproviders = 1 and 2
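
    A minimal sketch of one common approach, assuming PostgreSQL and a hypothetical regions table that stores an IANA zone name (for example 'America/Chicago') instead of a fixed offset; the database's own time zone data then tracks DST changes, so no external web service is needed:
        -- Hypothetical schema: regions(id, tz_name), with tz_name like 'America/Chicago'.
        -- Convert the server's "now" into each provider's local wall-clock time and
        -- test it against the locally stored window.
        SELECT p.idproviders
        FROM   providers p
        JOIN   regions   r ON r.id = p.region_id
        WHERE  (now() AT TIME ZONE r.tz_name)::time BETWEEN p.starttime AND p.endtime;
    Windows that cross midnight would need an extra OR branch, and the zoneinfo data is refreshed along with normal database updates rather than by a web service.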

    Read the article

  • Restore using Time Machine from a MacBook to a MacBook Pro (first Intel)

    - by Anders Nørgaard
    Hello. My girlfriend has a MacBook on 10.6.3, the first plastic version. The screen broke and it's at the service store now. In the meantime, I have tried to restore from her TM backup to my old MacBook Pro on 10.6.3 (the first Intel version). Everything seems to work out fine, but when it finishes, it says to reboot, and then nothing happens. When I hold down the power button to power it down and start it again, it comes up with the grey roll-down screen saying "you need to restart your machine again" in different languages. I have tried the restore procedure 2 more times, and every time it ends up like this... Does anyone have a suggestion what to do? Thanks - Anders.

    Read the article

  • iPhoto - add time to photo time

    - by Nippysaurus
    I have taken some photos on a recent vacation, but forgot to set the "away" time on the camera, so the time is slightly off. That's not much of an issue on its own since it's only an hour off my home time, but my partner also took photos and she was smart enough to adjust the time, so when the two sets are merged the mismatch is annoying. Is there an easy way (preferably in iPhoto) to adjust the time that the photos were taken?

    Read the article

  • Cannot access client pc remotely due to time/date issue xp win2k3 environment -- REMOTE solution please

    - by Detritus Maximus
    When I run psexec to the user desktop (XP Pro) I get "There is a time and/or date difference between the client and the server." I also get "access denied" when I run the at \\clientname time /interactive "net time \\server /set /y" command. I cannot access the machine from my Win2k3 server's AD Users and Computers utilities. Is going to the machine the only way to remedy this? To clarify: going to the machine and running the net time command works, but I want a remote solution, please.
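
    A hedged sketch of one remote command worth trying (standard w32tm usage; the computer name is a placeholder), with the caveat that it can be rejected with the same errors, because Kerberos refuses authentication once the clocks differ by more than the allowed skew (5 minutes by default), which is why fixing the clock remotely tends to be a chicken-and-egg problem:
        w32tm /resync /computer:CLIENTNAME /rediscover
    If that is also denied, the remaining remote options generally come down to authenticating with something other than Kerberos (for example a local account on the client) or correcting the clock at the next reboot via a startup script.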

    Read the article

  • How to make Jenkins CI use local time instead of UTC on Debian Squeeze

    - by drgn
    I have a Jenkins CI installation on a Debian Squeeze box.
        Current default time zone: 'America/Toronto'
        Local time is now:      Mon Jul 9 16:00:57 EDT 2012.
        Universal Time is now:  Mon Jul 9 20:00:57 UTC 2012.
    In the /etc/default/rcS file I have UTC=no. Unfortunately this is not working; in the Jenkins system information I still see user.timezone = Etc/UTC. I searched for a few hours but unfortunately could not find a fix. Any help would be greatly appreciated. Thanks for your time.
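
    A hedged sketch of the usual fix for the Debian-packaged Jenkins, assuming the init script reads /etc/default/jenkins (the path and the existing JAVA_ARGS line can differ between package versions): pass the zone to the JVM directly instead of relying on /etc/default/rcS, since user.timezone is what Jenkins reports.
        # /etc/default/jenkins (assumed location of the package defaults file)
        JAVA_ARGS="-Djava.awt.headless=true -Duser.timezone=America/Toronto"

        # then restart the service
        sudo /etc/init.d/jenkins restart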

    Read the article

  • How to schedule a Time Machine backup

    - by AntonAL
    Hi, the standard backup interval for Time Machine is 1 hour. In the plist file, I changed it to one day. But I need one more tweak: launching the Time Machine backup at a specified time of day. I would prefer the backup to run when my work day is over. How can I customize Time Machine to do so? Thanks!
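
    A hedged sketch of one way to get a fixed start time with a custom launchd job instead of the stock interval; the label, file name and the use of tmutil are assumptions (tmutil ships with 10.7 and later), and the job below would kick off a backup every day at 18:30:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>local.timemachine.nightly</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/tmutil</string>
                <string>startbackup</string>
            </array>
            <key>StartCalendarInterval</key>
            <dict>
                <key>Hour</key>
                <integer>18</integer>
                <key>Minute</key>
                <integer>30</integer>
            </dict>
        </dict>
        </plist>
    Saved as /Library/LaunchDaemons/local.timemachine.nightly.plist and loaded with sudo launchctl load, this runs alongside (or instead of) the interval in the plist already edited above.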

    Read the article

  • Time server for Windows 2003 domain

    - by Dave
    Am I correct that the NET TIME command should return the time from the PDC for the domain? If so, the issue we are contending with is that the NET TIME command returns \\randomfileserver. How do I reset the time server for the domain to be the PDC?
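
    For reference, a hedged sketch of standard commands for checking and re-pointing the Windows Time hierarchy in a 2003-era domain (names are placeholders; this governs w32time itself, which is separate from how NET TIME discovers a server):
        rem Run on a member (or on the misbehaving server):
        rem clear any hard-coded SNTP entry, then rejoin the domain time hierarchy.
        net time /setsntp:
        w32tm /config /syncfromflags:domhier /update
        net stop w32time
        net start w32time
        rem Show which source each machine in the domain is using
        w32tm /monitor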

    Read the article

  • iPhone first responders

    - by William Jockusch
    I am confused about the iPhone responder chain. Specifically, in the iPhone event handling guide http://developer.apple.com/iPhone/library/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/EventHandling/EventHandling.html, we have the following:
        The first responder is the responder object in an application (usually a UIView object) that is designated to be the first recipient of events other than touch events.
    But UIView is a subclass of UIResponder. And the UIResponder class reference says this:
        - (BOOL)canBecomeFirstResponder
        Return Value: YES if the receiver can become the first responder, NO otherwise.
        Discussion: Returns NO by default. If a responder object returns YES from this method, it becomes the first responder and can receive touch events and action messages. Subclasses must override this method to be able to become first responder.
    I am confused by the apparent contradiction. Can anyone clear it up for me? For what it's worth, I did set up a simple view-based application, and called canBecomeFirstResponder and isFirstResponder on its view. Both returned NO.
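
    A minimal sketch (a hypothetical UIView subclass, not code from the post) of how the two statements fit together: a view can be the first responder, but only if it opts in by overriding canBecomeFirstResponder, which is why a stock view answers NO.
        // Hypothetical class name; the override is the part the UIResponder docs require.
        @interface KeyCatchingView : UIView
        @end

        @implementation KeyCatchingView
        - (BOOL)canBecomeFirstResponder {
            return YES;   // opt in to receiving non-touch events and action messages
        }
        @end

        // Elsewhere, e.g. in a view controller once the view is on screen:
        // [keyCatchingView becomeFirstResponder];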

    Read the article

  • Depth First Search Basics

    - by cam
    I'm trying to improve my current algorithm for the 8 Queens problem, and this is the first time I'm really dealing with algorithm design. I want to implement a depth-first search combined with a permutation of the different Y values described here: http://en.wikipedia.org/wiki/Eight_queens_puzzle#The_eight_queens_puzzle_as_an_exercise_in_algorithm_design I've implemented the permutation part to solve the problem, but I'm having a little trouble wrapping my mind around the depth-first search. It is described as a way of traversing a tree/graph, but does it generate the tree/graph? It seems this method would only be more efficient if the depth-first search generates the tree structure to be traversed, by implementing some logic to generate only certain parts of the tree. So essentially, I would have to create an algorithm that generates a pruned tree of lexicographic permutations. I know how to implement the pruning logic, but I'm just not sure how to tie it in with the permutation generator, since I've been using next_permutation. Are there any resources that could help me with the basics of depth-first search or with creating lexicographic permutations in tree form?
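
    A minimal sketch (not the poster's code, and independent of next_permutation) of the point in question: depth-first search never builds the tree explicitly - each recursive call is a node, its loop enumerates the children, and pruning simply means refusing to recurse into placements that already conflict.
        #include <cstdlib>
        #include <iostream>
        #include <vector>

        // true if a queen may be placed at (row, col) given the rows already filled
        static bool safe(const std::vector<int>& cols, int row, int col) {
            for (int r = 0; r < row; ++r) {
                if (cols[r] == col || std::abs(cols[r] - col) == row - r)
                    return false;                      // same column or same diagonal
            }
            return true;
        }

        static int solve(std::vector<int>& cols, int row, int n) {
            if (row == n) return 1;                    // reached a leaf: a full placement
            int count = 0;
            for (int col = 0; col < n; ++col) {        // children of this node
                if (safe(cols, row, col)) {            // prune conflicting branches
                    cols[row] = col;
                    count += solve(cols, row + 1, n);  // depth-first descent
                }
            }
            return count;
        }

        int main() {
            int n = 8;
            std::vector<int> cols(n, 0);
            std::cout << solve(cols, 0, n) << " solutions\n";  // 92 for n = 8
            return 0;
        }
    The Wikipedia formulation with permutations is the same search where each row's candidate columns are restricted to the columns not used so far.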

    Read the article

  • Two separate tm structs mirroring each other

    - by BSchlinker
    Here is my current situation:
        - I have two tm structs, both set to the current time
        - I make a change to the hour in one of the structs
        - The change is occurring in the other struct magically....
    How do I prevent this from occurring? I need to be able to compare and know the number of seconds between two different times -- the current time and a time in the future. I've been using difftime and mktime to determine this. I recognize that I don't technically need two tm structs (the other struct could just be a time_t loaded with raw time) but I'm still interested in understanding why this occurs.
        void Tracker::monitor(char* buffer){
            // time handling
            time_t systemtime, scheduletime, currenttime;
            struct tm * dispatchtime;
            struct tm * uiuctime;
            double remainingtime;

            // let's get two structs operating with current time
            dispatchtime = dispatchtime_tm();
            uiuctime = uiuctime_tm();

            // set the scheduled parameters
            dispatchtime->tm_hour = 5;
            dispatchtime->tm_min = 05;
            dispatchtime->tm_sec = 14;
            uiuctime->tm_hour = 0;

            // both of these will now print the same time! (0:05:14)
            // what's linking them??

            // print the scheduled time
            printf ("Current Time : %2d:%02d:%02d\n", uiuctime->tm_hour, uiuctime->tm_min, uiuctime->tm_sec);
            printf ("Scheduled Time : %2d:%02d:%02d\n", dispatchtime->tm_hour, dispatchtime->tm_min, dispatchtime->tm_sec);
        }

        struct tm* Tracker::uiuctime_tm(){
            time_t uiucTime;
            struct tm *ts_uiuc;

            // give currentTime the current time
            time(&uiucTime);

            // change the time zone to UIUC
            putenv("TZ=CST6CDT");
            tzset();

            // get the localtime for the tz selected
            ts_uiuc = localtime(&uiucTime);

            // set back the current timezone
            unsetenv("TZ");
            tzset();

            // set back our results
            return ts_uiuc;
        }

        struct tm* Tracker::dispatchtime_tm(){
            time_t currentTime;
            struct tm *ts_dispatch;

            // give currentTime the current time
            time(&currentTime);

            // get the localtime for the tz selected
            ts_dispatch = localtime(&currentTime);

            // set back our results
            return ts_dispatch;
        }
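
    A minimal sketch (separate from the code above) of the likely cause and the usual fix: localtime() returns a pointer to a single static struct tm shared by every call, so dispatchtime and uiuctime end up pointing at the same object; copying the result (or using localtime_r on POSIX systems) gives two independent values.
        // Hedged illustration - not the original Tracker code.
        #include <ctime>
        #include <cstdio>

        int main() {
            std::time_t now = std::time(nullptr);

            struct tm dispatchtime = *std::localtime(&now);  // copy the static struct
            struct tm uiuctime     = *std::localtime(&now);  // an independent copy

            dispatchtime.tm_hour = 5;                        // no longer leaks into uiuctime
            std::printf("uiuc hour: %d, dispatch hour: %d\n",
                        uiuctime.tm_hour, dispatchtime.tm_hour);
            return 0;
        }
    The same reasoning explains why the TZ trick in uiuctime_tm() appears to affect dispatchtime_tm(): both functions return the very same pointer.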

    Read the article

  • VMware ESXi - varying CPU time (CPU reservation)

    - by Tomo
    Hello! I'm running FreeBSD 7.2 under VMware ESXi 3.5. The host has 2 physical CPUs and the BSD box is currently the only running VM. Only one virtual CPU is assigned to the VM. When measuring the CPU time of a specific program, I get very different results from run to run. Processor usage is reported differently by VMware, based on the system load. Is it possible to assign a constant share of a physical CPU to a specific VM? I would like the CPU time to be more or less constant. I tried setting a CPU reservation when configuring the VM in the VMware Infrastructure Client, but the CPU time still varies a lot. Thanks in advance!

    Read the article

  • Different boot time for the same computer by different commands

    - by andrej
    As far as I am aware, there are 3 ways to check the computer boot time in Windows, and they should give the same time, just in different formats. Why do I get different times, and where do these commands get their time from?
        wmic os get lastBootUpTime | find "+120"
        20140823002317.596695+120
        systeminfo | find /i "boot time"
        System Boot Time: 23.8.2014, 0:23:17
        net statistics server | find /i "statistics since"
        Statistics since 22.8.2014 18:21:30
    The first two are the same (0:23), but the third is different (18:21), and also accurate. Why? At boot, all three show the same, but at some point they change. I am using Windows 7 Ultimate, 64-bit.

    Read the article

  • Time Capsule Refuses to Connect After Interrupted File Transfer

    - by Steve Stifler
    I have a first generation Apple Time Capsule set up as a NAS on my network. I use the built-in hard drive for Time Machine backups, with a 1.5 TB external HDD attached to the Time Capsule over USB. Whenever I cancel a file transfer from the external drive to my Macbook (or quit a video I had been streaming from it, as was the most recent incident), I can no longer connect to the Time Capsule or the attached drive, and I have to unplug it to get it working again. How can I fix this? Is it a network error? Could the Time Capsule be at fault or could it be my computer?

    Read the article

  • I get "An error occurred while Windows was synchronizing with [name of time server]." when trying t

    - by ChrisF
    Prompted by the answers to this question I decided to give the Windows built-in time synchronisation another go. However, no matter what time server I use I get this error: "An error occurred while Windows was synchronizing with [name of time server]." The help suggests the following as reasons for failure:
        1. You are not connected to the Internet. Establish an Internet connection before you attempt to synchronize your clock.
        2. Your personal or network firewall prevents clock synchronization. Most corporate and organizational firewalls will block time synchronization, as do some personal firewalls. Home users should read the firewall documentation for information about unblocking network time protocol (NTP). You should be able to synchronize your clock if you switch to Windows Firewall.
        3. The Internet time server is too busy or is temporarily unavailable. If this is the case, try synchronizing your clock later, or update it manually by double-clicking the clock on the taskbar. You can also try using a different time server.
        4. The time shown on your computer is too different from the current time on the Internet time server. Internet time servers might not synchronize your clock if your computer's time is off by more than 15 hours. To synchronize the time properly, ensure that the date and time settings are set close to your current time in the Date and Time Properties in Control Panel.
    Now the first reason is clearly wrong - I am connected to the internet. I can see the 2nd being the most likely cause. I have Sygate Personal Firewall running, but it normally asks if something is trying to connect for the first time. Does anyone know how I can unblock the NTP protocol - or at least check whether it is blocked? I don't think it's #3 or #4, as I've tried a number of different servers, including the one currently used by Atomic Clock Sync. Though if someone knows the address of a UK time server I can double-check this.
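
    A hedged sketch of how to re-point and test the client from the command line (standard w32tm usage; uk.pool.ntp.org is just one example of a UK-local pool, and outbound UDP port 123 has to be allowed by the firewall for any of this to work):
        w32tm /config /manualpeerlist:"uk.pool.ntp.org" /syncfromflags:manual /update
        net stop w32time
        net start w32time
        w32tm /resync
        rem One-off probe that shows whether NTP replies are getting through at all
        w32tm /stripchart /computer:uk.pool.ntp.org /samples:3 /dataonly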

    Read the article

  • Backup server (OSX) like time machine to backup remote ubuntu 12.04 server [on hold]

    - by Mad
    I've searched my ass off for a good solution to back up my Ubuntu server that's in a datacenter. Locally we have an OS X server with some external drives attached to it; it handles Time Machine for the local workstations. What I would like to do is fetch the files (or mount the root of my Ubuntu server) and make a Time Machine backup from it. I just have one problem: if my OS X server crashes, I can't put the system back, because the backup would contain not only the OS X server but also the Ubuntu server from the data center. I've used Back In Time on Ubuntu to do the exact same thing, but that was Ubuntu (local) backing up Ubuntu (datacenter). So does anybody have a solution? Here are my requirements:
        - Set time intervals for backups; it needs to be backed up nightly.
        - Set time intervals for keeping backups: hourly, weekly, monthly, etc.
        - Able to back up all computers and servers from an offsite location to the local OS X server (10.9).
        - Manageable from that one location: log in with ssh to do rsync or rsnapshot.
        - Has a GUI (OS X).
        - Acts like Time Machine: back up only the files that have changed.
        - Restore to a point back in time.
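
    A hedged sketch of how the nightly, pull-from-offsite part is often covered with rsnapshot on the OS X side (hostnames and paths are made up, and rsnapshot itself has no GUI, so that requirement would still need something else); note that rsnapshot's config file requires tabs between fields:
        # rsnapshot.conf (fields are tab-separated)
        snapshot_root   /Volumes/BackupDrive/rsnapshot/
        retain          daily   7
        retain          weekly  4
        retain          monthly 6
        # pull the remote Ubuntu root over ssh/rsync; only changed files are transferred,
        # and unchanged ones are hard-linked, which gives the Time-Machine-like history
        backup          root@ubuntu-server.example.com:/       datacenter/

        # crontab entry for the nightly run
        # 30 2 * * *    /usr/local/bin/rsnapshot daily
    Pseudo-filesystems such as /proc and /sys would normally be excluded via rsync arguments in the same config.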

    Read the article

  • Can't log in after restoring from Time Machine

    - by Jay Conrod
    My friend uses a MacBook Pro with Snow Leopard 10.6.2. She uses both FileVault and Time Machine to preserve her data. Recently, she suffered a hard disk failure. After restoring from Time Machine using the Snow Leopard install disk, she gets the following error when logging in: "You are unable to log in to the FileVault user account at this time. Logging into the account failed because an error occurred." When examining the file system through Terminal, I noticed her home directory is not present: there is no /Users/username directory, or the FileVault .sparsebundle file that's supposed to be there. When using Time Machine.app on /Users, it appears as if her home directory was never there. Additionally, I did a search on the backup disk with the following command:
        sudo find /Volumes/backup -name '*.sparsebundle'
    No results. She told me that after working with some large data files, Time Machine would come on, and it would sound like it was transferring a lot of data to the hard disk. Time Machine must have been doing something, right? How can we recover her files? Are they still there?

    Read the article

  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of the access time to analyse a build process, but it is not working the way I want: the access time is updated the first time I read the file, then it stays the same for a long while, or until the next reboot. For instance:
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time is updated
        # waiting a few minutes...
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time has not been updated :(
    I suppose that the file is buffered by Linux in free memory, and only this copy is accessed on subsequent reads, for speed reasons. A solution would be to discard the buffers in memory. After searching some forums, I found:
        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches
    But it is not working; it seems that it only syncs up the write buffers, not the read ones. Maybe it is due to some custom kernel configuration on my distro (Fedora 9)? Or am I missing something here? Is there a way to achieve this access time refresh? Note also that I do not want to simulate writes on my entire file tree: because I am using a makefile-based build system, this would cause the entire project to be built again.
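
    A hedged sketch of one thing to check before blaming the cache: the behaviour described (atime updated once, then frozen for a long while) matches the relatime mount option rather than the page cache, so inspecting the mount flags and remounting may be the actual fix. strictatime needs kernel 2.6.30 or later; norelatime is the older spelling on kernels that default to relatime:
        # see whether the filesystem is mounted with relatime
        mount | grep ' / '

        # force classic always-update atime semantics (pick whichever your kernel accepts)
        sudo mount -o remount,strictatime /
        sudo mount -o remount,norelatime /
    If the mount really does show plain atime, then dropping caches as root (echo 3 > /proc/sys/vm/drop_caches after a sync) does discard clean read cache as well, and the test above is worth repeating afterwards.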

    Read the article

  • Can I take up another part-time job when working with a typical IT company in India? [closed]

    - by learnerforever
    Hi, I know that this kind of question might depend on policies that vary from company to company, but how does it work at a typical IT company in India? Can I take up another part-time job while working full time at a typical private IT company in India? Is there any Indian employment law preventing it (for whatever reason)? This part-time job could be a job on weekends or some online part-time freelance programming job, which I would manage to do on weekends or on weekdays after office hours. Thanks,

    Read the article

  • How to change inode change time of a file?

    - by Emerald214
    I tried to use touch -d "2011-09-15 16:50" test.txt but it just modifies the last access time and last modified time:
        Access: 2011-09-15 16:50:00.000000000 +0700
        Modify: 2011-09-15 16:50:00.000000000 +0700
        Change: 2011-11-15 16:56:55.620124149 +0700
    How do I change the last change time? I want to do this because my crontab uses filectime($file) to get the last changed time, so I need to create a file that looks two months old to test something.
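
    A hedged sketch of the usual workaround: ctime cannot be set through touch or utimes because the kernel always stamps it with the current clock, so the common trick is to wind the system clock back briefly, touch the file, and then restore the real time (disruptive on a shared machine, so best kept to a test box):
        date                                # note the real time first
        sudo date -s "2011-09-15 16:50:00"  # wind the clock back
        touch test.txt                      # ctime is stamped with the (fake) current time
        sudo ntpdate pool.ntp.org           # restore the real time (or use date -s again)
        stat test.txt                       # verify the Change: field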

    Read the article

  • Best and easiest algorithm to search for a vertex on a Graph?

    - by Nazgulled
    Hi, after implementing most of the common and needed functions for my Graph implementation, I realized that a couple of functions (remove vertex, search vertex and get vertex) don't have the "best" implementation. I'm using adjacency lists with linked lists for my Graph implementation, and I was searching one vertex after the other until I found the one I wanted. Like I said, I realized I was not using the "best" implementation. I can have 10000 vertices and need to search for the last one, but that vertex could have a link to the first one, which would speed things up considerably. But that's just a hypothetical case; it may or may not happen. So, what algorithm do you recommend for the search lookup? Our teachers talked about breadth-first and depth-first mostly (and Dijkstra's algorithm, but that's a completely different subject). Between those two, which one do you recommend? It would be perfect if I could implement both, but I don't have time for that; I need to pick one and implement it, as the first phase deadline is approaching... My guess is to go with depth-first: it seems easier to implement and, looking at the way they work, it seems the best bet. But that really depends on the input. What do you guys suggest?
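
    A minimal sketch (types and names are made up, not the poster's linked-list code) of an iterative depth-first search over an adjacency-list graph, which is typically the easier of the two to write; breadth-first is the same loop with a queue instead of a stack:
        #include <stack>
        #include <unordered_set>
        #include <vector>

        struct Graph {
            std::vector<std::vector<int>> adj;   // adj[v] = neighbours of vertex v
        };

        bool dfs_contains(const Graph& g, int start, int target) {
            std::unordered_set<int> visited;
            std::stack<int> todo;
            todo.push(start);
            while (!todo.empty()) {
                int v = todo.top();
                todo.pop();
                if (v == target) return true;
                if (!visited.insert(v).second) continue;   // already expanded
                for (int w : g.adj[v]) todo.push(w);        // explore neighbours next
            }
            return false;
        }
    Either traversal only finds vertices reachable from the start; if the lookup is really "find the vertex with this id", a separate map from id to vertex avoids traversing at all, independent of the BFS/DFS choice.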

    Read the article

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system.
    One of these requirements is to go through a cycle of reboots after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that aids here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of it - proceeding with logic that, while functional, at first glance does not appear to take advantage of all the niceties bundled into Solaris 11 at no extra cost.
    When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered and the customized banking software installed is the notion of a First Boot script. I had a working example of this at one of the Oracle OpenWorld sessions a few years ago - we've since improved our documentation and have introduced sections where this is described in better detail. If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting. There is a set of technologies involved that are jointly engineered in order to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll be faced with a need to wrap it into an SMF service and then package it into an IPS package. The IPS package would then need to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).
    With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how the good old "at" command can make the requirement of forcing an initial reboot easy to meet. The syntax and references to commands here are based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1).
    Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed and I need my own little script that would make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". And as you do, you'll be confronted with a command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle.
    svcbundle is an aid to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you.
    (See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into 2 manifests - an SMF service manifest and an IPS package manifest. ...and if you're new to XML, well then -- buckle up. We have some examples of small first boot scripts shown here, as templates to build upon. The necessary structure of the script, particularly in leveraging SMF interfaces, is key. I won't go into that here, as it is covered nicely in the doc link above.
    Let's say your script ends up looking like this (btw: if things appear to be cut off in your browser, just select them, copy and paste into your editor and they'll be grabbed - the source gets captured even though the browser may not render it "correctly" - ah, computers).
        #!/bin/sh

        # Load SMF shell support definitions
        . /lib/svc/share/smf_include.sh

        # If nothing to do, exit with temporary disable
        completed=`svcprop -p config/completed site/first-boot-script-svc:default`
        [ "${completed}" = "true" ] && \
            smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

        # Obtain the active BE name from beadm: The active BE on reboot has an R in
        # the third column of 'beadm list' output. Its name is in column one.
        bename=`beadm list -Hd|nawk -F ';' '$3 ~ /R/ {print $1}'`
        beadm create ${bename}.orig
        echo "Original boot environment saved as ${bename}.orig"

        # ---- Place your one-time configuration tasks here ----

        # For example, if you have to pull some files from your own pre-existing system:
        /usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
        /usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
        # Clearly the above 2 lines represent some logic that you'd have to customize to fit your needs.
        #
        # Perhaps additional things you may want to do here might be of use, like
        # (gasp!) configuring ssh server for root login and X11 forwarding (for testing), and the like...
        #
        # Oh and by the way, after we're done executing all of our proprietary scripts we need to reboot
        # the system in accordance with our operational software requirements to ensure all layered bits
        # get initialized properly and pull in their own modules and components in the right sequence,
        # subsequently.

        # We need to set a "time bomb" reboot, that would take place upon completion of this script.
        # We already know that *this* script depends on the multi-user-server SMF milestone, so it should be
        # safe for us to schedule a reboot for 5 minutes from now. The "at" job gets scheduled in the queue
        # while our little script continues thru the rest of the logic.
        /usr/bin/at now + 5 minutes <<REBOOT
        /usr/bin/sync
        /usr/sbin/reboot
        REBOOT

        # ---- End of your customizations ----

        # Record that this script's work is done
        svccfg -s site/first-boot-script-svc:default setprop config/completed = true
        svcadm refresh site/first-boot-script-svc:default
        smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"
    ...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described here, in the middle of Chapter 13 of "Creating an IPS package for the script and service". Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:
        $ cd some_working_directory_for_this_project
        $ mkdir -p proto/lib/svc/manifest/site
        $ mkdir -p proto/opt/site
        $ cp first-boot-script.sh proto/opt/site
    Then you would create the service manifest file like so:
        $ svcbundle -s service-name=site/first-boot-script-svc \
            -s start-method=/opt/site/first-boot-script.sh \
            -s instance-property=config:completed:boolean:false -o \
            first-boot-script-svc-manifest.xml
    ...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the appropriate service dependencies. That is to say, you want to properly specify what milestone should be reached before your service runs. There's a <dependency> section that looks like this before you modify it:
        <dependency restart_on="none" type="service" name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>
    So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:
        <dependency restart_on="none" type="service" name="multi_user_server_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user-server"/>
        </dependency>
    Save the file and validate it:
        $ svccfg validate first-boot-script-svc-manifest.xml
    Assuming there are no errors returned, copy the file over into the directory hierarchy:
        $ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site
    Now that we've created the service manifest (.xml), create the package manifest (.p5m) file named first-boot-script.p5m. Populate it as follows:
        set name=pkg.fmri value=first-boot-script@1.0,5.11-0
        set name=pkg.summary value="AI first-boot script"
        set name=pkg.description value="Script that runs at first boot after AI installation"
        set name=info.classification value=\
            "org.opensolaris.category.2008:System/Administration and Configuration"
        file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
            path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
            group=sys mode=0444
        dir path=opt/site owner=root group=sys mode=0755
        file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
            owner=root group=sys mode=0555
    Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry. You have 2 choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point, you have 2 choices as well - you can either create a repo that will be accessible by your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here: we'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd.
    We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:
        # zfs create rpool/export/MyIPSrepo
        # pkgrepo create /export/MyIPSrepo
        # svccfg -s pkg/server add MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg pkg application
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg general framework
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
        # svcadm refresh application/pkg/server:MyIPSrepo
        # svcadm enable application/pkg/server:MyIPSrepo
    Now that the IPS repo is created, we need to publish our package into it:
        # pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m
    If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest), and re-publish the IPS package.
    Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above):
        <publisher name="firstboot">
            <origin name="http://192.168.1.222:10081"/>
        </publisher>
    Further down, in the <software_data action="install"> section, add:
        <name>pkg:/first-boot-script</name>
    Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed and whatever logic you've specified in it should be executed too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the tailing lines of your SMF script - this is normal and should be left as is, as it helps provide an auditing trail for you.
    Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places and then checks for the presence of certain lock files in order to avoid doing a reboot unnecessarily. You may also want to, alternatively, remove the SMF service entirely - if you're unsure of the potential for someone to try and accidentally enable that service - even though its role in life is to only run once, upon the system's first boot.
    That is how I spent a good chunk of my pre-Halloween time this week, hope yours was just as SPARCkly^H^H^H^H fun!

    Read the article
