Search Results

Search found 4584 results on 184 pages for 'michael sh'.


  • Prevent keepalived from falling back to the master after a failure

    - by Chrille
    I'm using keepalived to set up a virtual IP that points to a master server. When a failover happens, the virtual IP should move to the backup, and it should stay there until I manually re-enable (fix) the master. The reason this is important is that I'm running MySQL replication on the servers and writes should only go to the master. When I fail over, I promote the slave to master. The master server:

        global_defs {
            ! this is who emails will go to on alerts
            notification_email {
                [email protected]
                ! add a few more email addresses here if you would like
            }
            notification_email_from [email protected]
            ! I use the local machine to relay mail
            smtp_server 127.0.0.1
            smtp_connect_timeout 30
            ! each load balancer should have a different ID
            ! this will be used in SMTP alerts, so you should make
            ! each router easily identifiable
            lvs_id APP1
        }

        vrrp_instance APP1 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 999
            nopreempt
            virtual_ipaddress {
                217.x.x.129
            }
            smtp_alert
        }

    Backup server:

        global_defs {
            ! this is who emails will go to on alerts
            notification_email {
                [email protected]
                ! add a few more email addresses here if you would like
            }
            notification_email_from [email protected]
            ! I use the local machine to relay mail
            smtp_server 127.0.0.1
            smtp_connect_timeout 30
            ! each load balancer should have a different ID
            ! this will be used in SMTP alerts, so you should make
            ! each router easily identifiable
            lvs_id APP2
        }

        vrrp_instance APP2 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 100
            virtual_ipaddress {
                217.xx.xx.129
            }
            notify_master "/etc/keepalived/notify.sh del app2"
            notify_backup "/etc/keepalived/notify.sh add app2"
            notify_fault "/etc/keepalived/notify.sh add app2"
            smtp_alert
        }
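    For what it's worth, keepalived documents nopreempt as taking effect only on instances whose initial state is BACKUP, and "state EQUAL" is not a state keepalived defines. A hedged sketch of the relevant stanza for both nodes (priorities and VIP below are placeholders, not taken from the question):

        vrrp_instance APP1 {
            interface eth0
            state BACKUP          ! both nodes must start as BACKUP for nopreempt
            nopreempt             ! never take the VIP back automatically
            priority 150          ! give the preferred node the higher priority
            virtual_router_id 61
            virtual_ipaddress {
                192.0.2.129       ! placeholder VIP
            }
        }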

    Read the article

  • Jenkins shell command isn't executing

    - by Dmitro
    In a Jenkins project I added a build step that executes rm /var/www/ru.liveyurist.ru/tmp/*. But when I build the project I get this error:

        Started by user anonymous
        Building in workspace /var/www/ru.myproject.ru
        Updating https://subversion.assembla.com/svn/myproject/trunk
        At revision 1168
        no change for https://subversion.assembla.com/svn/liveexpert/trunk since the previous build
        [ru.myproject.ru] $ /bin/sh -xe /tmp/hudson7189633355149866134.sh
        FATAL: command execution failed
        java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/www/ru.myproject.ru"): java.io.IOException: error=12, Cannot allocate memory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
            at hudson.Proc$LocalProc.<init>(Proc.java:244)
            at hudson.Proc$LocalProc.<init>(Proc.java:216)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
            at hudson.Launcher$ProcStarter.start(Launcher.java:338)
            at hudson.Launcher$ProcStarter.join(Launcher.java:345)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
            at hudson.model.Build$RunnerImpl.build(Build.java:178)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
            at hudson.model.Run.run(Run.java:1410)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:238)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
            at java.lang.ProcessImpl.start(ProcessImpl.java:81)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
            ... 16 more
        Build step 'Execute shell' marked build as failure
        Finished: FAILURE

    I started Jenkins as the root user. Please advise what the reason for this error could be.
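    error=12 is ENOMEM from fork(): the JVM could not spawn /bin/sh because the kernel would not grant the child's initial memory reservation. A common remedy, offered here only as a hedged sketch, is to add swap so the fork can succeed (size and path are placeholders; run as root):

        dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1 GB swap file
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        echo '/swapfile none swap sw 0 0' >> /etc/fstab  # survive reboots

    Reducing the Jenkins JVM heap (-Xmx) is the other usual lever, since fork momentarily has to account for the parent's full address space.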

    Read the article

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration:

        snapshot_root   /.snapshots/
        backup          /home/user    localhost/
        backup_script   /usr/local/backup_mysql.sh    localhost/mysql/

    Using this file:

        NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format
        FILE="" # used in a loop

        ### Server Setup ###
        #* MySQL login user name *#
        MUSER="root"
        #* MySQL login PASSWORD name *#
        MPASS="YOUR-PASSWORD"
        #* MySQL login HOST name *#
        MHOST="127.0.0.1"
        #* MySQL binaries *#
        MYSQL="$(which mysql)"
        MYSQLDUMP="$(which mysqldump)"
        GZIP="$(which gzip)"

        # get all database listing
        DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"

        # start to dump database one by one
        for db in $DBS
        do
            FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz
            # gzip compression for each backup file
            $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
        done

    It dumps the databases under /. I then tried the following: http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ and got:

        rsnapshot hourly
        ----------------------------------------------------------------------------
        rsnapshot encountered an error! The program was invoked with these options:
        /usr/bin/rsnapshot hourly
        ----------------------------------------------------------------------------
        ERROR: backup_script /usr/local/backup_mysql.sh returned 1
        WARNING: Rolling back "localhost/mysql/"

        ls -la /.snapshots/hourly.0/localhost/mysql
        total 8
        drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./
        drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../

    What exactly am I doing wrong?

    EDIT:

        # /usr/local/backup_mysql.sh
        *** Dumping MySQL Database ***
        Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test..
        *** Backup done [ files wrote to /.snapshots/tmp/mysql] ***

        root@ns1 [~]# ls -la /.snapshots/tmp/mysql
        total 8040
        drwxr-xr-x 2 root root   4096 Nov 23 18:41 ./
        drwxr-xr-x 3 root root   4096 Nov 23 18:41 ../
        -rw-r--r-- 1 root root   1409 Nov 23 18:41 cphulkd.18_41_45pm.gz
        -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz
        -rw-r--r-- 1 root root   4583 Nov 23 18:41 horde.18_41_45pm.gz
        -rw-r--r-- 1 root root  71757 Nov 23 18:41 information_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root    692 Nov 23 18:41 leechprotect.18_41_45pm.gz
        -rw-r--r-- 1 root root   2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz
        -rw-r--r-- 1 root root    745 Nov 23 18:41 modsec.18_41_45pm.gz
        -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz
        -rw-r--r-- 1 root root   1831 Nov 23 18:41 performance_schema.18_41_45pm.gz
        -rw-r--r-- 1 root root   3610 Nov 23 18:41 roundcube.18_41_45pm.gz
        -rw-r--r-- 1 root root    436 Nov 23 18:41 test.18_41_47pm.gz

    The MySQL backup itself seems fine.
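    One thing that stands out, offered as a hedged guess: $BAK is never assigned in the first script, so $FILE expands to /mysql-<db>..., which matches the dumps landing under /. rsnapshot runs a backup_script inside a temporary working directory and then moves that directory's contents into the snapshot, so writing to the current directory should put the dumps where rsnapshot expects them. A sketch (credentials as in the original):

        #!/bin/sh
        MUSER="root"; MPASS="YOUR-PASSWORD"; MHOST="127.0.0.1"
        BAK="$(pwd)"                  # rsnapshot invokes us inside its temp dir
        NOW=$(date +"%m-%d-%Y")
        DBS="$(mysql -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
        for db in $DBS; do
            mysqldump --single-transaction -u $MUSER -h $MHOST -p$MPASS "$db" \
                | gzip -9 > "$BAK/mysql-$db.$NOW-$(date +"%T").gz"
        done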

    Read the article

  • Start & shutdown as tomcat as non-root user

    - by user53864
    How do I start, shut down and restart tomcat as a non-root user? I have tomcat 5.x installed on an Ubuntu machine (/usr/share/tomcat). The tomcat directory has full permissions. When I shut down (/usr/share/tomcat/bin/shutdown.sh) or start up (/usr/share/tomcat/bin/startup.sh) tomcat as a normal user, it appears to start and shut down normally, just as if executed as the root user, but I cannot access the webpage even after starting tomcat as a non-root user.

        user1@station2:/usr/share/tomcat/webapps$ ../bin/shutdown.sh
        Using CATALINA_BASE:   /usr/share/tomcat
        Using CATALINA_HOME:   /usr/share/tomcat
        Using CATALINA_TMPDIR: /usr/share/tomcat/temp
        Using JRE_HOME:        /usr
        4 Oct, 2010 6:41:11 PM org.apache.catalina.startup.Catalina stopServer
        SEVERE: Catalina.stop:
        java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
            at java.net.Socket.connect(Socket.java:546)
            at java.net.Socket.connect(Socket.java:495)
            at java.net.Socket.<init>(Socket.java:392)
            at java.net.Socket.<init>(Socket.java:206)
            at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:421)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:337)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:415)
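    Two hedged observations: the Connection refused on shutdown usually just means nothing was listening on the shutdown port, i.e. tomcat was not actually running at that moment; and a non-root user cannot bind ports below 1024, so if the connector is on port 80 a non-root start will fail while port 8080 would work. A sketch of a dedicated-user setup (user name and writable paths are assumptions about this install):

        useradd -r -s /bin/false tomcat
        chown -R tomcat /usr/share/tomcat/logs /usr/share/tomcat/temp \
                        /usr/share/tomcat/work /usr/share/tomcat/webapps
        sudo -u tomcat /usr/share/tomcat/bin/startup.sh
        curl http://localhost:8080/   # non-root tomcat must use a port >= 1024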

    Read the article

  • Startup script for Red5 on Ubuntu 9.04

    - by user49249
    I am creating a startup script for Red5 on Ubuntu. Red5 is installed in /opt/red5. The following is a working script on a CentOS box on which Red5 is running:

    [code]
    ========== Start init script ==========
    #!/bin/sh
    PROG=red5
    RED5_HOME=/opt/red5/dist
    DAEMON=$RED5_HOME/$PROG.sh
    PIDFILE=/var/run/$PROG.pid

    # Source function library
    . /etc/rc.d/init.d/functions

    [ -r /etc/sysconfig/red5 ] && . /etc/sysconfig/red5

    RETVAL=0

    case "$1" in
    start)
        echo -n $"Starting $PROG: "
        cd $RED5_HOME
        $DAEMON >/dev/null 2>/dev/null &
        RETVAL=$?
        if [ $RETVAL -eq 0 ]; then
            echo $! > $PIDFILE
            touch /var/lock/subsys/$PROG
        fi
        [ $RETVAL -eq 0 ] && success $"$PROG startup" || failure $"$PROG startup"
        echo
        ;;
    stop)
        echo -n $"Shutting down $PROG: "
        killproc -p $PIDFILE
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$PROG
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    status)
        status $PROG -p $PIDFILE
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|status}"
        RETVAL=1
    esac
    exit $RETVAL
    [/code]

    What do I need to replace for Ubuntu in the above script? My Red5 is in /opt/red5/, and to start it manually I always run /opt/red5/dist/red5.sh. I did not find rc.d/functions on Ubuntu on my laptop, and /etc/init.d/functions does not exist either; I would like to be able to use those helpers with service, as Red Hat distributions do. I checked /lib/lsb/init-functions.
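    A hedged sketch of an Ubuntu translation, replacing the Red Hat functions library with /lib/lsb/init-functions and start-stop-daemon; the flag choices are my assumptions, not a tested drop-in:

        #!/bin/sh
        . /lib/lsb/init-functions

        PROG=red5
        RED5_HOME=/opt/red5/dist
        DAEMON=$RED5_HOME/$PROG.sh
        PIDFILE=/var/run/$PROG.pid

        case "$1" in
        start)
            log_daemon_msg "Starting $PROG"
            start-stop-daemon --start --background --make-pidfile \
                --pidfile $PIDFILE --chdir $RED5_HOME --exec $DAEMON
            log_end_msg $?
            ;;
        stop)
            log_daemon_msg "Stopping $PROG"
            start-stop-daemon --stop --pidfile $PIDFILE
            log_end_msg $?
            ;;
        restart)
            $0 stop; $0 start
            ;;
        status)
            # status_of_proc may be missing from very old lsb-base versions
            status_of_proc -p $PIDFILE "$DAEMON" "$PROG"
            ;;
        *)
            echo "Usage: $0 {start|stop|restart|status}"
            exit 1
        esac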

    Read the article

  • Route all wlan0 traffic over tun0

    - by Tuinslak
    I'm looking for a way to route all wlan0 traffic (TCP and UDP) over tun0 (OpenVPN), while all other traffic originating from the device itself is not routed through tun0. I'm guessing this could be done with iptables or route, but none of my attempts have worked:

        # route add -net 0.0.0.0 gw 172.27.0.1 dev wlan0
        SIOCADDRT: No such process

    Info: This is because the VPN server is not redundant, and wlan users are not really important. However, all services running on the device are fairly important, and having a VPN virtual machine with no SLA on it is just a bad idea; I'm trying to minimize the odds of something going wrong. So setting the VPN server as the default gateway is not really an option. I also want all wlan0 users to use the VPN server's IP address as their external IP.

    Edit with the script provided:

        root@ft-genesi-xxx ~ # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        172.27.0.17     0.0.0.0         255.255.255.255 UH    0      0        0 tun0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        10.13.37.0      0.0.0.0         255.255.255.0   U     0      0        0 wlan0
        172.27.0.0      172.27.0.17     255.255.192.0   UG    0      0        0 tun0
        0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0

        root@ft-genesi-xxx ~ # ./test.sh
        RTNETLINK answers: No such process

        root@ft-genesi-xxx ~ # cat test.sh
        #!/bin/sh
        IP=/sbin/ip
        # replace with the range of your wlan network, or use fwmark instead
        ${IP} rule add from 10.13.37.0/24 table from-wlan
        ${IP} route add default dev tun0 via 127.72.0.1 table from-wlan
        ${IP} route add 10.13.37.0/24 dev wlan0 table from-wlan
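    Two hedged guesses about the failure: "RTNETLINK answers: No such process" is what ip route add prints when the "via" gateway is not reachable through any route in that table, and 127.72.0.1 looks like a transposition of the tun0 peer 172.27.0.1 shown by route -n. The custom table name also has to be declared before it can be used. A sketch:

        echo '100 from-wlan' >> /etc/iproute2/rt_tables   # declare the table name
        ip rule add from 10.13.37.0/24 table from-wlan
        ip route add 10.13.37.0/24 dev wlan0 table from-wlan
        ip route add default via 172.27.0.1 dev tun0 table from-wlan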

    Read the article

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx and gunicorn processes all run as www-data. My web app requires secure credentials, which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Is it okay to store this file inside the /srv folder where www-data has access to it? Or should it be stored somewhere else and owned by root, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean the attacker could potentially read any file on the system, even without write access? Including my secure keys?
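    On the last point: a compromised www-data process can read exactly what the permission bits let www-data read, not every file on the system. A hedged sketch of a conservative layout (the path is an assumption):

        chown root:www-data /srv/myapp/environment.sh
        chmod 640 /srv/myapp/environment.sh   # www-data may read it (to source it),
                                              # other users may not, only root writes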

    Read the article

  • Slow manipulation of netfilter rules

    - by Ole Martin Eide
    I have a script maintaining GRE tunnels and firewall rules using the "ip" and "iptables" tools. Setting up hundreds of tunnels and addresses per interface runs just fine, taking less than 0.1 seconds per interface; however, when I get around to the firewall rules, everything slows down to about 0.5 seconds per insertion. Why is it running so slowly? What can I do to improve the speed? It seems like I could try ipset instead, but I really feel there is something wrong with the kernel or something. The interesting thing is that the first 10 rules run fast, then it slows down.

        mybox(root) foo# iptables -V
        iptables v1.3.5
        mybox(root) foo# uname -a
        Linux foo 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
        mybox(root) foo# cat test.sh
        #!/bin/sh
        for n in {1..100}
        do
            /sbin/iptables -A OUTPUT -s ${n} -j ACCEPT
            /sbin/iptables -D OUTPUT -s ${n} -j ACCEPT
        done
        mybox(root) foo# time ./test.sh
        real 1m38.839s
        user 0m0.100s
        sys 1m38.724s

    Appreciate any help. Cheers!
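    A hedged explanation: each iptables invocation copies the entire table out of the kernel, modifies it, and copies it back, so the per-call cost grows with the number of rules already present (which would also explain the first few being fast). Batching the changes through iptables-restore performs a single commit for the whole set:

        #!/bin/sh
        {
            echo '*filter'
            for n in $(seq 1 100); do
                echo "-A OUTPUT -s ${n} -j ACCEPT"
            done
            echo 'COMMIT'
        } | /sbin/iptables-restore --noflush   # --noflush keeps existing rules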

    Read the article

  • How to make Quicksilver remember a custom trigger

    - by corroded
    I am trying to make a custom trigger for my shell/AppleScript file to run, so I can launch my dev environment at the push of a button. So basically:

    1. I have a shell script (with some AppleScript included) in ~ named start_server.sh which does 3 things: start up the solr server, start up memcached, start up script/server.
    2. I have a saved Quicksilver command (.qs) that opens start_server.sh (so start_server.sh, then the action is "Run in Terminal").
    3. I created a custom trigger that calls this saved command.

    I did that, then tested it, and it works. I then tried to double-check it, so I quit Quicksilver, and when I checked the triggers it just said "Open (null)" as the action. I set the trigger again, and when I restarted Quicksilver the same thing happened again. I don't know why; my old custom trigger to open Terminal has worked since forever, so why doesn't this one work? Here's a screenshot of the triggers after I restart Quicksilver: http://grab.by/4XWW. If you have any other suggestion on how to make a "push button" start for my server then please do so :) Thanks! As an added note, I have already tried the steps on this thread, but to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989

    Read the article

  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this Puppet Labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target it returns an empty set; when run by hand on the agent it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was Windows and supposedly fixed in Puppet version 3. What am I doing wrong?

    ENC definition:

        ---
        classes:
          facttest:

    Shell script:

        #!/bin/bash
        echo "test_fact1=$(hostname)"

    Permissions:

        master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh
        agent:  -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh

    Agent message:

        Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set

    Version information: Puppet master 3.5.1 (Debian), Facter master 2.0.1; Puppet agent 3.6.1 (OpenSUSE), Facter agent 2.0.1.
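    The permissions listing looks like the whole story, on a hedged reading: when an external fact is not executable, facter does not run it but instead tries to parse it as static key=value/YAML/JSON data, and a bash script parses to nothing — hence "parsed but returned an empty data set". A quick check on the agent:

        chmod 0755 /var/lib/puppet/facts.d/testfact.sh
        facter -p test_fact1    # should now print the agent's hostname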

    Read the article

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP. I think assigning one port to each user may help.

    Idea of implementation

    Use case: Bob, with UID 1001, shall connect on port 31001. Alice, with UID 1002, shall connect on port 31002. John, with UID 1003, shall connect on port 31003. (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP. In /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the sftp server only if the port is the correct one:

        #!/bin/sh
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
            exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH. A few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
            mandatory_port=3`id -u`
            current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
            if [ $mandatory_port -ne $current_port ]
            then
                echo "Please connect on port $mandatory_port."
                exit 1
            fi
        fi

    Benefits

    Now it should be easy to monitor per-user bandwidth usage. An rrdtool-based application could produce charts from this. I know this won't be a perfect calculation of bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although not from Bob. But this is not a problem to me: I do not need an exact computation of per-user bandwidth usage, but an indicator that is approximately correct in standard situations.

    Questions

    Is the idea of assigning one port to each user a good one? Is the proposed setup a reliable one? If I have to open dozens of ports for many users, should I expect a performance drawback? Do you know an rrdtool-based application which could make the chart described above?
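    On the monitoring question, one hedged option that avoids new software: per-port iptables rules with no target act as pure byte counters, which an rrdtool script can poll (the ports follow the 3<UID> scheme above):

        for port in 31001 31002 31003; do
            iptables -A INPUT  -p tcp --dport $port   # no -j means count only
            iptables -A OUTPUT -p tcp --sport $port
        done
        iptables -nvx -L INPUT | grep 'dpt:31001'     # bytes column feeds rrdtool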

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au

    Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept.

    A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”.

    This browsing mechanism is very suited to a number of different methods, because it’s just that – browsing. It’s not searching; it’s more akin to window-shopping than doing an internet search. When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer.

    It’s interesting to see a pattern in this data: the Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look through an ordinary search tool. You get a much better feel for the data when moving around it like this.

    The more observant amongst you will have noticed some difference between the collection that Michael is demonstrating in the picture above and the screenshots I’ve shown. That’s because it’s been extended some more.

    At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more.
    He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view).

    I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract.

    Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr, that lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small-enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps.

    I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.

    Read the article

  • Planning an Event&ndash;SPS NYC

    - by MOSSLover
    I bet some of you were wondering why I am not going to any events for the most part in June and July (aside from volunteering at SPS Chicago). Well, I basically have no life for the next 2 months. We are approaching the 11th hour of SharePoint Saturday New York City. The event is slated to have 350 to 400 attendees. This is the second year in a row I’ve helped run it with Jason Gallicchio. It’s amazing how much effort this event requires versus Kansas City — literally 2x the volume of attendees, speakers, and sponsors, and don’t even get me started about volunteers. So here is a bit of the breakdown…

    We have 30+ volunteers that Tasha Scott from the Hampton Roads area will be managing the day of the event to do things like timing the speakers, handing out food, making sure people who did not sign up don’t walk into the event until we get a count for fire code, registering people, watching the sharpies, watching the prizes, making sure attendees get to the right place, opening and closing the partition in the big room, moving chairs, moving furniture, etc. Then there are Jason, Greg, and me, who will be making sure that the speakers, sponsors, and everything else are going smoothly in the background. We need to make sure that everything is set up properly and in the right spot. We also need to make sure signs are printed, schedules are created, and bags are stuffed with sponsor material. Plus we need to send out emails to sponsors reminding them to send us the right information to post on the site for charity sessions and to send us boxes with material to stuff bags, and we need to make sure that Michael Lotter gets their information for invoicing. We also need to check that Michael has invoiced everyone and track who has paid, because we can’t order anything for the event without money. Once people have paid we need to set up food orders, speaker and volunteer dinners, buy prizes, buy bags, buy speaker/volunteer/organizer shirts, etc.

    During this process we need all the abstracts from speakers, all the bios, pictures, shirt sizes, and other items so we can create schedules and order items. We also need to keep track of who is attending the dinner the night before for volunteers and speakers and make sure we don’t hit capacity. Then there is attendee tracking and making sure that we don’t hit too many attendees. We need to make sure that attendees know where to go and what to do. We have to make all kinds of random supply lists during this process and keep on track with a variety of lists, emails, and conference calls. All in all it’s a lot of work, and I am trying to keep track of it all so that we don’t duplicate anything or miss anything. So basically, if you don’t see me around for another month, don’t worry — I do exist.

    Right now, if you look at what I’m doing, I am traveling every Monday morning and Thursday night back and forth between Washington DC and New Jersey. Every night I am working on organizational stuff for SharePoint Saturday New York City. Every Tuesday night we are running an event conference call. Every weekend I am either with family or my boyfriend and cat, trying hard not to touch the event. So all my time is pretty much work, event, and family/boyfriend. I have zero bandwidth for anything in the community. If you compound that with my severe allergy problems in DC and a doctor’s appointment every month plus a new med once a week, I’m lucky I am still standing and walking.
    So basically once July 30th hits, hopefully Jason Gallicchio, Greg Hurlman, and I will be able to breathe a little easier. In case I forget to do this later: thank you, Greg and Jason. Thank you, Tom Daly. Thank you, Michael Lotter. Thank you, Tasha Scott. Thank you, Kevin Griffin. Thank you to all the volunteers, speakers, sponsors, and attendees who will have made this event a success. Hopefully we have enough time until next year to regroup, recharge, and make the event grow bigger in a different venue. Awesome job, everyone: we sold out within 3 days of registration and we still have several weeks to go. Right now the waitlist is at 49 people, with room to grow. If you attend the event, thank all the people I mentioned above for making it possible. It’s going to be awesome — I know it, but I probably won’t remember half of it due to the blur of things that we will all be taking care of the day of the event. Catch you all at the end of July/early August, when I will attempt to post something useful and clever, possibly while wearing a fez.

    Technorati Tags: SPS NYC, SharePoint Saturday, SharePoint Saturday New York City

    Read the article

  • Utility that helps in file locking - expert tips wanted

    - by maix
    I've written a subclass of file that a) provides methods to conveniently lock it (using fcntl, so it only supports Unix, which is however OK for me atm) and b) when reading or writing, asserts that the file is appropriately locked. Now I'm not an expert at such stuff (I've just read one paper [de] about it) and would appreciate some feedback: Is it secure, are there race conditions, are there other things that could be done better? Here is the code:

        from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB

        class LockedFile(file):
            """
            A wrapper around `file` providing locking. Requires a shared
            lock to read and an exclusive lock to write.

            Main differences:

            * Additional methods: lock_ex, lock_sh, unlock
            * Refuse to read when not locked, refuse to write when not
              locked exclusively.
            * mode cannot be `w` since then the file would be truncated
              before it could be locked.

            You have to lock the file yourself, it won't be done for you
            implicitly. Only you know what lock you need.

            Example usage::

                def get_config():
                    f = LockedFile(CONFIG_FILENAME, 'r')
                    f.lock_sh()
                    config = parse_ini(f.read())
                    f.close()

                def set_config(key, value):
                    f = LockedFile(CONFIG_FILENAME, 'r+')
                    f.lock_ex()
                    config = parse_ini(f.read())
                    config[key] = value
                    f.truncate()
                    f.write(make_ini(config))
                    f.close()
            """

            def __init__(self, name, mode='r', *args, **kwargs):
                if 'w' in mode:
                    raise ValueError('Cannot open file in `w` mode')
                super(LockedFile, self).__init__(name, mode, *args, **kwargs)
                self.locked = None

            def lock_sh(self, **kwargs):
                """
                Acquire a shared lock on the file. If the file is already
                locked exclusively, do nothing.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                if self.locked == 'ex':
                    return  # would implicitly remove the exclusive lock
                return self._lock(LOCK_SH, **kwargs)

            def lock_ex(self, **kwargs):
                """
                Acquire an exclusive lock on the file.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                return self._lock(LOCK_EX, **kwargs)

            def unlock(self):
                """
                Release all locks on the file. Flushes if there was an
                exclusive lock.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                """
                if self.locked == 'ex':
                    self.flush()
                return self._lock(LOCK_UN)

            def _lock(self, mode, nonblocking=False):
                flock(self, mode | bool(nonblocking) * LOCK_NB)
                before = self.locked
                self.locked = {LOCK_SH: 'sh', LOCK_EX: 'ex', LOCK_UN: None}[mode]
                return before

            def _assert_read_lock(self):
                assert self.locked, "File is not locked"

            def _assert_write_lock(self):
                assert self.locked == 'ex', "File is not locked exclusively"

            def read(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).read(*args)

            def readline(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readline(*args)

            def readlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readlines(*args)

            def xreadlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).xreadlines(*args)

            def __iter__(self):
                self._assert_read_lock()
                return super(LockedFile, self).__iter__()

            def next(self):
                self._assert_read_lock()
                return super(LockedFile, self).next()

            def write(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).write(*args)

            def writelines(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).writelines(*args)

            def flush(self):
                self._assert_write_lock()
                return super(LockedFile, self).flush()

            def truncate(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).truncate(*args)

            def close(self):
                self.unlock()
                return super(LockedFile, self).close()

    (The example in the docstring is also my current use case for this.) Thanks for having read until down here, and possibly even answering :)

    Read the article

  • Cucumber bundle not working in TextMate

    - by Gerhard
    I set up Aslak's Cucumber.tmbundle in TextMate and set up PATH in Shell Variables to /Users/gerhard/.rvm/bin:/Users/gerhard/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin, but when I run any of the bundle commands I get this:

        env: sh\r: No such file or directory

    /usr/bin/env sh is in the search path, so what am I missing?
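    A hedged reading of the error: env is being asked to run "sh\r", i.e. "sh" followed by a literal carriage return, which points at something saved with Windows (CRLF) line endings — most likely the shebang line of one of the bundle's command scripts, or the PATH value itself. Stripping the carriage returns is a plausible fix (the file name here is a stand-in for whichever file turns out to be affected):

        tr -d '\r' < suspect_command.sh > suspect_command.unix \
            && mv suspect_command.unix suspect_command.sh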

    Read the article

  • Storing passed arguments in separate variables - shell scripting

    - by Nathan Pk
    In my script "script.sh" , I want to store 1st and 2nd argument to some variable and rest to another separate variable. What command I must use to implement this task? Number of arguments that is passed to a script is random) When I run the command in console ./script.sh abc def ghi jkl mn o p qrs xxx #It can have any number of arguments In this case, I want my script to store "abc" and "def" in one variable. "ghi jkl mn o p qrs xxx" should be stored in another variable.

    Read the article

  • Run script before compilation in Makefile

    - by Werner
    Hi, in a Makefile I have:

        all: $(PROG)

        $(PROG): $(CPP_OBJS)
                $(CPP_LD) $(CPP_LD_FLAGS) $(CPP_OBJS) -o $(PROG)

    I would like to run a script before the compilation, so I tried:

        all: $(PROG)

        $(PROG): $(CPP_OBJS)
                sh script.sh ; $(CPP_LD) $(CPP_LD_FLAGS) $(CPP_OBJS) -o $(PROG)

    but it does not work. What is the right way of running a script before the compilation in this case? Thanks
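    One hedged way to express this in make itself, rather than splicing the script into the link command, is a phony prerequisite (recipe lines must be indented with a real tab; note that under parallel make this orders the script before the link but not before $(CPP_OBJS) are built):

        .PHONY: prebuild

        all: $(PROG)

        prebuild:
                sh script.sh

        $(PROG): prebuild $(CPP_OBJS)
                $(CPP_LD) $(CPP_LD_FLAGS) $(CPP_OBJS) -o $(PROG)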

    Read the article

  • Source operator doesn't work in if construction in bash

    - by Igor
    Hello, I'd like to include a config file in my bash script, with 2 conditions: 1) the config file name is constructed on the fly and stored in a variable, and 2) in case the config file doesn't exist, the script should fail.

    config.cfg:

        CONFIGURED=yes

    test.sh:

        #!/bin/sh
        $CFG=config.cfg
        echo Source command doesn't work here:
        [ -f $CFG ] && ( source $CFG ) || (echo $CFG doesnt exist; exit 127)
        echo $CONFIGURED
        echo ... but works here:
        source $CFG
        echo $CONFIGURED

    What's wrong in the [...] statement?
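    A hedged diagnosis: the parentheses run source in a subshell, so CONFIGURED is set in that child and thrown away before the next line; additionally, $CFG=config.cfg expands the (empty) $CFG rather than assigning it, and the exit 127 inside (...) only exits the subshell. A sketch without those traps:

        #!/bin/sh
        CFG=config.cfg            # no leading $ on an assignment
        if [ -f "$CFG" ]; then
            . "$CFG"              # POSIX spelling of source, in the current shell
        else
            echo "$CFG doesn't exist"
            exit 127
        fi
        echo "$CONFIGURED"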

    Read the article

  • Java application installer for Linux

    - by rusiruboteju
    How can I create a Linux installer for a Java desktop application? For instance, if we want to install NetBeans on Ubuntu, there is a download named "netbeans-6.8-ml-java-linux.sh"; how can I create a "mydesktopapp-linux.sh" like that? I have a properly working .jar file and I want to distribute my Java desktop app. Can anyone help me?
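    One hedged route is the third-party makeself tool, which wraps a directory into a self-extracting .sh archive that runs a script after unpacking — the same idea as the NetBeans installer. All names below are examples:

        mkdir mydesktopapp
        cp mydesktopapp.jar mydesktopapp/
        cat > mydesktopapp/install.sh <<'EOF'
        #!/bin/sh
        # runs after extraction, from inside the unpacked directory
        mkdir -p "$HOME/mydesktopapp"
        cp mydesktopapp.jar "$HOME/mydesktopapp/"
        echo "Installed to $HOME/mydesktopapp"
        EOF
        chmod +x mydesktopapp/install.sh
        makeself.sh mydesktopapp mydesktopapp-linux.sh "My Desktop App" ./install.sh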

    Read the article

  • Command passed as argument to shell script

    - by raj_arni
    Hi, I want to pass a command to a shell script. This command is a grep command. While executing, I am getting the following errors; please help:

        myscript.sh "egrep 'ERROR|FATAL' \*20100428\*.log | grep -v aString"

    myscript.sh is a simple script:

        #!/bin/ksh
        cd log
        $1

    The errors are:

        egrep: can't open |
        egrep: can't open grep
        egrep: can't open -v
        egrep: can't open aString

    The error occurs because egrep sees |, grep, -v and aString as arguments.
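    That is exactly what plain expansion of $1 does: the pipe and the inner quotes are not re-parsed as shell syntax. A hedged sketch using eval, which makes the shell re-parse the string (with the usual caveat that eval executes whatever it is handed, so only pass it trusted input):

        #!/bin/ksh
        cd log
        eval "$1"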

    Read the article

  • Linux Directory Access Problem: Permission Denied "In Root"

    - by RBA
    Hi, when logged in as root on an HP Tru64 UNIX server, I am trying to access a directory and it says "Permission Denied". An .sh file is also not able to execute through the same root access. I have checked the permissions of the directory as well as of the .sh file through ls -ltr; they also look fine: owner root, group system, mode rwxrwxrwx. What could be the possible cause, and how do I correct it? Thanks.
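    One hedged thing to rule out: permission bits are not the whole story when the directory lives on a network filesystem — NFS exports with root_squash map root to nobody, which produces exactly this "root gets Permission denied" symptom. A quick check (the path is a placeholder):

        df -k /path/to/directory   # which filesystem is the directory on?
        mount | grep nfs           # if it is NFS, inspect the server's exports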

    Read the article

  • Unix shell: redirect output to a file but keep it on stdout

    - by Mike
    I'd like to have a shell script redirect the stdout of a child process in the following manner:

    1. Redirect stdout to a file
    2. Display the output of the process in real time

    I know I could do something like:

        #!/bin/sh
        ./child > file
        cat file

    But that would not display stdout in real time. For instance, if the child was:

        #!/bin/sh
        echo 1
        sleep 1
        echo 2

    the user would see "1" and "2" printed at the same time.
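    A minimal sketch using tee(1), which copies its stdin both to a file and to stdout as the data arrives, rather than after the child exits:

        #!/bin/sh
        ./child | tee file    # "1" appears immediately, "2" a second later

    (One caveat: the exit status of the pipeline is tee's, not the child's.)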

    Read the article
