Search Results

Search found 5414 results on 217 pages for 'otn j master'.

Page 138/217

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!)

    I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition.

    One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, to strut their stuff, to soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were.

    All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility.

    Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters.

    Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for a single vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what inevitably happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily. These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will.

    The other issue that arises is due to the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% of your time per team member is a good rule of thumb for the cost of maintaining communication. While this is just a rule of thumb, it seems to imply that any team over about six people is going to become less agile simply because of the communications burden.

    Teams of ten or twelve seem like they fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session.

    But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project.

    I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle

    Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and until recently was an interesting concept, but due to practical constraints wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace?

    A rendering of the actual Mechanical Turk (from Wikipedia)

    Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1700s, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation.

    The field of human computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense then that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers.

    Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this group is largely male (two-thirds) with a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications, as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about self-selection bias (characteristics which cause Turkers to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd.
    Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham, Flickr)

    While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses, haphazardly answering?

    In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those who considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor.

    These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raise some interesting questions. How complex a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human computer interaction and enterprise computing.

    References:

    Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: Screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10), Atlanta, Georgia, USA, April 10-15, 2010. ACM, New York, NY, 2399-2402.
    Link: http://doi.acm.org/10.1145/1753326.1753688

    Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: Shifting demographics in Mechanical Turk. In CHI '10 Extended Abstracts on Human Factors in Computing Systems (CHI EA '10), Atlanta, Georgia, USA, April 10-15, 2010. ACM, New York, NY, 2863-2872.
    Link: http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • Maintenance plans love story

    - by Maria Zakourdaev
    There are about 200 QA and DEV SQL Servers out there. There is a maintenance plan on many of them that performs a backup of all databases and removes the backup history files.

    First of all, I must admit that I’m no big fan of maintenance plans in particular or SSIS packages in general. In this specific case, if I ever need to change anything in the way the backup is performed, such as enabling the compression feature, I have to open each plan one by one. This is quite a pain. Therefore, I have decided to replace the maintenance plans with a stored procedure that performs exactly the same thing. Having such a procedure will allow me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.

    The first challenge was to remove the unneeded maintenance plans. Of course, I didn’t want to do it server by server. I found the procedure msdb.dbo.sp_maintplan_delete_plan, but it only takes the maintenance plan id; it has no other parameters, like plan name, which would have been much more useful. Now I needed to find the table that holds all maintenance plans on the server. You would think that it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, it contains just one row. After a while I found another table: msdb.dbo.sysmaintplan_subplans. It contains the plan id that I was looking for, in the plan_id column, as well as the id of the agent job that executes the plan’s package.

    That was all I needed, and the rest turned out to be quite easy. Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans:

        DECLARE @PlanID uniqueidentifier

        SELECT @PlanID = plan_id
        FROM msdb.dbo.sysmaintplan_subplans
        WHERE name LIKE 'BackupPlan%'

        EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id = @PlanID

    The second step was to create a procedure that performs all of the old maintenance plan tasks: create a folder for each database, back up all databases on the server and clean up the old files. The script is below. Enjoy.

        ALTER PROCEDURE BackupAllDatabases
            @PrintMode BIT = 1
        AS
        BEGIN

            DECLARE @BackupLocation VARCHAR(500)
            DECLARE @PurgeAfterDays INT
            DECLARE @PurgingDate VARCHAR(30)
            DECLARE @SQLCmd  VARCHAR(MAX)
            DECLARE @FileName VARCHAR(100)

            SET @PurgeAfterDays = -14
            SET @BackupLocation = '\\central_storage_servername\BACKUPS\' + @@SERVERNAME

            SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD(dd, @PurgeAfterDays, GETDATE()), 126)

            SET @FileName = '?_full_'
                          + REPLACE(CONVERT(VARCHAR(19), GETDATE(), 126), ':', '-')
                          + '.bak';

            SET @SQLCmd = '
                IF ''?'' <> ''tempdb'' BEGIN
                    EXECUTE master.dbo.xp_create_subdir N''' + @BackupLocation + '\?\'' ;

                    BACKUP DATABASE ? TO DISK = N''' + @BackupLocation + '\?\' + @FileName + '''
                    WITH NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 ;

                    EXECUTE master.dbo.xp_delete_file 0, N''' + @BackupLocation + '\?\'', N''bak'', N''' + @PurgingDate + ''', 1;
                END'

            IF @PrintMode = 1 BEGIN
                PRINT @SQLCmd
            END

            EXEC sp_MSforeachdb @SQLCmd

        END
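    Once the procedure is in place, exercising it from the same multi-server query window is a one-liner (a sketch; this assumes the procedure was created in a database that exists on every instance, such as master or a small DBA utility database):

        -- print the generated commands first, then run for real
        EXEC dbo.BackupAllDatabases @PrintMode = 1;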

    Read the article

  • Could not start ZK at requested port of 2181, while export HBASE_MANAGES_ZK=false

    - by utrecht
    Problem

    The first aim was to run HBase standalone. Navigating to ip:60010/master-status is successful once HBase has been started. The second aim is to run a distinct ZooKeeper quorum. ZooKeeper has been downloaded and has been started:

        netstat -nato | grep 2181
        tcp 0 0 :::2181 :::* LISTEN off (0.00/0/0)

    The conf/hbase-env.sh was changed as follows:

        # Tell HBase whether it should manage it's own instance of Zookeeper or not.
        export HBASE_MANAGES_ZK=false

    so that HBase does not start its own ZooKeeper when it comes up. However, the following error occurs once HBase has been started:

        Could not start ZK at requested port of 2181. ZK was started at port: 2182.
        Aborting as clients (e.g. shell) will not be able to find this ZK quorum.

    Question

    How to disable the startup of ZooKeeper by HBase and run ZooKeeper separately?
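    A hedged note for anyone landing here with the same symptom: HBase only honors HBASE_MANAGES_ZK=false when it is not in pure standalone mode; with hbase.cluster.distributed left at its default of false, it still spins up its own ZK, which then collides with the external one on 2181. A minimal sketch of the hbase-site.xml entries for an external quorum (the hostname is an assumption):

        <configuration>
          <!-- pseudo-distributed: stops HBase from managing its own ZooKeeper -->
          <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
          </property>
          <!-- point HBase at the already-running ensemble -->
          <property>
            <name>hbase.zookeeper.quorum</name>
            <value>localhost</value>
          </property>
          <property>
            <name>hbase.zookeeper.property.clientPort</name>
            <value>2181</value>
          </property>
        </configuration>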

    Read the article

  • OpenNebula: [HostPoolInfo] User couldn't be authenticated, aborting call

    - by ulf
    I installed OpenNebula 3.2.1 following the guide found under http://opennebula.org/documentation:rel3.2:ignc on a Debian 6.0.4 machine. Everything seemed fine until I tried to execute the command onevm list. Then I always get this:

        oneadmin@opennebula-master:~$ onevm list
        [VirtualMachinePoolInfo] User couldn't be authenticated, aborting call.

    The file one_auth exists. I even gave the oneadmin user a password, although it doesn't seem to be required according to the guide. I copied the password hash from /etc/shadow to the one_auth file. Still no success. Any ideas are appreciated.
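    One hedged observation that may explain the message: as far as I know, oned reads $ONE_AUTH (defaulting to ~/.one/one_auth) and expects the plain-text credentials of the calling user in username:password form, so a shadow-style hash will not authenticate. A sketch, with the password obviously a placeholder:

        # ~/.one/one_auth -- a single plain-text line, readable only by oneadmin
        oneadmin:mysecretpassword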

    Read the article

  • setting up bind to work with nsupdate (SERVFAIL)

    - by funny_ha_ha
    I'm trying to update my DNS server dynamically using nsupdate.

    Prerequisite

    I'm using Debian 6 on my DNS server and Debian 4 on my client. I created a public/private key pair using:

        dnssec-keygen -C -a HMAC-MD5 -b 512 -n USER sub.example.com.

    I then edited my named.conf.local to contain my public key and the new zone I wish to update. It now looks like this (note: I also tried allow-update { any; }; without success):

        zone "example.com" {
            type master;
            file "/etc/bind/primary/example.com";
            notify yes;
            allow-update { none; };
            allow-query { any; };
        };

        zone "sub.example.com" {
            type master;
            file "/etc/bind/primary/sub.example.com";
            notify yes;
            allow-update { key "sub.example.com."; };
            allow-query { any; };
        };

        key sub.example.com. {
            algorithm HMAC-MD5;
            secret "xxxx xxxx";
        };

    Next, I copied the private key file (key.private) to another server I want to update the zone from. I also created a text file (update) on this server which contains the update information (note: I tried toying around with this stuff too, no success):

        server example.com
        zone sub.example.com
        update add sub.example.com. 86400 A 10.10.10.1
        show
        send

    Now I'm trying to update the zone using:

        nsupdate -k key.private -v update

    The Problem

    Said command gives me the following output:

        Outgoing update query:
        ;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
        ;; flags: ; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
        ;; ZONE SECTION:
        ;sub.example.com. IN SOA
        ;; UPDATE SECTION:
        sub.example.com. 86400 IN A 10.10.10.1

        update failed: SERVFAIL

    named debug level 3 gives me the following information when I issue the nsupdate command on the remote server (note: I obfuscated the client IP):

        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: new TCP connection
        06-Aug-2012 14:51:33.977 client X.X.X.X#33182: replace
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: createclients
        06-Aug-2012 14:51:33.978 clientmgr @0x2ada3c7ee760: recycle
        06-Aug-2012 14:51:33.978 client @0x2ada475f1120: accept
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: TCP request
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: request has valid signature
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: recursion not available
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: update
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: send
        06-Aug-2012 14:51:33.978 client X.X.X.X#33182: sendto
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: senddone
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.979 client X.X.X.X#33182: read
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: next
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: request failed: end of file
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: endrequest
        06-Aug-2012 14:51:33.986 client X.X.X.X#33182: closetcp

    But it doesn't do anything. The zone isn't updated, nor does my nsupdate change anything. I'm not sure if the file /etc/bind/primary/sub.example.com should exist prior to the first update or not. I tried it without the file, with an empty file and with a pre-configured zone file. Without success.

    The sparse information I found on the net pointed me towards file and folder permissions regarding the bind working directory, so I changed the permissions of both /etc/bind and /var/cache/bind (which is the home dir of my "bind" user). I'm not 100% sure if the permissions are correct, but it looks good to me:

        ls -lah /var/cache/bind/
        total 224K
        drwxrwxr-x  2 bind bind 4.0K Aug  6 03:13 .
        drwxr-xr-x 12 root root 4.0K Jul 21 11:27 ..
        -rw-r--r--  1 bind bind 211K Aug  6 03:21 named.run

        ls -lah /etc/bind/
        total 72K
        drwxr-sr-x  3 bind bind 4.0K Aug  6 14:41 .
        drwxr-xr-x 87 root root 4.0K Jul 30 01:24 ..
        -rw-------  1 bind bind  125 Aug  6 02:54 key.public
        -rw-------  1 bind bind  156 Aug  6 02:54 key.private
        -rw-r--r--  1 bind bind 2.5K Aug  6 03:07 bind.keys
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.0
        -rw-r--r--  1 bind bind  271 Aug  6 03:07 db.127
        -rw-r--r--  1 bind bind  237 Aug  6 03:07 db.255
        -rw-r--r--  1 bind bind  353 Aug  6 03:07 db.empty
        -rw-r--r--  1 bind bind  270 Aug  6 03:07 db.local
        -rw-r--r--  1 bind bind 3.0K Aug  6 03:07 db.root
        -rw-r--r--  1 bind bind  493 Aug  6 03:32 named.conf
        -rw-r--r--  1 bind bind  490 Aug  6 03:07 named.conf.default-zones
        -rw-r--r--  1 bind bind 1.2K Aug  6 14:18 named.conf.local
        -rw-r--r--  1 bind bind  666 Jul 29 22:51 named.conf.options
        drwxr-sr-x  2 bind bind 4.0K Aug  6 03:57 primary/
        -rw-r-----  1 root bind   77 Mar 19 02:57 rndc.key
        -rw-r--r--  1 bind bind 1.3K Aug  6 03:07 zones.rfc1918

        ls -lah /etc/bind/primary/
        total 20K
        drwxr-sr-x 2 bind bind 4.0K Aug  6 03:57 .
        drwxr-sr-x 3 bind bind 4.0K Aug  6 14:41 ..
        -rw-r--r-- 1 bind bind  356 Jul 30 00:45 example.com
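    A hedged pointer for this symptom: BIND only accepts dynamic updates for a zone it has actually loaded, so /etc/bind/primary/sub.example.com must exist and parse cleanly before the first nsupdate; named then appends changes to a sub.example.com.jnl journal in the same directory, which therefore has to be writable by the bind user. A minimal starting zone file (the NS name, serial and timers are assumptions):

        $TTL 86400
        @   IN  SOA ns1.example.com. hostmaster.example.com. (
                2012080601 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; minimum
            IN  NS  ns1.example.com.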

    Read the article

  • Multi-level navigation controller on left-hand side of UISplitView with a small twist.

    - by user141146
    Hi. I'm trying to make something similar to (but not exactly like) the email app found on the iPad. Specifically, I'd like to create a tab-based app, but each tab would present the user with a different UISplitView. Each UISplitView contains a Master and a Detail view (obviously). In each UISplitView I would like the Master to be a multi-level navigation controller where new UIViewControllers are pushed onto (or popped off of) the stack. This type of navigation within the UISplitView is where the application is similar to the native email app.

    To the best of my knowledge, the only place that has described a decent "splitviewcontroller inside of a uitabbarcontroller" is here: http://stackoverflow.com/questions/2475139/uisplitviewcontroller-in-a-tabbar-uitabbarcontroller and I've tried to follow the accepted answer. The accepted solution seems to work for me (i.e., I get a tab-bar controller that allows me to switch between different UISplitViews). The problem is that I don't know how to make the left-hand side of the UISplitView a multi-level navigation controller.

    Here is the code I used within my app delegate to create the initial "split view 'inside' of a tab bar controller" (it's pretty much as suggested in the aforementioned link):

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {

            NSMutableArray *tabArray = [NSMutableArray array];

            NSMutableArray *array = [NSMutableArray array];
            UISplitViewController *splitViewController = [[UISplitViewController alloc] init];
            MainViewController *viewCont = [[MainViewController alloc] initWithNibName:@"MainViewController" bundle:nil];
            [array addObject:viewCont];
            [viewCont release];
            viewCont = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil];
            [array addObject:viewCont];
            [viewCont release];
            [splitViewController setViewControllers:array];
            [tabArray addObject:splitViewController];
            [splitViewController release];

            array = [NSMutableArray array];
            splitViewController = [[UISplitViewController alloc] init];
            viewCont = [[Master2 alloc] initWithNibName:@"Master2" bundle:nil];
            [array addObject:viewCont];
            [viewCont release];
            viewCont = [[Slave2 alloc] initWithNibName:@"Slave2" bundle:nil];
            [array addObject:viewCont];
            [viewCont release];
            [splitViewController setViewControllers:array];
            [tabArray addObject:splitViewController];
            [splitViewController release];

            // Add the tab bar controller's current view as a subview of the window
            [tabBarController setViewControllers:tabArray];
            [window addSubview:tabBarController.view];
            [window makeKeyAndVisible];
            return YES;
        }

    The class MainViewController is a UIViewController that contains the following method:

        - (IBAction)push_me:(id)sender {
            M2 *m2 = [[[M2 alloc] initWithNibName:@"M2" bundle:nil] autorelease];
            [self.navigationController pushViewController:m2 animated:YES];
        }

    This method is attached (via Interface Builder) to a UIButton found within MainViewController.xib. Obviously, the method above (push_me) is supposed to create a second UIViewController (called m2) and push m2 into view on the left side of the split view when the UIButton is pressed. And yet it does nothing when the button is pressed (even though I can tell that the method is called). Thoughts on where I'm going wrong? TIA!
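    An educated guess rather than a verified fix, since the symptom matches exactly: in the code above the master side of each split view is a plain view controller, so inside MainViewController the property self.navigationController is nil and pushViewController:animated: silently does nothing. A sketch of wrapping the master in a UINavigationController before handing it to the UISplitViewController:

        MainViewController *masterVC = [[MainViewController alloc] initWithNibName:@"MainViewController" bundle:nil];
        UINavigationController *masterNav = [[UINavigationController alloc] initWithRootViewController:masterVC];
        [masterVC release];

        DetailViewController *detailVC = [[DetailViewController alloc] initWithNibName:@"DetailViewController" bundle:nil];

        UISplitViewController *splitViewController = [[UISplitViewController alloc] init];
        // master (with its navigation stack) on the left, detail on the right
        [splitViewController setViewControllers:[NSArray arrayWithObjects:masterNav, detailVC, nil]];
        [masterNav release];
        [detailVC release];
        [tabArray addObject:splitViewController];
        [splitViewController release];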

    Read the article

  • /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID

    - by user1495181
    Using Ubuntu with net-snmp. SNMP works, but in syslog I see a lot of errors about snmpd.conf.

    snmpd.conf:

        rwcommunity community 10.0.0.1
        rwcommunity community 10.0.0.2
        agentAddress udp:10.0.0.1:161
        view systemonly included .1.3.6.1.2.1.1
        view systemonly included .1.3.6.1.2.1.25.1
        # Default access to basic system info
        rocommunity public default -V systemonly
        rouser authOnlyUser
        sysLocation Sitting on the Dock of the Bay
        sysContact Me <[email protected]>
        sysServices 72
        proc mountd
        proc ntalkd 4
        proc sendmail 10 1
        disk / 10000
        disk /var 5%
        includeAllDisks 10%
        load 12 10 5
        trapsink localhost public
        iquerySecName internalUser
        rouser internalUser
        defaultMonitors yes
        linkUpDownNotifications yes
        master agentx

    errors:

        Sep 12 16:35:00 test snmpd[5485]: payload OID: prNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: prErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: prErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: prErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: memErrorName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memErrorName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: memSwapErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: memSwapErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: memSwapError
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: extNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: extOutput
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: extOutput
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: extResult
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: dskPath
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskPath
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: dskErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: dskErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: dskErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: laNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laNames
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: laErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: laErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: laErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: fileName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileName
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: fileErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: fileErrorMsg
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: fileErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: payload OID: snmperrErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: Unknown payload OID: snmperrErrMessage
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: Unknown payload OID
        Sep 12 16:35:00 test snmpd[5485]: trigger OID: snmperrErrorFlag
        Sep 12 16:35:00 test snmpd[5485]: /usr/local/share/snmp/snmpd.conf: line 5: Error: unknown monitor OID
        Sep 12 16:35:00 test snmpd[5485]: Turning on AgentX master support.
        Sep 12 16:35:00 test snmpd[5485]: net-snmp: 33 error(s) in config file(s)
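    A hunch, not a confirmed diagnosis: every directive that fails here (proc, disk, load, defaultMonitors) is turned into DisMan monitor entries built from names such as prNames and laErrorFlag, which are defined in UCD-SNMP-MIB and DISMAN-EVENT-MIB. "unknown payload OID" is the usual symptom of snmpd being unable to find or load those MIB files, which is common with a source-built net-snmp under /usr/local. A sketch of things to check (paths are assumptions):

        # are the MIB files present for this build?
        ls /usr/local/share/snmp/mibs | grep -Ei 'ucd-snmp|disman'

        # force the agent to load everything, then watch for the same errors
        export MIBDIRS=/usr/local/share/snmp/mibs
        export MIBS=ALL
        /usr/local/sbin/snmpd -f -Lo -C -c /usr/local/share/snmp/snmpd.conf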

    Read the article

  • C# XmlSchemaSet - Resolving included schemas to enumerate attribute groups

    - by satixx
    Hi, I currently have a compiled XmlSchemaSet from which I get possible elements/attributes for each specific "parent" element defined in the schema. I have a single "master" xsd schema which includes another schema and uses attributeGroup references for some "common" elements. Here is a sample:

    (MasterSchema.xsd)

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="MasterSchema"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="CommonNamespace.xsd"
                   xmlns="CommonNamespace.xsd"
                   xmlns:mstns="CommonNamespace.xsd"
                   elementFormDefault="qualified">
          <xs:include schemaLocation="SourceAttributeGroups.xsd"/>
          <xs:element name="Binding" minOccurs="0" maxOccurs="2">
            <xs:complexType>
              <xs:attribute name="Name" type="xs:string"/>
              <xs:attributeGroup ref="BindingSourceAttributeGroup"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    (SourceAttributeGroups.xsd)

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="SourceAttributeGroups"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="CommonNamespace.xsd"
                   xmlns="CommonNamespace.xsd"
                   xmlns:mstns="CommonNamespace.xsd"
                   elementFormDefault="qualified">
          <xs:attributeGroup id="BindingSourceAttributeGroup" name="BindingSourceAttributeGroup">
            <xs:attribute name="Source">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value="Data"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:attribute>
            <!-- When Source is None -->
            <xs:attribute name="Value" type="xs:string"/>
            <!-- Label -->
            <xs:attribute name="Label" type="xs:string"/>
          </xs:attributeGroup>
        </xs:schema>

    I would like to create an XmlSchemaSet in C# which would resolve, compile and "merge" every reference of the MasterSchema so it would finally look like this:

    (MasterSchema.xsd)

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="MasterSchema"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="CommonNamespace.xsd"
                   xmlns="CommonNamespace.xsd"
                   xmlns:mstns="CommonNamespace.xsd"
                   elementFormDefault="qualified">
          <xs:include schemaLocation="SourceAttributeGroups.xsd"/>
          <xs:element name="Binding" minOccurs="0" maxOccurs="2">
            <xs:complexType>
              <xs:attribute name="Name" type="xs:string"/>
              <xs:attribute name="Source">
                <xs:simpleType>
                  <xs:restriction base="xs:string">
                    <xs:enumeration value="Data"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:attribute>
              <!-- When Source is None -->
              <xs:attribute name="Value" type="xs:string"/>
              <!-- Label -->
              <xs:attribute name="Label" type="xs:string"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    This way, I could traverse each XmlSchemaParticle of the compiled schema to get every single attribute for a specific element, even when its attributes are defined in an external schema. At the moment, when I get the possible attributes for a "Binding" element, I only get the "Name" attribute since it is originally defined in the "master" schema. What would be the possible solutions to this problem? Thanks! Satixx
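    One hedged suggestion: the physical merge may be unnecessary. After XmlSchemaSet.Compile(), the post-compilation property XmlSchemaComplexType.AttributeUses already flattens attributeGroup references, including those coming from included schemas, so "Binding" should report Name, Source, Value and Label from it. A rough sketch (file names match the sample above):

        using System;
        using System.Xml.Schema;

        class SchemaAttributeDump
        {
            static void Main()
            {
                XmlSchemaSet set = new XmlSchemaSet();
                set.Add("CommonNamespace.xsd", "MasterSchema.xsd"); // the xs:include is resolved automatically
                set.Compile();

                foreach (XmlSchemaElement element in set.GlobalElements.Values)
                {
                    XmlSchemaComplexType complexType = element.ElementSchemaType as XmlSchemaComplexType;
                    if (complexType == null) continue;

                    Console.WriteLine(element.QualifiedName);
                    // AttributeUses is post-compilation: attributeGroup refs are already expanded here
                    foreach (XmlSchemaAttribute attribute in complexType.AttributeUses.Values)
                        Console.WriteLine("  @" + attribute.QualifiedName.Name);
                }
            }
        }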

    Read the article

  • Postfix Submission port issue

    - by RevSpot
    I have set up Postfix+Mailman on my Debian server and I have an issue with the Postfix submission port. My ISP blocks SMTP on port 25 to prevent spam, so I must use the submission port (587). I have uncommented the following line in master.cf (/etc/postfix/), but nothing happens:

        submission inet n       -       -       -       -       smtpd

    This is my mail log file when I try to invite a user to a Mailman list:

        Nov  6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active)
        Nov  6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active)
        Nov  6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active)
        Nov  6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
        Nov  6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
        Nov  6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out
        Nov  6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
        Nov  6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
        Nov  6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out
        Nov  6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
        Nov  6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
        Nov  6 00:36:37 myhostname postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out
        Nov  6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
        Nov  6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
        Nov  6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out
        Nov  6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
        Nov  6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
        Nov  6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
        Nov  6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out
        Nov  6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)
        Nov  6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out)

    main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        myhostname = mail.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mail.mydomain.com, localhost.mydomain.com, localhost
        relayhost =
        relay_domains = $mydestination, mail.mydomain.com
        relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman
        transport_maps = hash:/etc/postfix/transport
        mailman_destination_recipient_limit = 1
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        local_recipient_maps =

    master.cf:

        smtp       inet  n - - - - smtpd
        submission inet  n - - - - smtpd
        #  -o smtpd_tls_security_level=encrypt
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #smtps     inet  n - - - - smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628       inet  n - - - - qmqpd
        pickup     fifo  n - - 60 1 pickup
        cleanup    unix  n - - - 0 cleanup
        qmgr       fifo  n - n 300 1 qmgr
        #qmgr      fifo  n - - 300 1 oqmgr
        tlsmgr     unix  - - - 1000? 1 tlsmgr
        rewrite    unix  - - - - - trivial-rewrite
        bounce     unix  - - - - 0 bounce
        defer      unix  - - - - 0 bounce
        trace      unix  - - - - 0 bounce
        verify     unix  - - - - 1 verify
        flush      unix  n - - 1000? 0 flush
        proxymap   unix  - - n - - proxymap
        proxywrite unix  - - n - 1 proxymap
        smtp       unix  - - - - - smtp
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  - - - - - smtp
            -o smtp_fallback_relay=
        #   -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n - - - - showq
        error      unix  - - - - - error
        retry      unix  - - - - - error
        discard    unix  - - - - - discard
        local      unix  - n n - - local
        virtual    unix  - n n - - virtual
        lmtp       unix  - - - - - lmtp
        anvil      unix  - - - - 1 anvil
        scache     unix  - - - - 1 scache
        #
        # ====================================================================
        #
        # maildrop. See the Postfix MAILDROP_README file for details.
        # Also specify in main.cf: maildrop_destination_recipient_limit=1
        #
        maildrop   unix  - n n - - pipe
            flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
        #
        # ====================================================================
        #
        # See the Postfix UUCP_README file for configuration details.
        #
        uucp       unix  - n n - - pipe
            flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        #
        # Other external delivery methods.
        #
        ifmail     unix  - n n - - pipe
            flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp      unix  - n n - - pipe
            flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
        scalemail-backend unix - n n - 2 pipe
            flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
        mailman    unix  - n n - - pipe
            flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
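    Worth saying, in a hedged way: the submission entry in master.cf only opens port 587 for mail coming into this Postfix; outbound deliveries to gmail-smtp-in.l.google.com still use port 25, which is exactly what the ISP blocks, hence every timeout above. The usual cure is to relay outbound mail through a smarthost that accepts port 587. A sketch for main.cf, with the relay host and credentials as placeholders for whatever the ISP or another provider offers:

        # main.cf: push all outbound mail to a smarthost on the submission port
        relayhost = [smtp.isp.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may

        # /etc/postfix/sasl_passwd (then: postmap /etc/postfix/sasl_passwd && postfix reload)
        # [smtp.isp.example]:587    username:password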

    Read the article

  • MySQL replication Slave_IO_Running: No

    - by Christy
    Hi all, I have two servers that I am trying to get replication of one database between. I found a setup guide on SourceForge that I followed, and I have tried various other settings since then, but no matter what I do, when I start the slave, the 'Slave_IO_Running' setting is always No. I have no idea why or what to look at; any suggestions are appreciated. The slave setup was:

        CHANGE MASTER TO
          MASTER_HOST='myserver.mydomain.net',
          MASTER_USER='slave_user',
          MASTER_PASSWORD='mypassword',
          MASTER_LOG_FILE='mysql-bin.000011',
          MASTER_LOG_POS=1368363

    (last data from today, trying to do the setup again. I deleted and recreated the database on the slave from a new dump and tried to redo the setup.) I have slave_user set up for %, localhost, and the specific IP of the slave computer, but nothing seems to be working... Thanks in advance for any advice or suggestions.
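    A hedged checklist for this symptom: when Slave_IO_Running stays No, the IO thread has almost always recorded why. The host and user below are from the post; everything else is generic:

        -- on the slave: Last_IO_Errno / Last_IO_Error name the real cause
        START SLAVE;
        SHOW SLAVE STATUS\G

        -- from the slave's shell: prove basic connectivity and credentials
        --   mysql -h myserver.mydomain.net -u slave_user -p

        -- on the master: confirm the file/position handed to CHANGE MASTER
        SHOW MASTER STATUS;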

    Read the article

  • Can't connect to MySQL server on '127.0.0.1' + Postfix

    - by Andrew Dakin
    I just installed Postfix and configured it to use MySQL. It wasn't sending any emails out after I did that, so I checked /var/log/mail.log and it came back with this:

        postfix/trivial-rewrite[5283]: fatal: proxy:mysql:/etc/postfix/mysql-domains.cf(0,lock|fold_fix): table lookup problem
        postfix/cleanup[5258]: warning: AFCDC30437: virtual_alias_maps map lookup problem for [email protected]
        postfix/master[4761]: warning: process /usr/lib/postfix/trivial-rewrite pid 5282 exit status 1
        postfix/proxymap[4126]: warning: connect to mysql server 127.0.0.1: Can't connect to MySQL server on '127.0.0.1' (110)

    In mysql-domains.cf I'm using:

        Hosts 127.0.0.1

    I can connect to MySQL with this:

        mysql -u postfixuser -p

    But I can't connect this way:

        mysql -u postfixuser -h 127.0.0.1 -p maildbname

    Also, when I run netstat -l it comes back with:

        tcp 0 0 localhost:mysql *:* LISTEN

    I've tried changing my hosts to:

        Hosts localhost

    But then I just get a socket error:

        postfix/cleanup[4870]: warning: connect to mysql server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

    I also have this set up in the MySQL config file:

        bind-address = 127.0.0.1

    I'm sure I'm missing something obvious, but I am pretty new to all this. Thanks! Andy
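    Two hedged observations rather than an answer: the "localhost" variant fails because Debian runs most Postfix services chrooted under /var/spool/postfix, where /var/run/mysqld/mysqld.sock does not exist; and error 110 against 127.0.0.1 is a plain TCP timeout, which points at a firewall dropping loopback traffic to 3306 rather than at MySQL itself, since netstat shows it listening. Some quick checks:

        # does anything answer on 127.0.0.1:3306?
        nc -vz 127.0.0.1 3306

        # any rules silently dropping loopback traffic to 3306?
        iptables -L -n -v | grep 3306

        # is the socket visible from inside the Postfix chroot?
        ls /var/spool/postfix/var/run/mysqld/ 2>/dev/null || echo "no socket inside chroot"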

    Read the article

  • Cucumber can't find installed gems

    - by artemave
    environment/cucumber.rb:

        ...
        # gem dependencies
        config.gem 'cucumber-rails', :lib => false, :version => '>=0.3.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'database_cleaner', :lib => false, :version => '>=0.5.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'webrat', :lib => false, :version => '>=0.7.0' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'spork', :lib => false, :version => '>=0.7.5' unless File.directory?(File.join(Rails.root, 'vend
        config.gem 'factory_girl', :source => 'http://gemcutter.org'
        config.gem 'selenium-client', :lib => false
        config.gem 'Selenium', :lib => false
        config.gem 'rspec', :lib => 'spec'
        config.gem 'rspec-rails', :lib => 'spec/rails'
        config.gem 'test-unit', :lib => false

    Running cucumber gives a missing gems error:

        artem:~/projects/food4feed (master)$ cucumber
        ...
        no such file to load -- test-unit
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/rails/gem_dependency.rb:208:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `block in load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:307:in `load_gems'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:169:in `process'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
        /home/artem/projects/food4feed/config/environment.rb:9:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/projects/food4feed/features/support/env.rb:12:in `block in <top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/spork-0.8.1/lib/spork.rb:23:in `prefork'
        /home/artem/projects/food4feed/features/support/env.rb:10:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/polyglot-0.3.0/lib/polyglot.rb:65:in `require'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/rb_support/rb_language.rb:124:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:85:in `load_code_file'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:77:in `block in load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `each'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/step_mother.rb:76:in `load_code_files'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:48:in `execute!'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/lib/cucumber/cli/main.rb:20:in `execute'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/gems/cucumber-0.6.3/bin/cucumber:8:in `<top (required)>'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `load'
        /home/artem/.rvm/gems/ruby-1.9.1-p378/bin/cucumber:19:in `<main>'

        Missing these required gems:
        selenium-client
        Selenium
        rspec-rails
        test-unit

        You're running:
        ruby 1.9.1.378 at /home/artem/.rvm/rubies/ruby-1.9.1-p378/bin/ruby
        rubygems 1.3.5 at /home/artem/.rvm/gems/ruby-1.9.1-p378, /home/artem/.rvm/gems/ruby-1.9.1-p378%global

    All gems are obviously there:

        artem:~/projects/food4feed (master)$ gem list | egrep "elenium|rspec|test-unit"
        rspec (1.3.0)
        rspec-rails (1.3.2)
        Selenium (1.1.14)
        selenium-client (1.2.18)
        test-unit (2.0.7)

    The even more confusing part is that it only complains about certain gems. factory_girl and rspec don't cause problems. Any idea what is going on?

    My environment:
    Rails 2.3.5
    cucumber (0.6.3)
    cucumber-rails (0.3.0)

    Read the article

  • MySql Replication with a star topology

    - by Riotopsys
    My company currently operates in 3 separate locations connected by slow vpn links. Each site hosts a dedicated MySql server. I need to aggregate the data from all three of them onto a single server for corporate reporting. The powers that be have stated I cannot use circular replication or federated tables. Is there a third party tool for MySql that can replicate from multiple masters? Basically the diagram would be a daisy with the reporting server slave at center with multiple replication connections coming in from the master sites on the petals.

    Read the article

  • Update RDS db via mysqlbinlog: "you need (at least one of) the SUPER privilege(s)"

    - by timoxley
    We are moving a production site to EC2/RDS. Followed these instructions: http://geehwan.posterous.com/moving-a-production-mysql-database-to-amazon

    I have set up row-based binary logging on the production server and did a:

        mysqldump --single-transaction --master-data=2 -C -q -u root -p backup.sql

    then imported it to the RDS instance. No dramas. Due to the size of the db, and minimal downtime requirements, I've got to bring the EC2 db up to the latest data via the binlogs, and it won't let me:

        mysqlbinlog mysql-bin.000004 --start-position=360812488 | mysql -uroot -p -h

    and it says:

        ERROR 1227 (42000) at line 6: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

    My guess, based on what is on line 6 of the binlog, is that it's the 'write to the BINLOG' statements in the SQL backup, and because RDS doesn't support this, it can't run these statements, or something; I don't really know. Please help.

    Read the article

  • sudo: must be setuid root

    - by Phuong Nguyen
    Recently, due to some messy stuff with the master boot record, I had to re-install my Ubuntu. Before doing that, I backed up all folders (excluding root, bin, sbin, tmp, media, mnt) to an NTFS partition. After installation of Ubuntu, I copied all the folders back using nautilus (run by sudo nautilus). After that, I rebooted my computer. And boom, now I cannot run sudo any more, and my network services cannot run. When I run sudo from a terminal, I get a "must be setuid root" error. In Ubuntu, the root account is disabled by default; I don't know why all these files are no longer under the ownership of my account. How can I recover from this?
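    For the record, the usual repair, offered as a sketch: an NTFS partition cannot store Unix ownership or the setuid bit, so the restored binaries came back without them. From a root shell obtained via recovery mode or a live CD (since sudo itself is broken):

        # restore ownership and setuid on the obvious casualties
        chown root:root /usr/bin/sudo /bin/su /usr/bin/passwd
        chmod 4755 /usr/bin/sudo /bin/su /usr/bin/passwd

        # sudo also refuses to run unless sudoers is root-owned and read-only
        chown root:root /etc/sudoers
        chmod 440 /etc/sudoers

    Other restored system files will have the same problem, so reinstalling the affected packages may be the safer route.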

    Read the article

  • BIND authoritative name server: SERVFAIL?

    - by Luca Tettamanti
    I have a BIND 9.6 instance that acts as a caching NS for the whole building and is also authoritative for an internal zone ("example" below):

        zone "example" {
            type master;
            file "example";
            update-policy { grant dhcp-update subdomain example. A TXT; };
        };

    Due to a rogue switch we lost connectivity with the rest of the world, and the NS started answering SERVFAIL; what surprised me was that the server was also unable to respond to queries for the example domain. What is the reason for this behavior? Shouldn't the NS be able to answer, since it has authoritative data?

    edit: The rest of the configuration is the standard one shipped with Debian: hints for the root servers and the zones for localhost and broadcast.

    Read the article

  • Don't know why this small shell script won't work

    - by tominated
    Hi, I'm trying to make a small script to start up gunicorn for a Python website I'm making. I have modified the script found at https://github.com/benoitc/gunicorn/blob/master/examples/gunicorn_rc slightly. Here's my version:

        #!/bin/sh

        GUNICORN=/usr/local/bin/gunicorn
        ROOT=/srv/mobile-site/app
        PID=/var/run/gunicorn.pid
        APP=mobilecms:app

        if [ -f $PID ]; then
            rm $PID
        fi

        cd $ROOT
        exec $GUNICORN -b 127.0.0.1:8080 -w 8 -k gevent --pidfile=$PID $APP

    When I try to run the script though, it shows this error:

        /etc/init.d/gunicorn: 13: Syntax error: end of file unexpected (expecting "fi")

    Does anyone know what's wrong?
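    The script as shown is syntactically fine, so a hedged guess: dash produces exactly this "end of file unexpected (expecting \"fi\")" complaint when the file carries Windows CRLF line endings, because "fi" followed by a carriage return is no longer recognized as a keyword. A quick check and fix:

        # carriage returns show up as ^M before the end-of-line $
        cat -A /etc/init.d/gunicorn | head -n 15

        # strip them in place
        sed -i 's/\r$//' /etc/init.d/gunicorn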

    Read the article

  • SQL Server 2005 to 2008 upgrade - are MDF files binary compatible?

    - by james
    I have 50 databases on a MS SQL Server 2005 system and want to upgrade to MS SQL Server 2008. This is what I tried on some test machines:

    1. Copied the \DATA directory from the source (MSSQL 2005) to exactly the same path on the target (MSSQL 2008) server.
    2. Edited the startup parameters on the MSSQL 2008 service to point to the path of the MSSQL 2005 master database.
    3. Restarted the MSSQL service.

    It worked and I can access all databases, tables and data. My questions are: I go back to SQL Server 4.2 and it has never been this easy. I know it worked, but should it have worked? Am I missing something, or is there going to be a gotcha next week? These are simple databases, with just tables, views and indexes. No cross-database links, no triggers etc.
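    They are compatible in that direction: 2008 upgrades 2005 data files the first time it opens them, and there is no way back afterwards. Whatever the attach method, a few standard post-upgrade steps are worth running per database (the database name below is a placeholder):

        -- attached 2005 databases keep compatibility level 90 until raised
        SELECT name, compatibility_level FROM sys.databases;
        ALTER DATABASE [MyDb] SET COMPATIBILITY_LEVEL = 100;

        -- verify consistency and refresh statistics after the upgrade
        DBCC CHECKDB ([MyDb]) WITH DATA_PURITY;
        USE [MyDb];
        EXEC sp_updatestats;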

    Read the article

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi Folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites... site1, site2, site3.

    Now, with the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity. Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

    So I've done the easy bit: managed to get SVN working with these maintainers so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions.

    Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob can both read and write the webfront directories for each of the sites. So, an example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with both read and write access for this group, and it works great. A post-commit trigger updates the site, and then sets 777 on each file within the webfront.

    I then changed this to try and factor in group permission updates, instead of straight 777. Each file in /home/site1/public_html initially gets given a chmod of 664, and each folder 775, which looks a little something like this:

        drwxrwxr-x .
        drwxrwxr-x ..
        drwxrwxr-x site1 site1 my_test_folder
        -rw-rw-r-- site1 site1 my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I wish to add a file or folder to /home/site1, say bobs_file, it then looks like this:

        drwxrwxr-x .
        drwxrwxr-x ..
        drwxr-xr-x Bob dev bobs_folder
        drwxrwxr-x site1 site1 my_test_folder
        -rw-rw-r-- Bob dev bobs_file
        -rw-rw-r-- site1 site1 my_test_file

    How can I get it so that, with the set of user permissions Bob has available, the owner and group owner of that file are changed to reflect site1 site1? As Bob belongs to dev I can set the permissions correctly with chmod, but it appears chgrp is throwing back operation errors.

    Now, that was long-winded enough to give an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:

    - Two people able to update multiple user accounts, given the structure of WHM.
    - Maintain master user/group permissions of files and folders for the original user account, not the account of the updater.
    - I like the security of SVN+SSH over just SVN.
    - Don't want to run all this as root.

    I hope this made sense, and thanks in advance :)
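    A sketch of one common approach, assuming the post-commit update runs as the committing user: only root may chown a file to the site user, but any owner may chgrp their own file to a group they belong to, and a setgid directory makes new files inherit the site group automatically.

        # one-time, as root: group-own the webfront and mark directories setgid
        chgrp -R site1 /home/site1/public_html
        find /home/site1/public_html -type d -exec chmod 2775 {} \;

        # in the post-commit hook, after 'svn update', running as Joe or Bob:
        umask 002    # new files arrive group-writable
        find /home/site1/public_html -type f -exec chmod 664 {} \;
        find /home/site1/public_html -type d -exec chmod 2775 {} \;

    With that in place, the user owner of a freshly committed file stays Joe or Bob, but the group is site1, which is usually enough for the site to be served; getting the user owner back to site1 as well would need a root-owned helper (for example a root cron job or a sudo-whitelisted chown script).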

    Read the article

  • Creating an OS X Mountain Lion 10.8 OpenDirectory database with a specific domain in mind

    - by whardier
    Mountain Lion Server has been one of the most infuriating things ever recently. Above and beyond tons of crashing, I've been completely unable to do the following in a way that makes sense to both me and Apple:

    Name your server srv01.myoffice.domain.com
    Create an OpenDirectory database as a Master from scratch as dc=domain,dc=com

    I can rename my server temporarily (taking down vital services in the process due to the automagicliciousness) and create a new profile, but when I switch back to the original domain name, the default search base for server-related authentication magic is now dc=srv01,dc=myoffice,dc=domain,dc=com. I've tried everything I can think of, including using slapconfig backupdb/restoredb and slaving the server off of another and then promoting it. This seems rather silly, and Apple shows no response to many requests to resolve this. Does anybody out there have the magic to make OpenDirectory work as it should: being able to give it any domain you want and then having all vital services operate correctly?

    Read the article

  • Puppet - how can I copy a file to several user folders?

    - by Eliot Rocha
    Well, I was using the info in this question: Puppet - Any way to copy predefined custom configuration files for software on clients from the puppet master (host)?

    But I need something more elaborate, because I have several desktops, each in use by 2 or 3 users, so I want to make a class to copy a shortcut to their desktops. The computers are joined to a domain, so any user can log in on any desktop, and their profile is created on every desktop. I've tried this:

        class applink {
          file { "/home/installer/Escritorio/Workdesktop.desktop":
            owner  => installer,
            group  => root,
            mode   => 770,
            source => "puppet://$server/files/Workdesktop.desktop"
          }
        }

    This is only for one user, called "installer". How can I do this for several users? Can I use $USER to do this? Any thoughts? Thank you!
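    One way that fits Puppet's model, sketched under the assumption that the desktop path is the same for every user: wrap the file resource in a defined type keyed on the username, then declare it once with an array of titles (the user list here is made up):

        define applink::shortcut {
          file { "/home/${name}/Escritorio/Workdesktop.desktop":
            owner  => $name,
            group  => 'root',
            mode   => '0770',
            source => "puppet://$server/files/Workdesktop.desktop",
          }
        }

        class applink {
          # $name takes each value in the array in turn
          applink::shortcut { ['installer', 'user2', 'user3']: }
        }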

    Read the article

  • Help setting up NSD daemon (DNS server)

    - by Catalin
    While searching for a secure DNS server I came across the NSD project. I was really impressed by what seemed to me the best option out there that's open source. One problem though: their 'tutorial' is really not beginner friendly. I have basic DNS knowledge, but what's in there is out of my league.

    I need to have multiple sites on this CentOS server I've recently got my hands on. They also need to receive email. Details: I have a master host and would love to set this up in the way described in the rows that follow:

        masterhost.com -> ns1.masterhost.com
                          mail.masterhost.com
                          www.masterhost.com

        addonhost.com  -> ns1.masterhost.com
                          mail.masterhost.com
                          www.addonhost.com

    And so on. Any help in setting up this DNS server please? All answers and suggestions are welcomed. Thank you in advance.
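    To sketch the shape of it, with every name and address a placeholder: NSD is authoritative-only, so each domain needs a zone: block in nsd.conf plus an ordinary zone file; receiving mail is just a matter of correct MX records pointing at a separate mail server.

        # nsd.conf
        zone:
            name: "masterhost.com"
            zonefile: "masterhost.com.zone"

        zone:
            name: "addonhost.com"
            zonefile: "addonhost.com.zone"

        ; masterhost.com.zone
        $TTL 3600
        @       IN SOA  ns1.masterhost.com. hostmaster.masterhost.com. (
                        2012010101 7200 3600 1209600 3600 )
                IN NS   ns1.masterhost.com.
                IN MX   10 mail.masterhost.com.
                IN A    192.0.2.10
        ns1     IN A    192.0.2.10
        mail    IN A    192.0.2.10
        www     IN A    192.0.2.10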

    Read the article

  • Transferring FSMO roles over vpn

    - by Tom Bowman
    I have a server located at one of our offices which is quite old and is due to be upgraded soon. This server holds the FSMO roles. I have another server in another office; both are DCs in the same domain, both are replicated, and both run Server 2003 Standard. I need to transfer the FSMO roles from the old server to the one I have in the other office before I upgrade. Also, I am looking at bringing in an Exchange 2010 server; however, I can't install/configure that until I transfer the roles, as it needs to be at the same site as the schema master.

    My question really is: as both servers replicate over a VPN, how quickly will the roles transfer, and will there be downtime? I need to make sure that while the transfer is running, both servers will service logons and share files. Or would it be better to do it out of hours?

    Many thanks, and apologies if I've missed anything out. Regards, Tom
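    For what it's worth: a FSMO transfer is a tiny directory write, not a bulk copy, so over a healthy VPN replication link it completes in seconds, and both DCs keep servicing logons and file shares throughout; doing it out of hours is a precaution rather than a necessity. A sketch of the transfer, run on or connected to the new role holder (the server name is a placeholder):

        netdom query fsmo

        ntdsutil
        ntdsutil: roles
        fsmo maintenance: connections
        server connections: connect to server NEWDC01
        server connections: quit
        fsmo maintenance: transfer schema master
        fsmo maintenance: transfer domain naming master
        fsmo maintenance: transfer infrastructure master
        fsmo maintenance: transfer RID master
        fsmo maintenance: transfer PDC
        fsmo maintenance: quit
        ntdsutil: quit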

    Read the article

  • Returning row values based on conditional formatting variables

    - by Mike Bodes
    I'm not entirely sure how to properly explain this, but here we go... I'm trying to create a single budgeting document that allows me to manage purchasing and reconciliation for multiple projects. I would like to create separate sheets per project and have purchased items populate on a master sheet. Using conditional formatting, I've set one of the columns to display an item's status (waiting for approval, approved, ordered, received). I would like the contents of an entire row to populate in a new sheet table once the status is set to "Received." The sheet should update in descending order. I can't attach an image because I don't have a 10 reputation. Any help is greatly appreciated.
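    Hard to be precise without seeing the workbook, but the usual pattern is an array formula on the master sheet that pulls the Nth row whose status column reads "Received". A sketch assuming a project sheet named Project1 with data in A2:D100 and status in column D (all ranges are placeholders; enter with Ctrl+Shift+Enter, then fill down and across):

        =IFERROR(INDEX(Project1!A$2:A$100,
                 SMALL(IF(Project1!$D$2:$D$100="Received",
                          ROW(Project1!$D$2:$D$100)-ROW(Project1!$D$2)+1),
                       ROWS($A$1:A1))), "")

    One caveat worth noting: conditional formatting only changes appearance, so the formula has to key off the underlying cell value that drives the formatting, not the color itself.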

    Read the article
