Search Results

Search found 3541 results on 142 pages for 'idiomatic perl'.


  • Regular Expressions Reference Tables Updated

    - by Jan Goyvaerts
    The regular expressions reference on the Regular-Expressions.info website was completely overhauled with the big update of that site last month. In the past, the reference section consisted of two parts. One part was a summary of the regex features commonly found in Perl-style regex flavors with short descriptions and examples. This part of the reference ignored differences between regex flavors and omitted most features that don’t have wide support. The other part was a regular expression flavor comparison that listed many more regex features along with YES/no indicators for many regex flavors, but without any explanations of the features. When reworking the site, I wanted to make the reference section more detailed, with descriptions and examples of all the syntax supported by the flavors discussed on the site. Doing that resulted in a reference that lists many features that are only supported by a few regex flavors. For such a reference to be usable, it needs to indicate which flavors support each feature. My original design for the new reference table used two rows for each feature. The first row had 4 columns with a label, syntax, description, and example, similar to the old reference tables. The second row had 20 columns indicating which versions of which flavors support these features. While the double-row design allowed all the information to fit within the table without requiring horizontal scrolling, it made it more difficult to quickly scan the tables for the feature you’re looking for. To make the new reference tables easier to read, they now have only a single row for each feature. The first 4 columns are the same as before. The remaining two columns show which versions of two regular expression flavors support the feature. You can use the drop-down lists above the table to choose the flavors the table should indicate. The site uses cookies to allow the flavor choices to persist while you navigate the reference. The result of this latest update is that the new regex tables are now just as easy to read as the ten-year-old tables on the old site were, while still covering all the features big and small of all the flavors discussed on the site.

    Read the article

  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems: Fluid Dynamics Search Engine Since it's written in perl, it was a bit hacky to integrate with my PHP-based CMS. I basically had to file_get_contents the search results page. However, FDSE had the best search results. Google CSE Ugh, the search results SUCK. It can't find documents even using unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo. Sphider Again, bad search results. Unable to locate some phrases used in link text. Better results than Google CSE though. Shame on Google that a free PHP script has better search results than their paid application. IndexTank This one looked really promising. I got all set up with their PHP API client. But it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in. Unable to find any help on this subject. Update for IndexTank -- Got the above issue fixed, so this looks most promising so far. The site itself runs on php/mysql and FreeBSD, though this shouldn't matter for a web crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if this would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index off my primary domain. It just needs to work! Thanks, Hive Mind!

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
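    For flavor, here is a rough Perl sketch of the kind of glue-code generator the prototype describes; the stylized-comment format follows the example above, and everything else (argument handling, treating other global symbols as functions, output names) is an assumption for illustration, not the actual script that was used:

        #!/usr/bin/perl -w
        # Rough sketch: scan a mapfile for "# DATA(arch,...) symbol size" comments
        # and other global symbol declarations, then emit C glue defining dummy
        # data of the recorded size and empty functions, ready to compile into a stub.
        use strict;

        my ($mapfile, $arch) = @ARGV;
        die "usage: $0 mapfile arch\n" unless defined $arch;
        open my $map, '<', $mapfile or die "cannot open $mapfile: $!";

        my (%data, @funcs);
        while (my $line = <$map>) {
            if ($line =~ /^#\s*DATA\(([^)]*)\)\s+(\S+)\s+(\S+)/) {
                my ($arches, $sym, $size) = ($1, $2, $3);
                $data{$sym} = $size if grep { $_ eq $arch } split /,/, $arches;
            }
            elsif ($line =~ /^\s*(\w+)\s*;/) {
                push @funcs, $1 unless exists $data{$1};   # assume a function
            }
        }
        close $map;

        # Emit the C glue code.
        for my $sym (sort keys %data) {
            print 'char ' . $sym . '[' . $data{$sym} . "];\n";
        }
        print "void $_(void) { }\n" for @funcs;
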
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Au revoir, Python?

    - by GuySmiley
    I'm an ex-C++ programmer who's recently discovered (and fallen head-over-heels for) Python. I've taken some time to become reasonably fluent in Python, but I've encountered some troubling realities that may lead me to drop it as my language of choice, at least for the time being. I'm writing this in the hopes that someone out there can talk me out of it by convincing me that my concerns are easily circumvented within the bounds of the python universe. I picked up python while looking for a single flexible language that will allow me to build end-to-end working systems quickly on a variety of platforms. These include: - web services - mobile apps - cross-platform client apps for PC Development speed is more of a priority at the time being than execution speed. However, in order to improve performance over time without requiring major re-writes or architectural changes, I think it's imperative to be able to interface easily with Java. That way, I can use Java to optimize specific components as the application scales, without throwing away any code. As far as I can tell, my requirement for an enterprise-capable, platform-independent, fast language with a large developer base means it would have to be Java. .NET or C++ would not cut it due to their respective limitations. Also Java is clearly de rigueur for most mobile platforms. Unfortunately, tragically, there doesn't seem to be a good way to meet all these demands. Jython seems to be what I'm looking for in principle, except that it appears to be practically dead, with no one developing, supporting, or using it to any great degree. And also Jython seems too married to the Java libraries, as you can't use many of the CPython standard libraries with it, which has a major impact on the code you end up writing. The only other option that I can see is to use JPype wrapped in marshalling classes, which may work, although it seems like a pain and I wonder if it would be worth it in the long run. On the other hand, everything I'm looking for seems to be readily available by using JRuby, which seems to be much better supported. As things stand, I think this is my best option. I'm sad about this because I absolutely love everything about Python, including the syntax. The perl-like constructs in Ruby just feel like such a step backwards to me in terms of readability, but at the end of the day most of the benefits of python are available in Ruby as well. So I ask you - am I missing something here? Much of what I've said is based on what I've read, so is this summary of the current landscape accurate, or is there some magical solution to the Python-Java divide that will snuff these concerns and allow me to comfortably stay in my happy Python place?

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So, perhaps a better, general question would be: What kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like, in comparison to other popular languages? Background I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), possibly integrating a blog and all the social-networking goodies. Specific Problem I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass? Additional Details My experiences are in C/C++/Tcl/Perl/PHP/jQuery, and basic knowledge of Java/C#. I didn't study C.S. formally so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, while developing in parallel, for shortness of time). Worse comes to worst, I'd give up and do the standard LAMP gig, of course, not charging the client for wasted time. But I'm hoping to avoid the project altogether if it's gonna come down to that! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • rkhunter 1.4 different results than version before?

    - by dschinn1001
    with rkhunter version before ubuntu-update from 12.04 to 12.10 I had NOT these warnings like listed here: Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ Warning ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/tcpd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/curl [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ Warning ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ Warning ] /usr/bin/locate [ Warning ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ Warning ] /usr/bin/lynx [ Warning ] /usr/bin/mail [ Warning ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ Warning ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ Warning ] /usr/bin/rkhunter [ Warning ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ Warning ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/gawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ Warning ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ Warning ] /bin/egrep [ Warning ] /bin/fgrep [ Warning ] /bin/fuser [ Warning ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ Warning ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ 
Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ Warning ] /bin/dash [ Warning ] It seems that rkhunter 1.4 is somehow oversensitive about changed binaries? chkrootkit finds nothing and reports no warnings either.

    Read the article

  • My Package Version Number Appears Greater Yet apt-get Doesn't Select It

    - by nutznboltz
    Backstory: It was determined that when using lxc container VMs, the Nagios nrpe shutdown script, when run on the host of the containers, would kill the nrpe processes inside the containers. This was remediated by changing the script to use pidfiles instead of searching the process table for the nrpe process. Regrettably, start-stop-daemon is a C program that resulted from translating a Perl script, and it shows. There are far too many global variables in start-stop-daemon.c, and although there are some nice blocks of comments, there are far too few comments that explain the intent behind variable names such as "schedule" (the string "schedule" appears in many contexts). The manual page for start-stop-daemon strongly suggests that unless you use the "--retry" option, the start-stop-daemon program may return before the process it sent a signal to actually calls exit() and terminates; however, it doesn't actually state this in plain English. The obtuseness of start-stop-daemon is most likely the reason that the "fixed" version of the script includes a dubious comment indicating that sometimes the pid file has not been removed. I can easily see why someone would not realize that the --retry option was missing. This bug also causes failures when the script is given the "restart" option; the nrpe daemon will shut down but not start up again. Did I mention that since applying the update our nrpe servers started crashing over and over? Repairing this is why I am doing this work. I have been working on remediating the fix. You can see my current work in this PPA. Actual Question: The upstream version number of nagios-nrpe-server in lucid-updates is 2.12-4ubuntu1.10.04.1 My PPA uses this version number 2.12-4ubuntu1.10.04.1.1~ppa1~lucid1 I check the rules here and use this test program, and I am led to believe that the version number I use in my PPA is greater than the one in lucid-updates, yet when I ran: sudo add-apt-repository ppa:nutznboltz/nrpe-unbreak-lp-600941 sudo apt-get update sudo aptitude dist-upgrade The replacement package was not installed. I was able to install it using sudo aptitude install nagios-nrpe-server=2.12-4ubuntu1.10.04.1.1~ppa1~lucid1 Can anyone explain this behavior? Why didn't my version number appear greater to "aptitude dist-upgrade"? Thanks $ cat /etc/apt/preferences Package: * Pin: release a=lucid-backports Pin-Priority: 400 Package: * Pin: release a=lucid-security Pin-Priority: 990 Package: * Pin: release a=lucid-updates Pin-Priority: 900 Package: * Pin: release a=lucid-proposed Pin-Priority: 400 $ ls /etc/apt/preferences.d/ $ Should not make any difference as a PPA cannot be in any of those pockets. I went ahead and bumped the version number in the PPA to 2.12-4ubuntu1.10.04.2~ppa1~lucid1. I'll see if that makes a difference. I do notice that lintian complains: W: nagios-nrpe-server: debian-revision-not-well-formed 2.12-4ubuntu1.10.04.2~ppa1~lucid1
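    For what it's worth, one quick way to sanity-check how dpkg itself orders the two version strings (assuming dpkg is on the path) is to shell out to its --compare-versions test from a short Perl script:

        #!/usr/bin/perl -w
        # Check Debian version ordering using dpkg itself.
        # dpkg --compare-versions exits 0 when the stated relation holds.
        use strict;

        my $archive = '2.12-4ubuntu1.10.04.1';
        my $ppa     = '2.12-4ubuntu1.10.04.1.1~ppa1~lucid1';

        my $rc = system('dpkg', '--compare-versions', $ppa, 'gt', $archive);
        print $rc == 0
            ? "$ppa sorts higher than $archive\n"
            : "$ppa does NOT sort higher than $archive\n";

    Even if the PPA version does sort higher, it may be worth looking at apt-cache policy nagios-nrpe-server: with lucid-updates pinned at 900 and an unpinned PPA sitting at the default priority of 500, APT can still pick the lower-versioned, higher-priority candidate.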

    Read the article

  • Ati Radeon HD 3200 Graphics driver - Installation Problem

    - by samufi
    I have a fresh installation of Ubuntu 12.04 x86 and I am trying to install the proprietary driver for my "Radeon HD 3200 Graphics" video card. I know that there are already many threads about this topic, but I did not find a solution for my problem: For the installation I followed exactly these instructions: What is the correct way to install ATI Catalyst Video Drivers in 12.04 LTS? During the process I faced these problems: I executed ~$ debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases and got: debconf: DbDriver "passwords" warning: could not open /var/cache/debconf/passwords.dat: Keine Berechtigung Can't exec "libstdc++6": Datei oder Verzeichnis nicht gefunden at /usr/share/perl/5.14/IPC/Open3.pm line 186. open2: exec of libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases failed at /usr/share/perl5/Debconf/ConfModule.pm line 59 (translation of the German parts: "Keine Berechtigung" means "no permission"; "Datei oder Verzeichnis nicht gefunden" means "file or folder not found") Because I had no idea if it was a big issue, I just continued: ~$ sudo apt-get install ia32-libs There I got: Paketlisten werden gelesen... Fertig Abhängigkeitsbaum wird aufgebaut Statusinformationen werden eingelesen... Fertig Paket ia32-libs ist nicht verfügbar, wird aber von einem anderen Paket referenziert. Das kann heißen, dass das Paket fehlt, dass es abgelöst wurde oder nur aus einer anderen Quelle verfügbar ist. E: Paket »ia32-libs« hat keinen Installationskandidaten (Translation: [...] the package ia32-libs is not available but is referenced by another package [...] E: package »ia32-libs« has no installation candidate) Once more I went on. The next steps worked quite fine. But when I came to the point: ~$ sudo dpkg -i *.deb There I got a popup message, something like "there was a problem with a system application", but in the terminal no errors were reported, and the packages seemed to be installed. So now the ATI Catalyst Control Center (amdcccle) works, but fglrxinfo gave me X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 139 (ATIFGLEXTENSION) Minor opcode of failed request: 66 () Serial number of failed request: 13 Current serial number in output stream: 13 So there is something wrong. (Also, there is no way to enable those nice graphical features - the reason why I installed the proprietary driver.) Because I am working with a completely fresh installation, I don't know how to fix the problem. If anybody could help I would be very thankful! =)

    Read the article

  • Should I be an algorithm developer, or java web frameworks type developer?

    - by Derek
    So - as I see it, there are really two kinds of developers. Those that do frameworks, web services, pretty-making front ends, etc etc. Then there are developers that write the algorithms that solve the problem. That is, unless the problem is "display this raw data in some meaningful way." In that case, the framework/web developer guy might be doing both jobs. So my basic problem is this. I have been an algorithms kind of software developer for a few years now. I double majored in Math and Computer science, and I have a master's in systems engineering. I have never done any web-dev work, with the exception of a couple minor jobs, and some hobby level stuff. I have been job interviewing lately, and this is what happens: Job is listed as "programmer- 5 years of experience with the following: C/C++, Java, Perl, Ruby, ant, blah blah blah" Recruiter calls me, says they want me to come in for interview In the interview, find out they have some webservices development, blah blah blah When asked in the interview, talk about my experience doing algorithms, optimization, blah blah..but very willing to learn new languages, frameworks, etc Get a call back saying "we didn't think you were a fit for the job you interviewed with, but our algorithm team got wind of you and wants to bring you on" This has happened to me a couple times now - see a vague-ish job description looking for a "programmer" Go in, find out they are doing some sort of web-based tool, maybe with some hardcore algorithms running in the background. Interview with people for the web-based tool, but get an offer from the algorithms people. So the question is - which job is the better job? I basically just want to get a wide breadth of experience at this level of my career, but are algorithm developers so much in demand? Even more so than all these supposed hot in demand web developer guys? Will I be ok in the long run if I go into the niche of math based algorithm development, with just little to no, or hobby level, web-dev experience? I basically just don't want to pigeonhole myself this early. My salary is already starting to get pretty high - and I can see a company later on saying "we really need a web developer, but we'll hire this 50k/year college guy, instead of this 100k/year experienced algorithm guy" Cliffs notes: I have been doing algorithm development. I consider myself to be a "good programmer." I would have no problem picking up web technologies and those sorts of frameworks. During job interviews, I keep getting "we think you've got a good skillset - talk to our algorithm team" instead of wanting me to learn new skills on the job to do their web services or whatever other new technology they are doing. Edit: Whenever I am talking about algorithm development here - I am talking about the code that produces the answer. Typically I think of more math-based algorithms: solving a financial problem, solving a finite element method, image processing, etc

    Read the article

  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them: They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors. It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as "experimental features"). This isn't the case quite as much, but there's still always the question as to how well it's supported. Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance. I've gotten by without any need for them at all... sometimes I think that they're extraneous. Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including: Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient, but clearer, way as long as the difference is negligible or unimportant to the application as a whole. Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability) "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps (or, alternatively, are they the best tool for most cases they're used for today)? My question is this: Is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves won't be enough to convince me.
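    For a concrete point of comparison, here is a small Perl sketch of the kind of check where lookaheads usually come up (several independent conditions on one string), written both ways; it is only an illustration of the trade-off, not an argument for either style:

        #!/usr/bin/perl -w
        # Same check two ways: "at least 8 chars, contains a digit and an uppercase letter".
        use strict;

        my $candidate = 'Secret123';

        # Single pattern using lookahead assertions (zero-width checks applied at the start).
        my $ok_lookahead = $candidate =~ /^(?=.*\d)(?=.*[A-Z]).{8,}$/;

        # Equivalent logic as three separate, arguably more readable, tests.
        my $ok_split = length($candidate) >= 8
                    && $candidate =~ /\d/
                    && $candidate =~ /[A-Z]/;

        print "lookahead: ", ($ok_lookahead ? "pass" : "fail"), "\n";
        print "split:     ", ($ok_split     ? "pass" : "fail"), "\n";
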

    Read the article

  • Working with Windows and Unix

    - by user554629
    Beware of new line characters. One of the most frequent issues we encounter in Tech Support is the corruption of files that are transferred between Windows and Unix. The transfer can occur at any stage, but ultimately involves a transfer of a file using an ftp client that is running on Windows; it could be ftp or filezilla. Windows uses two characters to mark the end of a line in a text file (CR/LF), carriage return plus linefeed. Unix uses a single character (LF), the linefeed. In all situations, it is best to use binary mode transfer for all files, including ascii text files. Common problems: Upload a core file from unix to windows using ftp in ascii mode. The file is going to be larger on Windows than Unix. ftp doesn't know if this is a text file with real line-ends; it takes every ascii LF and transmits two ascii characters, CR/LF. The core file, tar file, library ... will be corrupted when transferred to Oracle. Download a shell script to Windows, and transfer it to Unix using ftp. If the file is edited on Windows, the unix script line-end chars will be doubled. Unix doesn't know how to handle that, and will likely tell you the script is not executable. Why? The first line of a shell script (called the "sh-bang") identifies the command interpreter the unix shell should use for this script. Common examples: #!/bin/sh #!/bin/ksh #!/bin/bash #!/bin/perl #!/bin/sh^M    # will not be understood. #!/bin/env ksh # special syntax. Find ksh and run it. dos2unix is a common utility found on most unix platforms that repairs the issue of Windows line-end characters in unix script files. I've written my own flavor of this utility for use in Tech Support and build environments, that is a bit easier to use, and has some nice side-effects. It accepts a list of files: dos2unix *.sh It repairs the file in-place, so it doesn't generate a new file you have to name, and it retains the same timestamp; it is the encoding that changed, not the file content. Here are the versions of dos2unix for each of the environments we work in. They are compressed with gzip, to avoid the ftp ascii transfer trap, and because I am quite limited in the number of files I can upload to this blog. AIX Linux Solaris sparc  Windows 
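    As an illustration of the idea (not the utility attached above), a minimal in-place dos2unix in Perl that takes a list of files and keeps the original timestamp might look like this:

        #!/usr/bin/perl -w
        # Minimal dos2unix: strip CR from CR/LF line ends, in place, keeping the
        # original modification time. Usage: dos2unix.pl file1.sh file2.sh ...
        use strict;

        for my $file (@ARGV) {
            my ($atime, $mtime) = (stat $file)[8, 9];

            open my $in, '<', $file or do { warn "skip $file: $!\n"; next };
            binmode $in;
            my $text = do { local $/; <$in> };   # slurp the whole file
            close $in;

            my $changes = ($text =~ s/\r\n/\n/g);
            next unless $changes;                # nothing to repair

            open my $out, '>', $file or do { warn "cannot rewrite $file: $!\n"; next };
            binmode $out;
            print $out $text;
            close $out;

            utime $atime, $mtime, $file;         # keep the original timestamp
            print "$file: fixed $changes line ends\n";
        }
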

    Read the article

  • Imperative vs. component based programming [closed]

    - by AlexW
    I've been thinking about how programming, and more specifically the teaching of programming, is advocated amongst the community (online). Often I've heard that Ruby and RoR is an ideal platform for learning to program. I completely disagree... RoR and Ruby are based on the application of the component based paradigm, which means they are ideal for rapid application development. This is much like the MVC model in PHP and ASP.NET. But learning a proper imperative language like Java or C/C++ (or even Perl and PHP) is the only way for a new programmer to explore logic itself, and not get too bogged down in architectural concerns like the need for separation of concerns, and the preference for components. Maybe it's a personal preference thing. I rather think that the most interesting aspects of programming are the procedural bits of code I write that actually do stuff, rather than the project planning and modelling that comes about from fully object oriented engineering or simply using the MVC model. I know this may sound confused to some of you. I feel strongly though that the best way for programming to be taught is through imperative and procedural methods. Architectural (component) methods come later, if at all. After all, none of the amazing algorithms that exist were based on OOP practice! It's all procedural code when it comes to the 'magic'. OOP is useful in creating products and utilities. Algorithms are what makes things happen, and move data around, and so imperative (and/or procedural) code is what matters most. When I see programmers recommending Ruby on Rails to newbie developers, I think it's just so wrong. Just because you write less code with Ruby does not make it easier to do! It's the opposite... you have to know loads more to appreciate its succinct nature. New coders who really want to understand the nuts and bolts of coding need to go away and figure out writing methods/functions (i.e. imperative programming) and working in procedural style, in order to grasp the fundamentals first, before looking into architectural ways of working. So, my question is: should Ruby ever be recommended as a first language? I think no (obviously)... what arguments are there for it?

    Read the article

  • Watchguard Firewall WebBlocker Regular Expression for Multiple Domains?

    - by Eric
    I'm pretty sure this is really a regex question, so you can skip to REGEX QUESTION if you want to skip the background. Our primary firewall is a Watchguard X750e running Fireware XTM v11.2. We're using WebBlocker to block most of the categories, and I'm allowing needed sites as they arise. Some sites are simple to add as exceptions, like Pandora radio. That one is just a pattern matched exception with "pandora.com/" as the pattern. All traffic from anywhere on pandora.com is allowed. I'm running into trouble on more sophisticated domains that reference content off of their base domains. We'll take GrooveShark as a sample. If you go to http://grooveshark.com/ and view page source, you'll see hrefs referring to gs-cdn.net as well as grooveshark.com. So adding a WebBlocker exception to grooveshark.com/ is not effective, and I have to add a second rule allowing gs-cdn.net/ as well. I see that WebBlocker allows regex rules, so what I'd like to do in situations like this is create a single regex rule that allows traffic to all the needed domains. REGEX QUESTION: I'd like to try a regex that matches grooveshark.com/ and gs-cdn.net/. If anybody can help me write that regex, I'd appreciate it. Here is what is in the WatchGuard documentation from that section: Regular expression Regular expression matches use a Perl-compatible regular expression to make a match. For example, .[onc][eor][gtm] matches .org, .net, .com, or any other three-letter combination of one letter from each bracket, in order. Be sure to drop the leading “http://” Supports wild cards used in shell script. For example, the expression “(www)?.watchguard.[com|org|net]” will match URL paths including www.watchguard.com, www.watchguard.net, and www.watchguard.org. Thanks all!
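    Assuming WebBlocker's Perl-compatible syntax and that the leading http:// is dropped as the documentation says, a single pattern along these lines should cover both domains; the short Perl test below only illustrates how the pattern behaves and makes no promises about exactly how WebBlocker anchors its matches:

        #!/usr/bin/perl -w
        # Quick test of one pattern meant to cover both domains (and their subdomains).
        # The anchoring behaviour WebBlocker applies is an assumption; adjust as needed.
        use strict;

        my $pattern = qr{^([\w.-]+\.)?(grooveshark\.com|gs-cdn\.net)/};

        for my $url ('grooveshark.com/some/page',
                     'www.grooveshark.com/',
                     'beta.gs-cdn.net/static/app.js',
                     'example.com/grooveshark.com/') {
            printf "%-40s %s\n", $url, ($url =~ $pattern ? 'matches' : 'no match');
        }
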

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and AWS Tools for Windows PowerShell, but they don't seem to include a zone file import option like the AWS Route53 GUI does. The cli53 utility on the other hand does, but is written in Python and appears to have a series of prerequisites to get going, which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and that one doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route53 CreateHostedZoneRequest XML format and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported I'll be looking at running a script to validate that what has been created in Route53 matches the existing nameserver prior to updating each domain's nameserver records - I was planning on whipping something up using the new PS4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post but ServerFault won't allow me to post more than 2 links as a new member; for this same reason I also can't comment on Vasili's example in the other linked thread.)
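    If it helps while evaluating the conversion route, a very rough Perl pass over a zone file (ignoring $ORIGIN/$TTL handling, multi-line SOA records, and other real-world details) could pull out the simple records before conversion; this is only a sketch, not the conversion tool mentioned above:

        #!/usr/bin/perl -w
        # Very rough zone file pass: collect simple "name [ttl] IN type data" records
        # grouped by record type. Usage: perl zonedump.pl example.com.zone
        use strict;

        my %records;
        while (my $line = <>) {
            $line =~ s/;.*//;                              # strip comments
            next if $line =~ /^\s*$/ || $line =~ /^\$/;    # skip blanks and directives
            if ($line =~ /^(\S+)\s+(?:(\d+)\s+)?IN\s+(A|AAAA|CNAME|MX|TXT|NS)\s+(.+?)\s*$/) {
                my ($name, $ttl, $type, $data) = ($1, $2 || 3600, $3, $4);
                push @{ $records{$type} }, "$name $ttl $data";
            }
        }

        for my $type (sort keys %records) {
            print "$type records:\n";
            print "  $_\n" for @{ $records{$type} };
        }
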

    Read the article

  • MySQL won't start or won't installed

    - by Owen
    Hi there, I'm trying to get a local LAMP setup on my Ubuntu desktop. I've successfully got PHP installed, but I'm having trouble with MySQL. If PHP tries to connect to MySQL I get this error: Warning: mysql_connect() [function.mysql-connect]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) in /var/www/testing.php on line 3 Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) If I try via the command line I get much the same error: owen@desktop:~$ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (13) Weirdly, "/var/run/mysqld" does not exist. Running a whereis command I get the following: owen@desktop:~$ whereis mysqld.sock mysqld: /usr/sbin/mysqld /usr/share/man/man8/mysqld.8.gz So is MySQL even installed? Well, according to dpkg: owen@desktop:~$ dpkg -l | grep mysql ii libapache2-mod-auth-mysql 4.3.9-13ubuntu1 Apache 2 module for MySQL authentication ii libdbd-mysql-perl 4.016-1 Perl5 database interface to the MySQL database ii libmysqlclient15off 5.1.30really5.0.83-0ubuntu3 MySQL database client library ii libmysqlclient16 5.1.49-1ubuntu8.1 MySQL database client library ii mysql-admin 5.0r14+openSUSE-2.1 GUI tool for intuitive MySQL administration ii mysql-client-5.1 5.1.49-1ubuntu8.1 MySQL database client binaries ii mysql-client-core-5.1 5.1.49-1ubuntu8.1 MySQL database core client binaries ii mysql-common 5.1.49-1ubuntu8.1 MySQL database common files, e.g. /etc/mysql/my.cnf ii mysql-gui-tools-common 5.0r14+openSUSE-2.1 Architecture independent files for MySQL GUI Tools ii mysql-query-browser 5.0r14+openSUSE-2.1 Official GUI tool to query MySQL database ii mysql-server 5.1.49-1ubuntu8.1 MySQL database server (metapackage depending on the latest version) ii mysql-server-5.1 5.1.49-1ubuntu8.1 MySQL database server binaries and system database setup ii mysql-server-core-5.0 5.1.30really5.0.83-0ubuntu3 MySQL database core server files ii mysql-server-core-5.1 5.1.49-1ubuntu8.1 MySQL database server binaries ii php5-mysql Can someone please help? I'm really confused about what to do next. I'm not a Linux expert at all; most of the commands I've run were found on different blogs and help forums.
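
    The error itself just means nothing is listening on the MySQL socket, i.e. mysqld is not actually running even though the packages are installed. A few hedged first checks, assuming a stock Ubuntu mysql-server install (paths are the Ubuntu defaults):

        sudo service mysql status                  # is the daemon running at all?
        sudo service mysql start                   # try to start it and watch for errors
        sudo tail -n 50 /var/log/mysql/error.log   # startup failures are usually logged here
        ls -ld /var/run/mysqld                     # the socket directory should exist and be owned by mysql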

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance, I am a JS programmer given the task of doing DB replication using Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The set-up is: Lubuntu Linux Two databases test_master and test_slave, using PostgreSQL Each DB has a table named test, containing 2 columns: id (PK) and test (int) I use pgAdmin3 I have already added them to Bucardo's list of databases and added all tables. Table: public.test DB: test_slave PK: id (int4) Table: public.test DB: test_master PK: id (int4) As you see, because the DBs are identical, even the schema names are identical. So when I do: bucardo_ctl add herd sample_herd public.test OK, so it got added to the herd. But this command gets confused about which database public.test comes from. So when I add a sync: $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30. CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362. What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as source, how do I differentiate?
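
    The error comes from Bucardo's validate_sync check: the message suggests the herd picked up the public.test that lives in test_slave, so the sync's source and target collapse into the same database. A hedged sketch of pinning the source-side table explicitly when building the herd (the db=/herd= arguments are an assumption -- confirm them against bucardo_ctl's own "help add table" output):

        # re-add the table tied explicitly to the master database, then recreate the sync
        bucardo_ctl add table public.test db=test_master herd=sample_herd
        bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy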

    Read the article

  • XAMPP - Apache service stops running after few seconds.

    - by Fábio Antunes
    Hello, I have this big problem with my XAMPP server: for some reason the Apache service stops running a few seconds after it has been started, I have no idea what the problem is, and the error logs don't say much about it. [Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ... [Fri May 07 01:09:32 2010] [notice] Digest: done [Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations [Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03 [Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process. [Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting. [Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36 identificador é inválido (pt_PT) = identifier is invalid. Note: No other application is using the Apache port. I have made some changes to the httpd.conf file, but it had worked well for a long time: added some virtual hosts, enabled xdebug. Has this happened to anyone who could tell me what the problem is? Thanks for your time.
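
    Since the parent process dies while trying to spawn its child, a couple of quick checks are worth running from a console (a sketch for a default XAMPP-on-Windows layout -- the C:\xampp install path is an assumption):

        :: confirm nothing else really holds the HTTP/HTTPS ports
        netstat -ano | findstr :80
        netstat -ano | findstr :443
        :: syntax-check httpd.conf after the recent virtual-host edits, then run a single
        :: foreground process so the real startup error prints to the console
        C:\xampp\apache\bin\httpd.exe -t
        C:\xampp\apache\bin\httpd.exe -X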

    Read the article

  • Need help in setting lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) 
#### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 When I go to browser and hit 'http://127.0.0.1', I get link not found. Any idea?
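
    With that stock conf on Ubuntu, a common first stumble is simply that the document root and log directory it points at do not exist yet, so there is nothing for lighttpd to serve. A minimal sanity check, using the paths from the conf above:

        sudo mkdir -p /srv/www/htdocs /var/log/lighttpd
        echo '<h1>lighttpd works</h1>' | sudo tee /srv/www/htdocs/index.html
        sudo ./lighttpd -t -f lighttpd.conf     # -t only checks the configuration syntax
        sudo ./lighttpd -f lighttpd.conf
        curl -i http://127.0.0.1/               # should now return the test page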

    Read the article

  • Delay before download starts when serving files using nginx

    - by glumbo
    I am currently using nginx to serve downloads off my website. Users sometimes need to wait about 5 seconds before their download starts after clicking a download link. I'm not sure if I need to start using raid 10 (I'm currently using raid 50) or if this is a problem with my nginx configuration. I am also on a 1gbit line but download sometimes go as low as 10kB/s. My server: Dual Xeon 5620 CPU, 12x2TB drives with 8GB ram. This is my nginx.conf #user nobody; worker_processes 12; worker_rlimit_nofile 10240; worker_rlimit_sigpending 32768; error_log logs/error.log crit; #pid logs/nginx.pid; events { worker_connections 2048; } http { include mime.types; default_type application/octet-stream; access_log off; limit_conn_log_level info; log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status'; #sendfile on; #tcp_nopush on; reset_timedout_connection on; server_tokens off; autoindex off; keepalive_timeout 0; #keepalive_timeout 65; limit_zone one $binary_remote_addr 10m; perl_modules perl; perl_require download.pm;
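
    One hedged place to start before touching the RAID layout: in the conf above, sendfile and tcp_nopush are commented out, so every download is copied through userspace buffers. Something along these lines inside the http block is a common first experiment for large static downloads (the values are illustrative, not tuned for this particular box):

        sendfile        on;
        tcp_nopush      on;
        output_buffers  2 512k;
        # for very large files on spinning disks, async/direct I/O can keep workers from stalling:
        # aio       on;
        # directio  4m;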

    Read the article

  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N Firmware: FTOS 8.4.2.6 Problem: We're trying to PXE boot some servers that are connected via port-channel interfaces with LACP. Current Work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to turn up the port-channel interfaces after the server is built. Details: Is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or larger chassis-based Force10? I'm wondering if Native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port channel is in. I will confirm this next (I think it's the only thing I haven't tried) Juniper Example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/ Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning-tree (rtsp in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone that has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it will only be introduced on other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation type features are only being supported on the newer switches.

    Read the article

  • How \Deleted flag can be unset for all mails in cyrus-imapd mailbox?

    - by Sachin Divekar
    I have a 5GB mailbox which I moved using imapsync. But somehow I messed up the --delete/--delete2 options and ended up with almost all the messages having the \Deleted flag set. I do not have delayed expunge enabled, so I cannot use the unexpunge utility. I am using cyrus-imapd v2.3.7. Using cyrus-imapd's debugging feature I found out that the email client (Roundcube in my case) fires the following IMAP command to unset it. UID STORE 179 -FLAGS.SILENT (\Deleted) I don't know if somehow I can fire this command for all the mails. Is there any way I can unset the \Deleted flag for all the mails in the mailbox? UPDATE: Using @geekosaur's tip of specifying a range of message-ids in the above command, I could solve it for one mailbox under INBOX, like INBOX.folder1. Is there any way I can do it for multiple mailboxes under INBOX recursively? I am now working on solving it with a script, maybe using Perl's IMAP-related modules. But I still need to solve it asap, so inputs are welcome.
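
    For a handful of folders, the raw IMAP session itself is enough -- the sketch below (folder names are placeholders) just repeats the SELECT/STORE pair per mailbox, and can be typed into "openssl s_client -connect server:993 -crlf" or a plain telnet to port 143:

        a1 LOGIN username password
        a2 LIST "" "INBOX*"
        a3 SELECT INBOX.folder1
        a4 UID STORE 1:* -FLAGS.SILENT (\Deleted)
        a5 SELECT INBOX.folder2
        a6 UID STORE 1:* -FLAGS.SILENT (\Deleted)
        a7 LOGOUT

    Looping that over the LIST output is exactly the kind of thing an IMAP library such as Perl's Mail::IMAPClient automates.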

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to postprocess rpm -q output so it reports the total size of all subpackages matching a regexp, e.g. see the aspell* example below. (Short of scripting it with Python/Perl/awk, which is the next step.) (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.) My reporting command looks like [1]: cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size and here's just one example where I'd like to see rpm's total for all aspell* subpackages: root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell` 1040974 aspell 16417158 aspell-es 4862676 aspell-sv 4334067 aspell-en 23329116 aspell-fr 13075210 aspell-de 39342410 aspell-it 8655094 aspell-ca 62267635 aspell-cs 16714477 aspell-da 17579484 aspell-el 10625591 aspell-no 60719347 aspell-pl 12907088 aspell-pt 8007946 aspell-nl 9425163 aspell-cy Three extra nice-to-have things: list the dependencies/depending packages of each group (so I can figure out the uninstall order); also, if you could group them by package group, that would be totally neat; and human-readable size units like 'M'/'G' (like ls -h does), which can be done with regexp and rounding on the size field. Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes. Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command. [1] where unwanted.txt is a text file of unnecessary packages obtained by diffing the output of: yum list installed | sed -e 's/\..*//g' > installed.txt diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>' and centos4_minimal.txt came from the Google doc given by that helpful blogger.
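
    A minimal sketch of the grouped-total step (rpm -qa accepts glob patterns directly, which avoids the grep; the awk line is the only real addition):

        rpm -qa --qf '%{SIZE} %{NAME}\n' 'aspell*' \
          | sort -rn \
          | awk '{total += $1; print} END {printf "%d bytes total (~%.1f MB)\n", total, total/1048576}'

    The same pipeline works for any pattern ('mysql*', 'gnome*', ...), and adding query-format tags like %{GROUP} to the format string gives a hook for the per-package-group report.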

    Read the article

  • Completely automated DVD insert-rip-compress-eject workflow

    - by Kevin L.
    (Partially inspired by this question.) Background: I have a PC hidden away behind an HD LCD in a custom-built entertainment center. The only visible part of the PC is an external DVD drive, mounted above the Wii. The PC happens to have Windows XP on it; Hackintoshing and Linux might be possible, but I've had issues with drivers for the sound card before. Let's just assume that OS X and Linux are a no-go unless they provide a truly awesome and simple solution for this particular problem. Goal: I would like to have a completely automated workflow for ripping DVDs. Something like this: Push the eject button on the DVD drive, insert the DVD. PC recognizes that this is a video DVD (as opposed to data). PC rips DVD to hard drive. PC finishes ripping, and ejects the DVD tray. PC compresses DVD image into some format that an Xbox 360 can read. PC copies finished compressed video file to a particular folder, so that it can be read into a WMP11 library and seamlessly played by the Xbox 360. PC cleans up all temporary files. Done. The impetus to have this be completely automated is that I’ll never need to switch the TV to the PC’s input and fiddle with the wireless keyboard. That’s just needless user intervention. The UI doesn’t have to be pretty. Nor do I care about speed. And I can probably bridge several of the gaps with some creative Perl use. But it seems likely that many (or all) of the parts should already exist. Any thoughts?
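
    The rip/compress/eject middle of that workflow already exists as off-the-shelf CLI tools; a heavily hedged sketch of that piece on the Windows XP box, where HandBrakeCLI is one candidate ripper/encoder and the drive letter, output folder, and the small command-line eject helper are all assumptions:

        :: rip and compress straight from the DVD drive into an Xbox-friendly MP4
        HandBrakeCLI -i D:\ -o "C:\Library\ripped-movie.m4v" --preset "Normal"
        :: Windows has no built-in eject command, so a helper utility is assumed here
        eject.exe D:

    The detect-disc-inserted trigger would then be a scheduled task or a small polling script watching the drive, which is where the creative Perl comes in.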

    Read the article

  • Linux/Unix MTA with the smartest queue?

    - by threecheeseopera
    I am looking for an MTA that will allow me (a script, really) to proactively manage its send queue in response to status codes returned by the remote servers I am delivering to. Basically, for each mail sent I would like to be able to react to the SMTP reply code returned by the remote server, e.g. '250 OK', or to any error conditions like connection timeouts. Additionally, I would like to be able to manage the send queue moving forward based on this information, e.g. 'example.com has timed out the last 5 connection attempts, so no longer queue mail for recipients @example.com'. I am currently using Postfix and Perl to parse its logs for this information, but I am playing a game of catch-up that is prone to errors (out-of-order log entries etc.) and it's starting to get messy (some really ugly regexes ;). I really don't want to reinvent the wheel and use some language's SMTP library; I would prefer to use a proven/fast/reliable MTA. I am however open to suggestions if what I need just isn't possible. Thanks for your help!

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far: PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions. CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother about removing gcc et al. at all. There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system. OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time?

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >