Search Results

Search found 19664 results on 787 pages for 'python for ever'.

  • Is Perl's flip-flop operator bugged? It has global state, how can I reset it?

    - by Evan Carroll
    I'm dismayed. Ok, so this was probably the most fun Perl bug I've ever found. Even today I'm learning new stuff about Perl. Essentially, the flip-flop operator .. (which returns false until the left-hand side returns true, and then true until the right-hand side returns false) keeps global state, or that is what I assume. My question is: can I reset it? (Perhaps this would be a good addition to the Perl 4-esque, hardly-ever-used reset().) Or is there no way to use this operator safely? I also don't see this (the global context bit) documented anywhere in perldoc perlop. Is this a mistake?

    Code:

        use feature ':5.10';
        use strict;
        use warnings;

        sub search {
            my $arr = shift;
            grep { !( /start/ .. /never_exist/ ) } @$arr;
        }

        my @foo = qw/foo bar start baz end quz quz/;
        my @bar = qw/foo bar start baz end quz quz/;

        say 'first shot - foo';
        say for search \@foo;
        say 'second shot - bar';
        say for search \@bar;

    Spoiler:

        $ perl test.pl
        first shot
        foo
        bar
        second shot
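
    For illustration, the same start/stop filtering can be written with the flip-flop state held explicitly in a closure, so every call starts fresh; this is a rough Python sketch that only approximates Perl's ".." semantics, not the author's Perl:

        def make_flip_flop(start, stop):
            # False until start(item) holds, then True until stop(item)
            # holds (tested on the same item, as Perl's ".." does).
            on = False
            def predicate(item):
                nonlocal on
                if not on:
                    if start(item):
                        on = not stop(item)  # ".." checks the right side immediately
                        return True
                    return False
                if stop(item):
                    on = False
                return True
            return predicate

        def search(items):
            # A fresh predicate per call: no state leaks between searches.
            ff = make_flip_flop(lambda s: 'start' in s, lambda s: 'never_exist' in s)
            return [s for s in items if not ff(s)]

        print(search(['foo', 'bar', 'start', 'baz', 'end', 'quz', 'quz']))
        print(search(['foo', 'bar', 'start', 'baz', 'end', 'quz', 'quz']))

    Both calls print ['foo', 'bar'], which is the behavior the second shot above fails to deliver.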

  • VS2008 - Find and Replace - Searches too many files.

    - by Pam Bullock
    I've used VS2008 a lot and have never had this problem. However, I started a new job and am using a new machine, and ever since I've gotten here the VS Find feature has been acting funny. I first noticed it when I did a Replace All for "All Open Files": the project wouldn't build, because the values had actually been replaced in other files within the solution that were not open (and didn't even open after I pressed Replace All). I have found that I can never use Replace All on this machine, because I never know what it is going to do.

    Even if I just do a Find on "Current Document", once it's done with the document and I should get the message that says "No more matches found", it actually OPENS another random file from my solution where there is a match and keeps on going. It never seems to make any difference what "Look in" option I've chosen. My coworker has an install off the same disk and claims not to be experiencing this.

    We're in the middle of a stressful, huge project with a close deadline, so I know my boss won't let me do a reinstall. Has anyone else ever had this happen? Anyone know a fix?? Thanks, Pam

  • 2 Mutually exclusive RadioButton "Lists"

    - by user72603
    I think this has to be THE most frustrating thing I've ever done in web forms. Yet one would think it would be the easiest of all things in the world to do. That is this: I need 2 separate lists of radio buttons on my .aspx page. One set allows a customer to select an option. The other set does also, but for a different purpose. But only one set can have a selected radio button.

    Ok, I've tried this using 2 ASP.NET RadioButtonList controls on the same page. Got around the nasty bug with GroupName (ASP.NET assigns the control's UniqueID, which prevents the GroupName from ever working: 2 RadioButtonLists can't have the same group name for all their radio buttons, because each RadioButtonList has a different UniqueID, and the bug assigns the unique ID as the name attribute when the buttons are rendered. Since the name sets are different, they are not mutually exclusive). Anyway, so I created that custom RadioButtonList control and fixed that GroupName problem. But what ended up happening is, when I put 2 instances of my new custom RadioButtonList control on my .aspx page, all was swell until I noticed that every time I checked radiobuttonlist1.SelectedValue or radiobuttonlist2.SelectedValue (it did not matter which I was checking), the value always came back string.Empty, and I was not able to figure out why (see http://forums.asp.net/t/1401117.aspx).

    Ok, onto the third try tonight and into the break of dawn (no sleep). I tried to instead scrap using 2 custom RadioButtonLists altogether, because of that string.Empty issue, and spit out 2 sets of radio buttons via 2 ASP.NET repeaters with a standard HTML input tag inside. Got that working. But the 2 lists still are not mutually exclusive. I can select a value in the first set of radio buttons from repeater1, and the same goes for repeater2. I cannot for the life of me get the "sets" of radio buttons to be mutually exclusive.

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now.

    Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub.

    What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)

  • svnserve not strictly required?

    - by Kev
    I was reading the Red Bean book and noticed this paragraph:

        Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called “Supporting Multiple Repository Access Methods”). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs—from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.

    I realized that, since I'm the only one accessing the repository, ever, none of these caveats seem to apply. Can I safely shut down svnserve, then, and only ever have to worry about upgrading my TortoiseSVN client, not both the client and the server, whenever there's a new version out? (I've tried it already--just needed to use the Relocate feature to switch from svn:// to file://--but I wanted to make sure something wouldn't be sneaking up on me if I left it this way.)

  • How important is it to use SSL on every page of your website?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work with HTTP, but after you log in, it has to remain in HTTPS to prevent session hijacking, doesn't it? Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content". I don't think we can afford Google Maps Premier right now to get their secure service.

    It's sort of an auction site, so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal, though, so I'm not saving any sort of credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that.

    What do you guys suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages, like wherever you enter your password, and that's it? That's what our competition seems to do.

  • How can I test a CRON job with PHP?

    - by alex
    This is the first time I've ever used a cron job. I'm using it to parse external data that is automatically FTP'd to a subdirectory on our site. I have created a controller and model which handle the data. I can access the URL fine in my browser and it works (however I will be restricting this soon). My problem is, how can I test if it's working? I've added this to my controller for a quick and dirty log:

        $file = 'test.txt';
        $contents = '';
        if (file_exists($file)) {
            $contents = file_get_contents($file);
        }
        $contents .= date('m-d-Y') . ' --- ' . PHP_SAPI . "\n\n";
        file_put_contents($file, $contents);

    But so far I've only got requests logged from myself from the browser, despite having my cron job running every minute:

        03-18-2010 --- cgi-fcgi
        03-18-2010 --- cgi-fcgi

    I've set it up using cPanel with the command index.php properties/update/ (the second portion is what I use to access the page in my browser). So how can I test that this is working properly, and have I stuffed anything up? Note: I'm using Kohana 3. Many thanks

  • PHP import functions

    - by ninuhadida
    Hi, I'm trying to find the best pragmatic approach to import functions on the fly... let me explain. Say I have a directory called functions which has these files:

        array_select.func.php
        stat_mediam.func.php
        stat_mean.func.php
        .....

    I would like to load each individual file (which has a function defined inside) and use it just like an internal PHP function, such as array_pop(), array_shift(), etc. Once I stumbled on a tutorial (which I can't find again now) that compiled user-defined functions as part of a PHP installation. Although that's not a very good solution, because on shared/reseller hosting you can't recompile the PHP installation.

    I don't want to have conflicts with future versions of PHP / other extensions; i.e. if a function named X by me is suddenly part of the internal PHP functions (even though it might not have the same functionality per se), I don't want PHP to throw a fatal error because of this and fail miserably.

    So the best method that I can think of is to check if a function is defined, using function_exists(); if so, throw a notice so that it's easy to track in the log files, otherwise define the function. However, that will probably translate to having a lot of include/require statements in other files where I need such a function, which I don't really like. Or possibly, read the directory and loop over each *.func.php file and include_once it. Though I find this a bit ugly.

    The question is, have you ever stumbled upon some source code which handled such a case? How was it implemented? Did you ever do something similar? I need as many ideas as possible! :)
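
    For comparison, the same scan-the-directory-and-register pattern can be written in Python with importlib. This is only a sketch; the functions/ directory and the *.func.py naming mirror the PHP layout above and are assumptions:

        import importlib.util
        import pathlib

        def load_functions(directory, registry):
            # Import every *.func.py file and register its public callables,
            # skipping names already taken -- the function_exists() guard,
            # in Python form.
            for path in sorted(pathlib.Path(directory).glob('*.func.py')):
                spec = importlib.util.spec_from_file_location(path.stem, path)
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
                for name in dir(module):
                    obj = getattr(module, name)
                    if callable(obj) and not name.startswith('_'):
                        if name in registry:
                            print('notice: %s already defined, skipping' % name)
                        else:
                            registry[name] = obj

        funcs = {}
        load_functions('functions', funcs)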

  • Redirecting from an update action to the referrer of the edit

    - by Mark Westling
    My Rails 2.3 application has a User model and the usual controller actions. The edit form can be reached two ways: when a user edits his own profile from the home page, or when an admin user edits someone else's profile from the users collection. What I'd like to do is have the update action redirect back to the referrer of the edit action, not that of the update action. If I do a simple redirect_to(:back) within update, it goes back to the edit form -- not good.

    One solution is to forget entirely about referrers and redirect based on the current_user and the updated user: if they're the same, go back to the home page; else, go to the users collection page. This will break if I ever add a third path to the edit form. It's doubtful I'll ever do this, but I'd prefer a solution that's not so brittle. Another solution is to store the referrer of the edit form in a hidden field and then redirect to this value from inside the update action. This doesn't feel quite right, though I can't explain why.

    Are there any better approaches? Or should I stop worrying and go with one of the two I've mentioned?

  • Core Data: Overkill for simple, static UITableView-based iPhone App?

    - by David Foster
    Hello! I have a rather simple iPhone app consisting of numerous views containing a single, grouped table view. These views are held together in navigation controllers which are grouped in a tab bar. Simple stuff. My table views do little more than list text (like "Dog", "Cat" and "Weasel") and this data is being served from a collection of plists. It's perhaps worth mentioning too that these tables are 'static' in the sense that their data is predetermined and will only ever be amended—and if so, very rarely indeed—by the developer (in this case, moi). This rudimentary approach has reached its limits though, and I think I'm going to need something a bit more relational. I have worked a tad with Core Data in the past, but only with apps whose data is determined by user input. I have four closely related questions:

    1. Is Core Data overkill for an app consisting mainly of a selection of simple table views?
    2. Do you recommend using Core Data to manage data which is predetermined and extremely unlikely to ever change?
    3. Can one lock Core Data down so that its data can't change, thereby relinquishing my responsibility as the developer to handle the editing and saving of the managed object context?
    4. How do I go about giving Core Data my predetermined data, and in a format I know that it can work with?

    Thanks a bunch guys.

  • Free solution for automatic updates with .NET/C#?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's a backstory. I'm an individual hobbyist developer with zero budget. A program I've been developing has been in need of constant bugfixes, and me and my users are getting tired of having to manually update. Me, because my current solution of:

    1. Manually FTP to my website
    2. Update a file "newest.txt" with the newest version
    3. Update index.html with a link to the newest version
    4. Hope for people to see the "there's an update" message
    5. Have them manually download the update

    sucks, and whenever I screw up an update, I get pitchforks. Users, because, well, "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?" Over the past few hours I have looked into:

    - http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation
    - http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use
    - http://wyday.com/wyupdate/ - wyBuild isn't free, and the file specification is simply too complex. Maybe if I was under a company paying me I could spend the time, but then I may as well pay for wyBuild.
    - http://www.kineticjump.com/update/default.aspx - Ditto.
    - ClickOnce - Workarounds for implementing launching on startup are massive, horrendous and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.

    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle...

  • WP7 - correctly timing out an observable?

    - by Gaz83
    WP7 app. Using an observable created from an event, I download the latest weather from a web service. I tested this out on the phone and emulator at home and it works fine. I brought the project with me to work and ran it using the emulator there. Now I'm not sure if it's a firewall or what, but it doesn't seem to get the weather; it just sits there forever, trying. So it got me thinking: if this were ever to happen on a phone, I need some kind of timeout, so that if it can't get the weather in say 10-15 seconds it just gives up. Here is the example code so far:

        IObservable<IEvent<MyWeather.GetWeatherCompletedEventArgs>> observable =
            Observable.FromEvent<MyWeather.GetWeatherCompletedEventArgs>(
                Global.WeatherService, "MyWeather.GetWeatherCompleted").Take(1);

        observable.Subscribe(w =>
        {
            if (w.EventArgs.Error == null)
            {
                // Do something with the weather
            }
        });

        Global.WeatherService.GetWeatherAsync(location);

    How can I get this to time out safely after a given time if nothing is happening?

  • Creating a C++ client app for an abstract Windows server - how to manage TCP transfer speed between client and server?

    - by Kabumbus
    So we have a server with some address, port and IP, and we are developing that server, so we can implement on it whatever we need to help. What are standard/best practices for data transfer speed management between a C++ Windows client app and the server (C++)? My main point is how to find out how much data can be uploaded/downloaded from/to the client over his low-speed network to my relatively super fast server. (I need it to set up the bit rate of his live audio/video stream.)

    My try at explaining point 3: We do not care how fast our server is. It is always faster than needed. We care about the client trying to stream his media out to our server. He streams encoded (via ffmpeg) live video data to our server. But he has, say, ADSL with 500 kb/s of outgoing traffic. Also he uses some ICQ or whatever, so he has less than 500 kb/s actually available. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can provide. We develop the server side and the client side. We need a way of finding out how much the user can upload per second currently (so the value can change dynamically over time).
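
    A common approach is to measure the achievable upstream rate empirically, then feed that number into the encoder settings and re-measure periodically. A rough Python sketch of such a client-side probe; the host, port, probe size, and the one-byte server acknowledgement are all assumptions, and the server side just has to read the probe and reply:

        import socket
        import time

        def measure_upload_kbps(host, port, probe_bytes=256 * 1024):
            # Send probe_bytes of throwaway data and time how long delivery
            # takes end to end; the server reads everything, then sends one
            # byte back so we time delivery rather than local buffering.
            payload = b'\x00' * probe_bytes
            with socket.create_connection((host, port)) as sock:
                start = time.monotonic()
                sock.sendall(payload)
                sock.shutdown(socket.SHUT_WR)
                sock.recv(1)  # wait for the server's ack
                elapsed = time.monotonic() - start
            return probe_bytes * 8 / 1000 / elapsed  # kilobits per second

        # e.g. re-run every few minutes and adjust the ffmpeg bit rate
        # print(measure_upload_kbps('stream.example.com', 9000))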

  • PHP class_exists always returns true

    - by Ali
    I have a PHP class that needs some pre-defined globals before the file is included.

    File: includes/Product.inc.php

        if (class_exists('Product')) {
            return;
        }

        // This class requires some predefined globals
        if ( !isset($gLogger) || !isset($db) || !isset($glob) ) {
            return;
        }

        class Product {
            ...
        }

    The above is included via require_once in other PHP files that need to use Product. Anyone who wants to use Product must, however, ensure those globals are available; at least that's the idea. I recently debugged an issue in a function within the Product class which was caused because $gLogger was null. The code requiring the above Product.inc.php had not bothered to create the $gLogger. So the question is: how was this class ever included if $gLogger was null? I tried to debug the code (xdebug in NetBeans), put a breakpoint at the start of Product.inc.php to find out, and every time it came to the if (class_exists('Product')) clause it would simply step in and return, thus never getting to the global checks. So how was it ever included the first time? This is PHP 5.1+ running under MAMP (Apache/MySQL). I don't have any autoloaders defined.

  • Casting a pointer to an object to void* in C++

    - by JB
    I've been reading StackOverflow too much and started doubting all the code I've ever written; I keep thinking "Is that undefined behaviour?" even in code that has been working for ages. So my question: is it safe and well-defined behaviour to cast a pointer to an object (in this case, abstract interface classes) to a void*, and then later cast it back to the original class and call methods through it?

    I'm fully aware that the code that does this is probably awful. I wouldn't even consider writing it like this now (this is old code which I don't really want to change), so I'm not looking for a discussion of better ways to do this. I already know how to write it better if I ever did this again. But if it's actually broken to rely on this in C++, then I'll have to look at changing the code; if it's merely awful code, then changing it won't be a priority.

    I would have had no doubts about something this simple a year or two ago, but as my understanding of C++ increases I actually find I have more and more worries about code being safe under the standards, even if it works perfectly well. Perhaps reading too much Stack Overflow is a bad thing for productivity sometimes :P

  • User Defined Conversions in C++

    - by wash
    Recently, I was browsing through my copy of the C++ Pocket Reference from O'Reilly Media, and I was surprised when I came across a brief section and example regarding user-defined conversions for user-defined types:

        #include <iostream>

        class account {
        private:
            double balance;
        public:
            account (double b) { balance = b; }
            operator double (void) { return balance; }
        };

        int main (void)
        {
            account acc(100.0);
            double balance = acc;
            std::cout << balance << std::endl;
            return 0;
        }

    I've been programming in C++ for a while, and this is the first time I've ever seen this sort of operator overloading. The book's description of this subject is somewhat brief, leaving me with a few unanswered questions about this feature:

    1. Is this a particularly obscure feature? As I said, I've been programming in C++ for a while and this is the first time I've ever come across it. I haven't had much luck finding more in-depth material about it.
    2. Is this relatively portable? (I'm compiling on GCC 4.1.)
    3. Can user-defined conversions to user-defined types be done? e.g. operator std::string () { /* code */ }

  • Add Source file link to the default ASP.NET Server Error page?

    - by Max Schilling
    Has anyone ever thought to attempt to modify the default ASP.NET server error page to provide a link BACK to the error source in Visual Studio? Consider the following standard error page in ASP.NET:

        Server Error in '/myproject' Application.

        Invalid object name 'usp_DoSomething'.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'usp_DoSomething'.

        Source Error:

        Line 4323:     cmd.CommandText = "usp_DoSomething";
        Line 4324:
        Line 4325:     using (var dr = cmd.ExecuteReader())
        Line 4326:     {
        Line 4327:         if (dr != null)

        Source File: c:\development\myproject\myproject.components\providers\sql\sqldataprovider.cs    Line: 4325

    When an error like this is generated, the HTML contains the path of the source file the error occurs in and the line number. Has anyone ever written, or thought of writing, some mechanism to turn that text into a link back to the error in Visual Studio? I've never seen anything that does it, but it just seems like it would be a helluva nice feature, and I think about it in the back of my mind every time an error occurs and I have to manually go find it in the source. It would just be nice to be able to click a link that takes me straight there. Has anyone written anything like this, or does anyone know of any solutions for it? I use Chrome or Firefox as my browsers of choice, but I'd even consider using IE again if someone found a plugin that did this. Thanks, Max

  • Is the below thread pool implementation correct? (C# 3.0)

    - by Newbie
    Hi Experts, For the first time ever I have implemented thread pooling and I found it to be working. But I am not very sure whether the way I have done it is the appropriate way it is supposed to be done. Would you people mind spending some valuable time to check and let me know if my approach is correct or not? If you find that the approach is incorrect, could you please help me out in writing the correct version? I have basically read How to use thread pool, and based on whatever I have understood I have developed the below program as per my need:

        public class Calculation
        {
            #region Private variable declaration
            ManualResetEvent[] factorManualResetEvent = null;
            #endregion

            public void Compute()
            {
                factorManualResetEvent = new ManualResetEvent[2];
                for (int i = 0; i < 2; i++)
                {
                    factorManualResetEvent[i] = new ManualResetEvent(false);
                    ThreadPool.QueueUserWorkItem(ThreadPoolCallback, i);
                }

                // Wait for all the threads to complete
                WaitHandle.WaitAll(factorManualResetEvent);

                // Proceed with the next task(s)
                NEXT_TASK_TO_BE_EXECUTED();
            }

            #region Private Methods
            // Wrapper method for use with thread pool.
            public void ThreadPoolCallback(Object threadContext)
            {
                int threadIndex = (int)threadContext;
                Method1();
                Method2();
                factorManualResetEvent[threadIndex].Set();
            }

            private void Method1() { /* Code of method 1 */ }
            private void Method2() { /* Code of method 2 */ }
            #endregion
        }
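
    For comparison, the same fan-out-then-wait pattern in Python with concurrent.futures; a minimal sketch, where method1() and method2() are empty stand-ins for the real work:

        from concurrent.futures import ThreadPoolExecutor, wait

        def method1(): pass  # stand-in for the real work
        def method2(): pass

        def thread_pool_callback(index):
            method1()
            method2()
            return index

        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(thread_pool_callback, i) for i in range(2)]
            wait(futures)  # like WaitHandle.WaitAll: block until both tasks finish

        # proceed with the next task(s), as in the C# Compute() method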

  • Getting "client denied" when accessing a graphite WSGI script

    - by Dr BDO Adams
    I'm trying to set up graphite on my Mac OS X 10.7 Lion box. I've set up Apache to call the Python graphite script via WSGI, but when I try to access it, I get a Forbidden from Apache, and in the error log:

        client denied by server configuration: /opt/graphite/webapp/graphite.wsgi

    I've checked that the script's location is allowed in httpd.conf, and the permissions of the file, but they seem correct. What do I have to do to get access? Below is the httpd.conf, which is nearly the graphite example:

        <IfModule !wsgi_module.c>
            LoadModule wsgi_module modules/mod_wsgi.so
        </IfModule>

        WSGISocketPrefix /usr/local/apache/run/wigs

        <VirtualHost _default_:*>
            ServerName graphite
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            # XXX You will need to create this file! There is a graphite.wsgi.example
            # file in this directory that you can safely use, just copy it to graphite.wgsi
            WSGIScriptAlias / /opt/graphite/webapp/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            Alias /media/ "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache.
            <Directory "/opt/graphite/webapp/">
                Options +ExecCGI
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    Can you help?
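
    One way to narrow this down is to temporarily point WSGIScriptAlias at a trivial WSGI script: if that is also denied, the problem is in the Apache configuration rather than in graphite itself. A minimal sketch (the test.wsgi filename and location are assumptions, not part of graphite):

        # hypothetical /opt/graphite/webapp/test.wsgi
        def application(environ, start_response):
            # Plain-text "it works" response; nothing graphite-specific.
            body = b"mod_wsgi is serving this path\n"
            start_response('200 OK', [('Content-Type', 'text/plain'),
                                      ('Content-Length', str(len(body)))])
            return [body]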

  • How to authenticate memcached connections with SASL and PAM?

    - by user199216
    I use memcached on an untrusted network, so I am trying to use SASL and PAM to authenticate connections to memcached. I installed the SASL and PAM modules, and compiled and installed memcached with SASL enabled. I also created a db and table for the PAM user. I run:

        $ sudo testsaslauthd -u tester -p abc123 -s /etc/pam.d/memcached
        0: OK "Success."

    where tester and abc123 are the authorized user and password in the db, which I inserted. But my Python script cannot be authenticated; an authentication failure is always returned. It seems it does not use PAM for authentication and still uses sasldb, because when I add the user by:

        $ sudo saslpasswd2 -a memcached -c tester

    and input the password abc123, it passes. Python script:

        client = bmemcached.Client(('localhost:11211'), 'tester', 'abc123')

    and error:

        bmemcached.exceptions.MemcachedException: Code: 32 Message: Auth failure.

    memcached log:

        authenticated() in cmd 0x21 is true
        mech: ``PLAIN'' with 14 bytes of data
        SASL (severity 2): Password verification failed
        sasl result code: -20
        Unknown sasl response: -20
        >30 Writing an error: Auth failure.
        >30 Writing bin response:

    No auth log found in /var/log/auth.log. Configurations:

    /etc/default/saslauthd:

        MECHANISMS="pam"

    /etc/pam.d/memcached:

        auth sufficient pam_mysql.so user=sasl passwd=abc123 host=localhost db=sasldb table=sasl_user usercolumn=user_name passwdcolumn=password crypt=0 sqllog=1 verbose=1
        account required pam_mysql.so user=sasl passwd=abc123 host=localhost db=sasldb table=sasl_user usercolumn=user_name passwdcolumn=password crypt=0 sqllog=1 verbose=1

    /etc/sasl2/memcached.conf:

        pwcheck_method: saslauthd

    Is my question clear? English is not my native language, sorry! Any tips will be appreciated!
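
    As an aside, assuming this is the python-binary-memcached client: in the snippet above, ('localhost:11211') without a trailing comma is a plain string, not a tuple. A sketch of an unambiguous call, passing an explicit sequence and keyword arguments:

        import bmemcached

        # servers is a sequence; the trailing comma makes it a real tuple
        client = bmemcached.Client(('localhost:11211',), username='tester',
                                   password='abc123')
        client.set('ping', 'pong')
        print(client.get('ping'))  # 'pong' once auth succeeds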

  • Fedora 13 - No module named yum

    - by drozzy
    This is driving me bananas! After a recent update in Fedora 13 64-bit, my yum is gone:

        $ yum update
        There was a problem importing one of the Python modules
        required to run yum. The error leading to this problem was:

           No module named yum

        Please install a package which provides this module, or
        verify that the module is installed correctly.

    I tried looking for a yum RPM package - to install yum. I went to the Fedora site: http://fedoraproject.org/wiki/Tools/yum Call me blind, but I cannot find it anywhere on that page! Most of the solutions suggest repairing yum... with yum! But I don't have yum? Yum yum yum? :< Any help? Here are the outputs of some rpm commands:

        $ rpm -ql python | grep "site-packages$"
        /usr/lib/python2.6/site-packages
        /usr/lib64/python2.6/site-packages

        $ rpm -ql yum | grep "site-packages/yum$"
        /usr/lib/python2.6/site-packages/yum
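
    A quick diagnostic sketch: run this with the same interpreter named on the first line of /usr/bin/yum, to see whether the yum module is importable and where that Python is actually looking (the script is just an illustration, not a fix):

        from __future__ import print_function  # works on Python 2.6 and 3
        import sys

        print(sys.executable)   # which interpreter is actually running
        try:
            import yum
            print('yum module found at', yum.__file__)
        except ImportError as exc:
            print('cannot import yum:', exc)
            print('module search path:', sys.path)

    If /usr/lib/python2.6/site-packages is missing from the printed path, or the interpreter isn't the system python2.6, that would explain the error despite rpm showing the files installed.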

  • Apache error: could not make child process 25105 exit, attempting to continue anyway

    - by Temnovit
    Hello! I have a web server based on Ubuntu Server 9.10 with this software:

    - Apache 2
    - PHP 5.3
    - MySQL 5
    - Python 2.5

    A few of my websites are PHP based; a few use Python/Django through mod_wsgi. For a month or so, every day my Apache server stops responding until I manually restart it. The error logs show:

        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25059 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25061 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 24930 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25084 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25105 exit, attempting to continue anyway

    and so on. I tried to google this problem, but it seems that I can't find a solution there. How can I determine the cause of this error, and how do I fix it? Thank you for your help.

    UPDATE: Updating mod_wsgi to version 3.1 didn't solve the problem. Updating PHP to 5.3 also didn't solve it. Here is a list of all installed modules:

        core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status mod_wsgi

    Here's how my virtual host with WSGI looks:

        <VirtualHost *:80>
            ServerName example.net
            DocumentRoot /var/www/example.net

            # wsgi script that serves all the thing
            WSGIScriptAlias / /var/www/example.net/index.wsgi
            WSGIDaemonProcess example user=wsgideamonuser group=root processes=1 threads=10
            WSGIProcessGroup example

            Alias /static /var/www/example.net/static

            # serving admin files
            Alias /media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media/

            <Location "/static">
                SetHandler None
            </Location>
            <Location "/media">
                SetHandler None
            </Location>

            ErrorLog /var/www/example.net/error.log
        </VirtualHost>

    The error log now contains two types of errors, one following the other:

        [error] child process 9486 still did not exit, sending a SIGKILL
        [error] could not make child process 9106 exit, attempting to continue anyway

  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set CPU affinity for running processes. I'm recreating the built-in "shield" function manually with set and proc, to add some subsets for specific threads of my application. I have a bash script that calls cset to create the sets and move the correct threads to the correct sets. It works when run with sudo.

    Now I'd like to make this script executable by another user, who does not have sudo powers. I trust this user enough to be responsible with cset, but don't want to open up the wide powers of root. I thought that CAP_SYS_NICE -- which is needed for sched_setaffinity, which I just assume cset must use -- on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin Python wrapper for the cset Python library). No dice. The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?
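
    For reference, Python 3.3+ on Linux exposes the affinity call directly, which gives a quick way to check whether CAP_SYS_NICE is actually in effect for a given user independently of cset; a small sketch (the pid is hypothetical, and setting affinity on another user's process is what requires the capability):

        import os

        pid = 12345                        # hypothetical target process id
        os.sched_setaffinity(pid, {0, 1})  # pin it to CPUs 0 and 1
        print(os.sched_getaffinity(pid))   # verify the new mask

    If this raises PermissionError for the unprivileged user, the capability isn't reaching the syscall, regardless of what cap_to_text reports on the script file.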
