Search Results

Search found 686 results on 28 pages for 'hostile fork'.

Page 21 of 28

  • Softpedia published some of my open source projects — how to react?

    - by polarblau
    (FYI: I've just moved this question over from Stack Overflow on recommendation.) I just received a few emails informing me that softpedia.com has added some of my "products" to their "database of scripts, code snippets and web applications". My products are in this case some smaller open source projects which I have hosted and published on GitHub. Now I'm wondering how to react to this. This site is indirectly making money off my free work through ads on three pages before the actual download. They also seem to "invent" version numbers, and I can't find out whether they're hosting the latest or all versions of my projects. I can see how this could lead to problems in the future, since I don't control what's "the latest" everywhere. On the other hand, I don't mind some extra publicity. I want as many people as possible to know about the projects, use them, fork them and hopefully improve them. The projects in question are really fairly small, but this might not be the case in the future for me and/or other people reading this question. I'm sure that this must have happened to others around here. What's your opinion? Should I try to get the downloads removed?

    Update 1: I've requested the removal and mentioned that I don't feel that Softpedia can provide the right environment for this kind of project. Their team got back to me instantly with a friendly email saying that they'll remove the links for now:

        If you are worried that your projects won't be updated, then I must tell you that I have them bookmarked in my RSS reader, so any version changes will be forwarded to me when needed. So I promise I'll keep your script up to date as soon as I see an update in the repository.

    I have to say that I appreciate this kind of reaction quite a lot, and so I sent them another email describing in more detail what I'm worried about and what bothers me. I also stated that I'm aware that my license clearly permits them to host the projects in any case, but that I'd even be happy for them to host the projects, as long as they could convince me of a few details and maybe make some small changes to the way the projects are represented. Let's see where this goes.

    Update 2: After discussing with their contact and requesting some changes regarding the display of version numbers and authorship (they had offered the possibility to do so), they put the projects back up on their site. All in all a positive and definitely interesting experience.

  • Can I safely delete the Ubuntu 12.04 partition and use the unallocated space for my Elementary OS?

    - by d4ryl3
    I have this setup: I've decided to switch to elementary OS Luna (a fork of Ubuntu 12.04) as my main Linux distro. Now I need to delete my Ubuntu partition so I can add its capacity to my eOS install, which only has 10 GB. Currently my eOS is on /dev/sda9 and Ubuntu on /dev/sda8. I forgot where my bootloader is installed, so I ran bootinfoscript, which returned this:

        ============================= Boot Info Summary: ===============================

        => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
           the same hard drive for core.img. core.img is at this location and looks
           in partition 94 for .

        sda1: File system: ntfs
              Boot sector type: Windows Vista/7: NTFS
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files: /bootmgr /Boot/BCD

        sda2: File system: ntfs
              Boot sector type: Windows Vista/7: NTFS
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System: Windows 7
              Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe

        sda3: File system: ntfs
              Boot sector type: Windows Vista/7: NTFS
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files: /bootmgr /boot/bcd

        sda4: File system: Extended Partition
              Boot sector type: -
              Boot sector info:

        sda5: File system: ntfs
              Boot sector type: Windows Vista/7: NTFS
              Boot sector info: According to the info in the boot sector, sda5
              starts at sector 2048.
              Operating System:
              Boot files:

        sda6: File system: swap
              Boot sector type: -
              Boot sector info:

        sda7: File system: ext4
              Boot sector type: Grub2 (v1.99)
              Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda7
              and looks at sector 851823520 of the same hard drive for core.img, but
              core.img can not be found at this location.
              Operating System:
              Boot files: /grub/grub.cfg /extlinux/extlinux.conf

        sda8: File system: ext4
              Boot sector type: Grub2 (v1.99)
              Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda8
              and looks at sector 860224256 of the same hard drive for core.img.
              core.img is at this location and looks for (,msdos9)/boot/grub on
              this drive.
              Operating System: Ubuntu 13.04
              Boot files: /etc/fstab

        sda9: File system: ext4
              Boot sector type: -
              Boot sector info:
              Operating System: elementary OS Luna
              Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

    I need advice as to how to proceed. I mean, could I simply delete /dev/sda7 and /dev/sda8? Please help, thank you.
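    A possible sequence, sketched from the bootinfoscript output above (the device numbers are assumptions taken from that output; verify with sudo blkid and lsblk before deleting anything, and note that deleting logical partitions renumbers the ones after them, so sda9 may become sda7 afterwards; fstab and grub.cfg normally reference UUIDs, so that is usually harmless):

        # 1. From the running elementary OS install, point the MBR at eOS's GRUB:
        sudo grub-install /dev/sda
        sudo update-grub              # regenerates grub.cfg; Ubuntu entries drop out later

        # 2. Reboot once to confirm eOS still boots on its own.

        # 3. From a live USB (the filesystems must be unmounted to delete/resize):
        sudo parted /dev/sda rm 8     # old Ubuntu root
        sudo parted /dev/sda rm 7     # orphaned ext4 partition
        # then grow the eOS partition into the freed space with GParted.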

  • Windows Azure Training Kit October 2012 Release

    - by Clint Edmonson
    The Windows Azure Technical Evangelism team has been busy lately, and we want to share what they've been working on. As you know, we release the Windows Azure Training Kit on a regular cadence, so I'm pleased to announce the Windows Azure Training Kit October 2012 Release. This update of the training kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services, including hands-on labs updated to use the latest version of Visual Studio 2012 and Windows 8, plus new demos and presentations.

    Essential Links: Windows Azure Training Kit Download, Windows Azure Training Kit GitHub [Issues]

    Updated Presentations With Speaker Notes: Your voices were heard loud and clear! I am excited to announce that speaker notes have been added to the majority of the content we have available. Find the new updated decks which contain speaker notes below: Foundation SQL Federation Virtual Machine Overview Virtual Networks Windows 8 and Windows Azure Web Sites Windows Azure Cloud Services Windows Azure Overview Windows Azure Service Bus Deploying Active Directory Building Apps With IaaS and PaaS Identity and Access Control Linux Virtual Machines Managing Virtual Machines PowerShell Migrating Apps and Workloads Scalable Global and Highly Available Apps Security and Identity SQL Database SQL Database Migration Cloud Service Life Cycle DevCamps Cloud Services iOS, Android and Windows Azure Windows 8 and Windows Azure Web Sites Windows 8 and Windows Azure Mobile Services

    Added Localized Content: Due to the excitement in the community surrounding the Mobile Services launch, it was apparent that we needed to make localized content available to continue to deliver the exciting message around Windows Azure Mobile Services. Localized content is available in the following languages: French, Japanese, German, Chinese (Taiwan), Spanish, Italian, Korean, Portuguese (Brazilian) and Russian.

    Updated Hands-On Labs: To support those who have upgraded to Visual Studio 2012, or those trying out the Visual Studio 2012 Express editions, we have made sure that the content is available and supported (selected labs only) in Visual Studio 2012 Express and up: Visual Studio 2012 Windows Azure Traffic Manager Introduction to Cloud Services Service Bus Messaging Introduction to Access Control Service. This adds a significant amount of additional content, so we have revamped the Hands-On Lab Navigation page to include subsections for Visual Studio 2012 Labs, Visual Studio 2010 Labs, Open Source Labs, Scenario Labs and All Labs.

    Added Demos: Demos are available for a number of presentations in the Foundation, DevCamp, ITPro Event and Device + Service DevCamps. You can browse through the demos on the respective Demo Navigation page or on GitHub (links provided in the demo listing): HelloASP Connecting Cloud Services Service Bus Relay Windows 8 and Mobile Services URL Shortener iOS Client Migrating a Web Farm Deploying Active Directory URL Shortener Service (PHP) Geo-Location Service (PHP) Geo-Location Android Client Getting Started with VMs Load Balancing Availability Deploying Hybrid Apps Migrate VM AppController Geo-Location iOS Client Scale Up/Down Using CSUpload URL Shortener Android Client Imaging Virtual Machines

    The Windows Azure Training Kit is open source and available on GitHub, enabling you in the community to report issues, or fork and either extend the solution or commit bug fixes back to the Training Kit. You can find more details about the training kit on our GitHub page, including guidelines on how to commit back to the project. Stay tuned to my Twitter feed for Windows Azure and other Microsoft announcements, updates, and links: @clinted

  • What to do when an open source project starts to tear itself apart? (or: a manager tries to write code and then shouts at the team)

    - by Kabumbus
    Imagine there is an open source cross-platform project on Google Code. It has lots of revisions (1000). It concentrates a lot of technology in itself, rare stuff, a mix of top tech. It contains a server and more than one client. The project was created by a well-connected team of developers (friends) and a manager who sponsored the project at its start, during its first few months (the project is now more than a year old; sponsoring an OSS project is a generous thing to do, and he also gave the developers the idea for the project). The project was growing in complexity and in the effort required to continue development.

    Once upon a time, the manager/team leader started trying to write code (he had been a programmer on some other projects; not the best, but he felt like he was one). He started because one of the developers suggested an idea at a team meeting, and he felt he just needed to implement it on his own. He failed, and he told the dev team about it. The dev team did what he had failed to do in a few days. After that, the manager felt that the team codes perfectly well without him and gets the job done in a short time. He felt sorry and lost, and he started to crash like an old bad PC. First he started to scream (in the form of messages, not voice): he tried to tell the developers that what they were doing was a bad, unneeded thing, and the developers kindly told him that his "beginnings" wouldn't even compile, while the dev team's product worked as needed. He told the developers that all work they do should first be discussed with him. Here is the part where we need to mention that all team members are "project owners" and logically have equal rights. The team leader gave the developers these options: change their dev process to go through him, or be moved from project owners to contributors.

    So what are our options as developers? What arguments can we provide to the team leader/manager to calm him down? Is it possible to save the project, or is it better to fork now? An important issue is that lately we have had no active ticket system, and I personally think that this is why the mess appeared. So... any ideas?

  • C Minishell Command Expansion Printing Gibberish

    - by Optimus_Pwn
    I'm writing a Unix minishell in C and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example:

        $> echo hello $(echo world! ... $(echo and stuff))
        hello world! ... and stuff

    I think I mostly have it working; however, it isn't marking the end of the expanded string correctly. For example, if I do:

        $> echo a $(echo b $(echo c))
        a b c
        $> echo d $(echo e)
        d e c

    See, it prints the c even though I didn't ask it to. Here is my code: msh.c - http://pastebin.com/sd6DZYwB and expand.c - http://pastebin.com/uLqvFGPw. I have more code, but there's a lot of it, and these are the parts I'm having trouble with at the moment.

    I'll try to describe the basic way I'm doing this. main is in msh.c; it gets a line of input from either the command line or a shell file, then calls processline(char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main we do it like this: processline(buffer, 1, 1). In processline we allocate a new line, char expanded_line[EXPANDEDLEN], and then call expand, in expand.c: expand(line, expanded_line, EXPANDEDLEN). In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind). orig is line, and new is expanded_line; oldl_ind and newl_ind are the current positions in the line and expanded line, respectively. Then we pipe, and recursively call processline, passing it the nested command (for example, if we had "echo a $(echo b)", we would pass processline "echo b").

    This is where I get confused: each time expand is called, is it allocating a new chunk of memory EXPANDEDLEN long? If so, this is bad, because I'll run out of stack room really quickly (in the case of hugely nested command-line input). In expand I insert a null character at the end of the expanded string, so why is it printing past it?

    If you need any more code or explanations, just ask. (I put the code in Pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code.) Thanks.

  • Link instead of Attaching

    - by Daniel Moth
    With email storage no longer an issue in many companies (I currently have 25GB of storage on my email account; I don't even think about storage), bad behaviors are encouraged, such as liberally attaching Office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:

    - Better review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.

    - Always up to date. The attachment becomes a fork instead of an always up-to-date document. For example, you send the email on Thursday and I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to share an attachment instead of a link.

    - Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link, which I have bookmarked in my browser, or in my collection of links in OneNote, or from the recent/pinned links of the Office app on my task bar, etc.

    - Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you'd prefer not to have access to it, the location of the document can be protected with specific access control.

    - Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached; instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further action.

    - Enables discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.

    - Saves on storage. This doesn't apply to me, given my opening statement, but if your company does have such limitations, attaching files eats up storage in all recipients' accounts and will also get "lost" when those people archive email (and lost completely at some point if they follow the company retention policy).

    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth are welcome at the original blog.

  • Summary of our Recent Pull Request Enhancements on CodePlex

    Over the past several weeks, we've been incrementally rolling out a bunch of enhancements to our pull request workflow for Git and Mercurial projects. Our goal is to make contributing to open source projects a simple and rewarding experience, and we'll continue to invest in this area. Here's a summary of the changes so far, in case you've missed them. As always, if you have any feedback, please let us know, whether on our ideas page or via Twitter.

    Support for branches: You can now pick the source and destination branches for your pull request, whether you're sending one from your fork or using it within a project to collaborate with your other trusted contributors.

    A redesigned creation experience: Our old pull request creation form was rather lacking. It asked for a title and comment in a small modal dialog, but that was about it. We knew we could do better, so we rethought the experience. Now, when you create a pull request, you're taken to a new page that lets you select the source and destination, and gives you information on the diffs and commits that you're sending, so you can confirm that you're sending the right set of changes.

    Inline code snippets in discussion: If users comment on code in your pull request, we now display a preview of the relevant code snippet inline with their comment on the discussion. Subsequent replies on that line are combined in a single thread to preserve your context. No more clicking and hunting to find where the comments are. And you can add another inline comment right from the discussion area.

    Comment notifications: You can now elect to receive an e-mail notification when a user comments on your pull request. If the comment is on a line of code, we'll display the relevant code snippet in the e-mail.

    Redesigned diff viewer: Our old diff viewer hadn't been touched in a while and was in need of an update. We started with a visual facelift to use standard red/green colors for additions/deletions and to remove the noisy "dots" that represented spaces and littered the diff viewer. Based on feedback that the viewable region for diffs was too small, especially at smaller screen resolutions, we revamped the way the viewport for the code is sized, and now expand it to fill the majority of the browser height when scrolling down. The set of improvements we implemented here also applies anywhere diffs are viewed, not just in pull requests.

  • SSH error 114 when connecting with FinalBuilder 7

    - by mamcx
    I'm testing FinalBuilder 7 and trying to connect to my Mac OS X Snow Leopard machine. I can connect with paramiko (a Python SSH library) but not with FB7. The only thing I get is:

        SSH error encountered: 114

    I tried stopping and restarting the sharing session on Mac OS X.

    Update: I enabled server-side debugging and got this log:

        debug1: sshd version OpenSSH_5.2p1
        debug1: read PEM private key done: type RSA
        debug1: private host key: #0 type 1 RSA
        debug1: read PEM private key done: type DSA
        debug1: private host key: #1 type 2 DSA
        debug1: rexec_argv[0]='/usr/sbin/sshd'
        debug1: rexec_argv[1]='-Dd'
        debug1: Bind to port 22 on ::.
        Server listening on :: port 22.
        debug1: Bind to port 22 on 0.0.0.0.
        Server listening on 0.0.0.0 port 22.
        debug1: fd 5 clearing O_NONBLOCK
        debug1: Server will not fork when running in debugging mode.
        debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
        debug1: inetd sockets after dupping: 3, 3
        Connection from 10.3.7.135 port 49457
        debug1: Client protocol version 2.0; client software version SecureBlackbox.8
        debug1: no match: SecureBlackbox.8
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: privsep_preauth: successfully loaded Seatbelt profile for unprivileged child
        debug1: permanently_set_uid: 75/75
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: expecting SSH2_MSG_KEXDH_INIT
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user mamcx service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: PAM: initializing for "mamcx"
        Connection closed by 10.3.7.135
        debug1: do_cleanup
        debug1: PAM: setting PAM_RHOST to "10.3.7.135"
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: audit_event: unhandled event 12
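    One thing worth trying, though this is a guess based on where the log stops (the client disconnects right after the "method none" probe, before attempting any real authentication method): SecureBlackbox-based clients often default to plain password authentication, which OS X's sshd leaves disabled in favor of keyboard-interactive via PAM. Enabling both costs little:

        # /etc/sshd_config on the Mac
        PasswordAuthentication yes
        ChallengeResponseAuthentication yes

        # then restart Remote Login, e.g.:
        sudo launchctl unload /System/Library/LaunchDaemons/ssh.plist
        sudo launchctl load /System/Library/LaunchDaemons/ssh.plist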

  • Do we need to explicitly pass php.ini's location to php-fpm?

    - by F21
    I am seeing a strange issue where my php.ini is not used unless I explicitly pass it to php-fpm when starting it. This is the upstart script I am using:

        start on (filesystem and net-device-up IFACE=lo)
        stop on runlevel [016]

        pre-start script
            mkdir -p /run/php
        end script

        expect fork
        respawn

        exec /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf

    If PHP is started with the above, my php.ini is never used, even though its directory appears in "Configuration File (php.ini) Path". This is the relevant part from phpinfo():

        Configuration File (php.ini) Path          /etc/php/
        Loaded Configuration File                  (none)
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    If I modify the last line of the upstart script to point php-fpm to php.ini explicitly:

        exec /usr/local/php/sbin/php-fpm --fpm-config /etc/php/php-fpm.conf -c /etc/php/php.ini

    then we see that the php.ini is loaded:

        Configuration File (php.ini) Path          /etc/php/
        Loaded Configuration File                  /etc/php/php.ini
        Scan this dir for additional .ini files    (none)
        Additional .ini files parsed               (none)

    Why is this the case? Is this a quirk in php-fpm?

    Minor update: This also seems to be a problem for php5-fpm installed using apt-get. I did a test install in an Ubuntu Server 12.04 virtual machine by running the following:

        sudo apt-get install nginx php5-fpm

    PHP-FPM and nginx were started after installation and everything seemed fine. I then uncommented PHP's settings in nginx's configuration and placed a test phpinfo() file to inspect PHP's settings. The relevant bits are:

        Configuration File (php.ini) Path          /etc/php5/fpm
        Loaded Configuration File                  (none)
        Scan this dir for additional .ini files    /etc/php5/fpm/conf.d
        Additional .ini files parsed               /etc/php5/fpm/conf.d/10-pdo.ini

    I noted that no php.ini was loaded either. However, if I go to /etc/php5/fpm, I can see that a php.ini exists. I also checked the startup scripts for PHP-FPM, and the -c parameter was not used to point PHP at the ini file. This can potentially be confusing for people who would expect php.ini to be loaded automatically by PHP-FPM.
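    A quick way to see where this particular php-fpm build actually looks for php.ini (a diagnostic sketch; the binary path is taken from the question, and -i prints the same information phpinfo() does):

        /usr/local/php/sbin/php-fpm -i | grep -i 'php.ini'

    If the compiled-in path (set at build time via --with-config-file-path) doesn't match where the php.ini really lives, the options are to move the file there, rebuild, or simply keep passing -c explicitly as above; the -c workaround is harmless and makes the dependency visible in the service definition.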

  • Ant task to pre-compile JSPs on weblogic server

    - by user24560
    I am trying to create an ant task to compile JSPs. Here are the excerpts from the build.xml related to the task:

        ....
        <fileset dir="${java.home}/lib">
            <include name="tools.jar"/>
        </fileset>

        <java classname="weblogic.jspc" fork="yes">
            <classpath refid="weblogic.jsp.classpath" />
            <sysproperty key="weblogic.jsp.windows.caseSensitive" value="false"/>
            <arg line="-forceGeneration -keepgenerated -compileAll -webapp ${jsp.src.dir} -d ${jsp.generated.src.dir}"/>
        </java>

    When I try to run the wl.jsp.generate task, I get:

        wl.jsp.generate:
            [java] [jspc] warning: expected file /WEB-INF/web.xml not found, tag libraries cannot be resolved.
            [java] [jspc] Overriding default descriptor option 'keepgenerated' with value specified on command-line 'true'
            [java] Exception encountered while compiling C:\workspace\smcmw\smcmw_browser\jsp\smcesearchprogress.jsp
            [java] java.lang.NoSuchMethodError: javax.servlet.jsp.tagext.TagAttributeInfo.<init>(Ljava/lang/String;ZLjava/lang/String;ZZLjava/lang/String;ZZLjava/lang/String;Ljava/lang/String;)V
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:64)
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:57)
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:41)
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.read(TagAttrInfoEx.java:86)

    It looks like it fails because it can't find the WEB-INF/web.xml file and the tag libraries. How can I fix this?
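    The NoSuchMethodError, as opposed to the web.xml warning, usually points at an older javax.servlet.jsp API jar shadowing WebLogic's own copy on weblogic.jsp.classpath (my reading of the stack trace, not a confirmed diagnosis; that long TagAttributeInfo constructor only exists in JSP 2.1 and later). A shell sketch to find which jars bundle the class, with placeholder paths to adapt:

        for j in "$WL_HOME"/server/lib/*.jar ./lib/*.jar; do
            if unzip -l "$j" 2>/dev/null | grep -q 'javax/servlet/jsp/tagext/TagAttributeInfo.class'; then
                echo "$j"
            fi
        done
        # Remove (or reorder after WebLogic's jars) any pre-JSP-2.1 hit, and give
        # jspc a minimal WEB-INF/web.xml to silence the taglib warning.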

  • upstart config to start sync daemon as non-root user

    - by Rudiger Wolf
    I am planning to use inosync to sync data from a master server to several client servers. I have created a user called rsyncuser on both the master and the slaves, with access permissions and passwordless ssh access from the master to the slave servers. Inosync works when I use it from the command line as rsyncuser. Next, I want this to start automatically when the server is turned on; I figured upstart is the way to get this working, but I am unable to find the right upstart command. Here is my upstart conf file. The problem seems to be around running "inosync -d -c /etc/inosync/inosync_rsyncuser.py" as a given user. As you can see, I have tried a number of variations!

        description "start inosync to sync data to other CDN Servers as rsyncuser"
        console output

        #start on startup
        #stop on shutdown
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        #start on runlevel [2345]
        #stop on runlevel [!2345]
        #kill timeout 30

        env RUN_AS_USER=rsyncuser
        expect fork

        script
            echo "Inosync upstart job seems to have started" >> /tmp/upstart.log
            # exec sudo -u rsyncuser -c "ls -la" >> /tmp/upstart.log 2>&1
            # LOGFILE=/var/log/logfile.`date +%Y-%m-%d`.log
            # exec su - $RUN_AS_USER -c "inosync -d -c /etc/inosync/inosync_rsyncuser.py" >> $LOGFILE 2>&1
            # exec su -c "ls -la" >> /tmp/upstart.log 2>&1
            # emit inosync_running
        end script
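    A pattern that may help, sketched under two assumptions: that inosync stays in the foreground when -d is omitted, and that the config path from the question is correct. First prove the command works outside upstart, then let upstart keep the process in the foreground so no 'expect' stanza is needed at all (the su plus daemonize combination forks more than once, which is exactly what tends to confuse upstart's process tracking):

        # 1. From a root shell, test the job as the target user:
        su -s /bin/sh -c 'inosync -c /etc/inosync/inosync_rsyncuser.py' rsyncuser

        # 2. If that behaves, reduce the job definition to a single exec
        #    (no 'expect fork', no '-d'):
        #      exec su -s /bin/sh -c 'exec inosync -c /etc/inosync/inosync_rsyncuser.py' rsyncuser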

  • Perl wrapper to start daemon leaves zombie when run by cron

    - by leonstr
    I've got a Perl script to start a process as a daemon, but when I call it from cron I'm left with a defunct process. I've stripped this down to a minimal script, starting 'tail' as a placeholder for the daemon:

        use POSIX "setsid";

        $SIG{CHLD} = 'IGNORE';
        my $pid = fork();
        exit(0) if ($pid > 0);
        (setsid() != -1) || die "Can't start a new session: $!";
        open (STDIN, '/dev/null') or die ("Cannot read /dev/null: $!\n");
        my $logout = "logger -t test";
        open (STDOUT, "|$logout") or die ("Cannot pipe stdout to $logout: $!\n");
        open (STDERR, "|$logout") or die ("Cannot pipe stderr to $logout: $!\n");
        my $cmd = "tail -f";
        exec($cmd);
        exit(1);

    I run this with cron and end up with:

        root 18616 18615 0 11:40 ? 00:00:00 [test.pl] <defunct>
        root 18617     1 0 11:40 ? 00:00:00 tail -f
        root 18618 18617 0 11:40 ? 00:00:00 logger -t test
        root 18619 18617 0 11:40 ? 00:00:00 logger -t test

    As far as I can tell, it's the piping to logger that it doesn't like; if I send STDOUT and STDERR to /dev/null, the problem doesn't occur. Am I doing something wrong, or is this just not possible? (CentOS 5.8) Thanks, leonstr

  • Millions of files in php's tmp folder - how can I delete them?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; option, but that didn't work. ls 'sess_0*' | xargs rm didn't either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to think a directory was full, obviously, but since my web folder only has 60k files (I counted them), I guessed it was the tmp folder (the local one, for this instance of PHP) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        $ find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, PHP5, ISPConfig, suEXEC and FastCGI.
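    Two approaches that avoid building giant argument lists or forking one rm per file (paths taken from the error messages above; -delete requires GNU find):

        find /var/www/clients/client1/web1/tmp -type f -name 'sess_*' -delete

        # Often faster on ext3, and it also resets the bloated directory index
        # the kernel is complaining about: swap the directory out, then delete
        # the old one at leisure (match the original owner and permissions):
        mv tmp tmp_doomed && mkdir tmp && chmod 1777 tmp
        rm -rf tmp_doomed &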

  • Hosting django backend for iPhone / Android app

    - by Ashok Fernandez
    I am looking to make an iPhone/Android app for my university using the Appcelerator Titanium framework. The app will rely heavily on a server backend which will pull information from other sites, figure out what is relevant to the user, and then deliver the content. Some of the information is individual to the user (calendar data), other bits are updated frequently but shared (bus timetables), and others are static and the same for everyone (magazine articles). I was going to use Django, as I am fairly proficient in Python, so I thought it would save time. My question is: which hosting services do you recommend for the server backend? I am expecting about 9,000 people to use the app, with very random spikes in traffic, but unfortunately I have very little to go on at this stage. I have heard a lot about WebFaction; is it suitable for something like this, or am I likely to need something bigger? I don't really want to fork out for a VPS at this stage. What about Amazon's EC2? Would that be more suitable than WebFaction? Sorry for the fairly open-ended question; I'm sort of new to this, so I'm open to all suggestions.

  • Running $ORIGIN linked binaries from setuid scripts on linux

    - by drscroogemcduck
    I'm using suidperl to run some programs that require root permissions. However, the runtime linker won't expand library paths which contain $ORIGIN entries, so the programs I want to run (jstack from Java) won't run. More info here:

        There is one exception to the advice to make heavy use of $ORIGIN. The runtime linker will not expand tokens like $ORIGIN for secure (setuid) applications. This should not be a problem in the vast majority of cases.

    My program looks something like this:

        #!/usr/bin/perl
        $ENV{PATH} = "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.6.0_12/bin:/root/bin";
        $ENV{JAVA_HOME} = "/usr/java/jdk1.6.0_12";

        open(FILE, '/var/run/kil.pid');
        $pid = <FILE>;
        close(FILE);
        chomp($pid);

        if ($pid =~ /^(\d+)/) {
            $pid = $1;
        } else {
            die 'nopid';
        }

        system("/usr/java/jdk1.6.0_12/bin/jstack", "$pid");

    Is there any way to fork off a child process such that the linker will work correctly?
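    One workaround worth sketching (an assumption on my part, not something from the thread): the loader only enters secure mode while the real and effective uids differ, which is exactly the state suidperl runs in. Launching jstack through sudo instead sets both uids to root, so $ORIGIN expands normally:

        # /etc/sudoers.d/jstack (edit with visudo -f); "monitor" is a
        # hypothetical unprivileged user:
        #   monitor ALL=(root) NOPASSWD: /usr/java/jdk1.6.0_12/bin/jstack

        # wrapper script replacing the suidperl one:
        pid=$(sed -n 's/^\([0-9]\+\).*/\1/p' /var/run/kil.pid)
        [ -n "$pid" ] || { echo nopid >&2; exit 1; }
        sudo /usr/java/jdk1.6.0_12/bin/jstack "$pid"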

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. The CPU never seems to get pinned, and I seem to be handling requests nicely. My question concerns memory usage and what concerns (if any) I should have about what I am seeing. Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB/hour. This is a linear slope for about 6 hours, then it appears to level out, though it still seems to lose about 10MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory-loss pattern begins again over the following hours.

    Before:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    5005376    2124868          0     113628     422856
        -/+ buffers/cache:    4468892    2661352
        Swap:     33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
        Mem:       7130244    4467144    2663100          0        228      11172
        -/+ buffers/cache:    4455744    2674500
        Swap:     33554428          0   33554428

    My Ruby code does use memoization, and I'm assuming Ruby/Rails/Unicorn is keeping its own caches. What I'm wondering is: should I be worried about this behaviour? FWIW, my unicorn config:

        worker_processes 15
        listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
        listen 8080, :tcp_nopush => true
        timeout 180
        pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"

        GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

        before_fork do |server, worker|
          STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
          print_gemfile_location

          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          defined?(Resque) and Resque.redis.client.disconnect

          old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # already killed
            end
          end

          File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
          defined?(Resque) and Resque.redis.client.connect
        end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically: is this normal, expected behaviour? tia
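    For what it's worth, the numbers above already hint that this is page cache rather than a leak: the "-/+ buffers/cache" row barely moves (2661352 vs 2674500 free) while "cached" collapses from roughly 413MB to 11MB after the drop_caches. A sketch for watching what the unicorn workers themselves actually hold, independent of cache (the process name is an assumption; check your ps output first):

        # total resident memory across all unicorn workers, in MB:
        ps -C unicorn -o rss= | awk '{ sum += $1 } END { printf "%.1f MB\n", sum/1024 }'

    If that number stays flat over hours, the "lost" memory is just the kernel caching file pages, which it gives back under pressure; there is then no need for drop_caches, and OobGC would be solving a different problem.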

  • MongoDB data directory transfer and upgrade

    - by KPL
    I just transferred my data directory (from Mongo 1.6.5) to a new server and installed Mongo 2.0 on it. I set the data directory path and did sudo service mongod restart. It failed, and the log file output says this:

        ***** SERVER RESTARTED *****
        Sun Oct 9 07:51:47 [initandlisten] MongoDB starting : pid=8224 port=27017 dbpath=/database/mongodb 64-bit host=domU-12-31-39-09-35-81
        Sun Oct 9 07:51:47 [initandlisten] db version v2.0.0, pdfile version 4.5
        Sun Oct 9 07:51:47 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
        Sun Oct 9 07:51:47 [initandlisten] build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Oct 9 07:51:47 [initandlisten] options: { auth: "true", config: "/etc/mongod.conf", dbpath: "/database/mongodb", fork: "true", logappend: "true", logpath: "/var/log/mongo/mongod.log", nojournal: "true" }
        Sun Oct 9 07:51:47 [initandlisten] couldn't open /database/mongodb/local.ns errno:1 Operation not permitted
        Sun Oct 9 07:51:47 [initandlisten] error couldn't open file /database/mongodb/local.ns terminating
        Sun Oct 9 07:51:47 dbexit:
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close listening sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to flush diaglog...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: closing all files...
        Sun Oct 9 07:51:47 [initandlisten] closeAllFiles() finished
        Sun Oct 9 07:51:47 [initandlisten] shutdown: removing fs lock...
        Sun Oct 9 07:51:47 dbexit: really exiting now

    I have already run it with --upgrade once.
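    The errno:1 (Operation not permitted) on local.ns is consistent with an ownership problem after copying the files: the transferred data files would still belong to whoever ran the copy, while the daemon drops privileges to its own account. A hedged check and fix (the daemon account is typically mongod on Red Hat-style packages and mongodb on Debian-style ones; confirm with ps):

        # who owns the copied files, and who does the daemon run as?
        ls -l /database/mongodb | head
        ps -o user= -C mongod

        # if they differ, hand the directory to the daemon user and retry:
        sudo chown -R mongod:mongod /database/mongodb
        sudo service mongod restart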

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services listen on only one of them, and to check them correctly I'd like to know whether it's possible to configure multiple IP addresses for a single host in Icinga. Here's a minimal example.

    Remote server:

        eth0: 1.2.3.4 (public IP)
        eth1: 10.1.2.3 (private IP, secure tunnel)

        Apache listening on 1.2.3.4:80 (public only)
        OpenSSH listening on 10.1.2.3:22 (internal network only)
        Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:

        eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

        define host {
            use        generic-host
            host_name  server1
            alias      server1.gertvandijk.net
            address    10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use        generic-host
            host_name  server1-public
            alias      server1.gertvandijk.net
            address    1.2.3.4
        }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts so they show up as a single host, while keeping an easy way to check each service on its proper address. What is the most elegant, configuration-line-saving solution to this? I read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which already supports the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
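    One pattern that avoids the second host object entirely (a sketch; it relies on standard custom object variables, which Icinga inherits from Nagios, and check_http_public is an illustrative command name, not a stock one): keep the tunnel address as the host's address and stash the public one in a custom variable, then point only the public-facing service checks at that variable:

        define host {
            use              generic-host
            host_name        server1
            alias            server1.gertvandijk.net
            address          10.1.2.3        ; tunnel IP: host checks, SSH, SMTP
            _PUBLIC_ADDRESS  1.2.3.4         ; exposed as $_HOSTPUBLIC_ADDRESS$
        }

        define command {
            command_name  check_http_public
            command_line  $USER1$/check_http -I $_HOSTPUBLIC_ADDRESS$ -p 80
        }

        define service {
            use                  generic-service
            host_name            server1
            service_description  HTTP (public interface)
            check_command        check_http_public
        }

    Everything still hangs off one host, so there is a single host status and notification chain, and the SMTP check runs only once, against whichever of the two addresses you pick for it.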

  • Upstart Script on Centos 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped.
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec su nonrootuser -c "python /usr/local/scripts/script.py"
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "shotgunUpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.
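    Two things stand out here (my reading of the job file, not a confirmed fix). First, upstart only reads job files from /etc/init/, so a job saved as /etc/init.d/pythonupstart.conf would never be picked up. Second, 'expect fork' tells upstart the process will detach, but the Python script above stays in the foreground, so upstart waits for a fork that never happens and mis-tracks the job. A reduced job file to try:

        # /etc/init/pythonupstart.conf  (note: /etc/init, not /etc/init.d)
        description "python script as a service"

        start on runlevel [2345]
        stop on runlevel [016]
        respawn

        # no 'expect' stanza: the script does not daemonize itself
        exec su -s /bin/sh -c 'exec python /usr/local/scripts/script.py' nonrootuser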

  • Can't seem to get C TCP Server-Client Communications Right

    - by Zeesponge
    OK, I need some serious help here. I have to make a TCP server and client. The client connects to the server using a three-stage handshake. Afterwards, while the client is running in the terminal, the user enters Linux shell commands like xinput list, ls -1, etc., anything that uses standard output. The server accepts the commands and uses system() (inside a fork() in an infinite loop) to run them, and the standard output is redirected to the client, where the client prints out each line. Afterward the server sends a completion signal of "\377\n", at which point the client goes back to the command prompt asking for a new command; it closes its connection and exit()'s when the user inputs "quit". I know that you have to dup2() both STDOUT_FILENO and STDERR_FILENO to the client's file descriptor (dup2(client_fd, STDOUT_FILENO)). Everything works except when it comes time for the client to retrieve system()'s stdout and print it out: all I get is a blank line with a blinking cursor (the client waiting on stdin). I've tried all kinds of different routes to no avail. If anyone can help out I would greatly appreciate it.

    TCP SERVER CODE:

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <stdio.h>
        #include <string.h>
        #include <netinet/in.h>
        #include <signal.h>
        #include <unistd.h>
        #include <stdlib.h>
        #include <errno.h>

        //Prototype
        void handle_client(int connect_fd);

        int main()
        {
            int server_sockfd, client_sockfd;
            socklen_t server_len, client_len;
            struct sockaddr_in server_address;
            struct sockaddr_in client_address;

            server_sockfd = socket(AF_INET, SOCK_STREAM, 0);
            server_address.sin_family = AF_INET;
            server_address.sin_addr.s_addr = htonl(INADDR_ANY);
            server_address.sin_port = htons(9734);
            server_len = sizeof(server_address);
            bind(server_sockfd, (struct sockaddr *)&server_address, server_len);

            /* Create a connection queue, ignore child exit details and wait for clients. */
            listen(server_sockfd, 10);
            signal(SIGCHLD, SIG_IGN);

            while(1) {
                printf("server waiting\n");
                client_len = sizeof(client_address);
                client_sockfd = accept(server_sockfd, (struct sockaddr *)&client_address, &client_len);
                if(fork() == 0)
                    handle_client(client_sockfd);
                else
                    close(client_sockfd);
            }
        }

        void handle_client(int connect_fd)
        {
            const char* remsh = "<remsh>\n";
            const char* ready = "<ready>\n";
            const char* ok = "<ok>\n";
            const char* command = "<command>\n";
            const char* complete = "<\377\n";
            const char* shared_secret = "<shapoopi>\n";
            static char server_msg[201];
            static char client_msg[201];
            static char commands[201];
            int sys_return;

            //memset client_msg, server_msg, commands
            memset(&client_msg, 0, sizeof(client_msg));
            memset(&server_msg, 0, sizeof(client_msg));
            memset(&commands, 0, sizeof(commands));

            //read remsh from client and check its validity
            read(connect_fd, &client_msg, 200);
            if(strcmp(client_msg, remsh) != 0) {
                errno++;
                perror("Error Establishing Handshake");
                close(connect_fd);
                exit(1);
            }

            //memset client_msg, then write remsh to client
            memset(&client_msg, 0, sizeof(client_msg));
            write(connect_fd, remsh, strlen(remsh));

            //read shared_secret from client and check its validity
            read(connect_fd, &client_msg, 200);
            if(strcmp(client_msg, shared_secret) != 0) {
                errno++;
                perror("Invalid Security Passphrase");
                write(connect_fd, "no", 2);
                close(connect_fd);
                exit(1);
            }

            //memset client_msg, then write ok to client
            memset(&client_msg, 0, sizeof(client_msg));
            write(connect_fd, ok, strlen(ok));

            // dup2 STDOUT_FILENO <= client fd, STDERR_FILENO <= client fd
            dup2(connect_fd, STDOUT_FILENO);
            dup2(connect_fd, STDERR_FILENO);

            //begin while... while read (client_msg) from server and > 0
            while(read(connect_fd, &client_msg, 200) > 0) {
                //check command validity from client
                if(strcmp(client_msg, command) != 0) {
                    errno++;
                    perror("Error, unable to retrieve data");
                    close(connect_fd);
                    exit(1);
                }

                //memset client_msg, then write ready to client
                memset(&client_msg, 0, sizeof(client_msg));
                write(connect_fd, ready, strlen(ready));

                //read commands from client, run them using system()
                read(connect_fd, &commands, 200);
                sys_return = system(commands);

                //check success of system()
                if(sys_return < 0) {
                    perror("Invalid Commands");
                    errno++;
                }

                //memset commands, then write complete to client
                memset(commands, 0, sizeof(commands));
                write(connect_fd, complete, sizeof(complete));
            }
        }

    TCP CLIENT CODE:

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <stdio.h>
        #include <string.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <unistd.h>
        #include <stdlib.h>
        #include <errno.h>
        #include "readline.c"

        int main(int argc, char *argv[])
        {
            int sockfd;
            int len;
            struct sockaddr_in address;
            int result;
            const char* remsh = "<remsh>\n";
            const char* ready = "<ready>\n";
            const char* ok = "<ok>\n";
            const char* command = "<command>\n";
            const char* complete = "<\377\n";
            const char* shared_secret = "<shapoopi>\n";
            static char server_msg[201];
            static char client_msg[201];

            memset(&client_msg, 0, sizeof(client_msg));
            memset(&server_msg, 0, sizeof(server_msg));

            /* Create a socket for the client. */
            sockfd = socket(AF_INET, SOCK_STREAM, 0);

            /* Name the socket, as agreed with the server. */
            memset(&address, 0, sizeof(address));
            address.sin_family = AF_INET;
            address.sin_addr.s_addr = inet_addr(argv[1]);
            address.sin_port = htons(9734);
            len = sizeof(address);

            /* Now connect our socket to the server's socket. */
            result = connect(sockfd, (struct sockaddr *)&address, len);
            if(result == -1) {
                perror("ACCESS DENIED");
                exit(1);
            }

            //write remsh to server, read it back, check its validity
            write(sockfd, remsh, strlen(remsh));
            read(sockfd, &server_msg, 200);
            if(strcmp(server_msg, remsh) != 0) {
                errno++;
                perror("Error Establishing Initial Handshake");
                close(sockfd);
                exit(1);
            }

            //memset server_msg, write shared secret to server, read ok back
            memset(&server_msg, 0, sizeof(server_msg));
            write(sockfd, shared_secret, strlen(shared_secret));
            read(sockfd, &server_msg, 200);
            if(strcmp(server_msg, ok) != 0) {
                errno++;
                perror("Incorrect security phrase");
                close(sockfd);
                exit(1);
            }

            //? dup2 STDIN_FILENO = server socket fd?
            //dup2(sockfd, STDIN_FILENO);

            while(1) {
                //memset both msg arrays
                memset(&client_msg, 0, sizeof(client_msg));
                memset(&server_msg, 0, sizeof(server_msg));

                //print Enter Command, scan input, fflush to stdout
                printf("<<Enter Command>> ");
                scanf("%s", client_msg);
                fflush(stdout);

                //check quit input, if true close and exit successfully
                if(strcmp(client_msg, "quit") == 0) {
                    printf("Exiting\n");
                    close(sockfd);
                    exit(EXIT_SUCCESS);
                }

                //write command to server, read ready back, check its validity
                write(sockfd, command, strlen(command));
                read(sockfd, &server_msg, 200);
                if(strcmp(server_msg, ready) != 0) {
                    errno++;
                    perror("Failed Server Communications");
                    close(sockfd);
                    exit(1);
                }

                //memset server_msg
                memset(&server_msg, 0, sizeof(server_msg));

                //begin looping and retrieving from the socket,
                //break loop at EOF or complete
                while((read(sockfd, server_msg, 200) != 0) && (strcmp(server_msg, complete) != 0)) {
                    //while((fgets(server_msg, 4096, stdin) != EOF) || (strcmp(server_msg, complete) == 0)) {
                    printf("%s", server_msg);
                    memset(&server_msg, 0, sizeof(server_msg));
                }
            }
        }

  • PC dies when running at 100% CPU

    - by user155631
    I recently wrote some Java code to generate images of the Mandelbrot set (a fractal). I made use of the new Fork/Join facility in Java 7 to run separate threads on all four cores (2 real, 2 virtual) simultaneously, using a large number of iterations for greater accuracy. The problem is, the process runs fine for about a minute, and then it's as if someone has pulled the plug and the PC just dies. I thought it must be the CPUs overheating, so I ran Real Temp to monitor the temperature; it's an Intel i3 processor. I can see the temperature creeping up to 70 degrees, where it seems to level off, and the machine then runs for about another 30 seconds before dying. According to Real Temp, there's still a gap of 35 degrees between the actual temperature and TJ Max. I also tried disabling the "CPU TM function" in the BIOS, but the problem still occurs. A colleague suggested that it might be a power supply problem, so I borrowed a more powerful PSU (I can't remember what wattage it was, but it's higher than my 500W unit). The exact same thing still happens though. Is anyone able to suggest what the problem might be, or what I can try next?

  • 'Future-proof' Live Audio Capture & Broadcast [migrated]

    - by maxpowers
    I'm looking to implement some live audio broadcasting functionality within a Ruby on Rails site for a client, and I was hoping to get some input from people who have tackled this type of thing before. Essentially what I need to do is capture and record a user's audio (via microphone, line-in, etc.), then stream it to 1,000+ listeners with very little latency, sub-2-second if possible. So it looks like we've got three parts:

    1. Web-based audio capture (likely with Flash or JS)
    2. A server to accept the audio feed and stream it to listeners (likely Icecast or Wowza)
    3. The actual audio player (maybe HTML5 with a Flash fallback? Maybe this jPlayer fork)

    Does RTMP make sense here? Or maybe HTTP? What's the most 'future-proof' way to make this happen? I'm building with mobile in mind, but still want to be able to stream to anyone. I've found lots of potentially helpful threads and software, but I'm struggling to get an idea of how it all fits together. I'm a front-end guy and way out of my comfort zone here, so if anyone has insights to offer, I'd love to hear them.

  • Redis connection issue

    - by mre
    We are currently experiencing a lot of Redis errors with the message:

        Unable to connect: read error on connection, trying next server

    We run Redis on FreeBSD using phpredis, and we have a hard time reproducing the error on Ubuntu, so this might be a hint. There's a long-running issue on that topic on GitHub. Basically, we get a socket from the operating system with a call to connect(host, port, timeout) in phpredis, but when we then do a select(db_index), we get an exception. Could there be an issue with persistence? I assume that connect does nothing in the background, and select tries to access the connection, which is actually closed. We don't run into a timeout. We tried tuning TIME_WAIT without success. Any other ideas on where the problem might come from? What is the best way to track the issue down? dtrace maybe?

    Update: We are currently looking into our BGSAVE settings. Interestingly, it takes half a second and more to create a fork for the process which regularly writes the data to disk (persistence), and maybe Redis can't respond to connect() requests during that timespan.
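    A cheap way to test the BGSAVE theory before reaching for dtrace (stock redis-cli commands; latest_fork_usec is reported in the stats section of INFO):

        redis-cli INFO stats | grep latest_fork_usec   # cost of the most recent fork, in microseconds
        redis-cli CONFIG GET save                      # how often RDB snapshots are triggered
        redis-cli --latency                            # leave running; watch for spikes around BGSAVE

    If latest_fork_usec is in the hundreds of milliseconds and the latency spikes line up with snapshot times, spacing out the save points (or moving RDB persistence to a replica) is the usual mitigation.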

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (who is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly-accessible IP address, using this command: ssh -nNT -R0:localhost:2222 [email protected] In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is because I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels. The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost) then how do we know which port refers to which client? I'm aware that ssh reports the port number to the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so that the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client. Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?) I'd love some pointers. Thanks!
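    One server-side approach, sketched here as an assumption rather than a known-good recipe: every reverse tunnel is a listening socket owned by the sshd process serving that client, so the sshd pid is the join key between "which port" and "which connection":

        # On Server: list reverse-forward listeners with their owning sshd pid
        sudo ss -tlpn | grep sshd
        #   e.g. 127.0.0.1:38122 ... users:(("sshd",pid=4242,...))
        # (or: sudo netstat -tlpn | grep sshd on older systems)

        # Tie that pid back to the client's key/user/source IP in the auth log
        sudo grep 'sshd\[4242\]' /var/log/auth.log

    Giving each Client its own account, or at least its own authorized_keys entry, makes the mapping unambiguous, since the accept line in the log then identifies the client directly. As far as I know, the /etc/ssh/sshrc idea won't work here: sshrc only runs when a session is started, and a tunnel opened with -N never starts one.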
