Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • Is there a way to get docky to launch a new instance?

    - by Matt Briggs
    So I'm really loving the whole gnome-do/Docky thing. My question is that on other docks, you can hold down a modifier key to launch a new instance rather than switching to an already-open instance of an app. So let's say I have Chrome on the Windows 7 dock: the first click launches Chrome, subsequent clicks focus the open window, Shift-click opens a new instance of Chrome, and Ctrl-Shift-click launches a new instance as admin. Is there anything similar in Docky?

    Read the article

  • Missing dependency when trying to update

    - by ant2009
    Hello, Fedora 12, kernel 2.6.32.9-67.fc12.i686. I have tried doing what it recommends at the bottom of the output, but that didn't work, so I have to run yum upgrade --skip-broken. Does anyone know how to solve this problem? Many thanks. nss-3.12.6-1.2.fc12.i686 from updates has depsolving problems --> Missing Dependency: nspr >= 4.8.4 is needed by package nss-3.12.6-1.2.fc12.i686 (updates) nss-3.12.6-1.2.fc12.i686 from updates has depsolving problems --> Missing Dependency: nss-util = 3.12.6 is needed by package nss-3.12.6-1.2.fc12.i686 (updates) Error: Missing Dependency: nspr >= 4.8.4 is needed by package nss-3.12.6-1.2.fc12.i686 (updates) Error: Missing Dependency: nss-util = 3.12.6 is needed by package nss-3.12.6-1.2.fc12.i686 (updates) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest

    Read the article

  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, and not local users. We discovered that if an LDAP user has a crontab with an entry marked to be run @reboot, the command will not actually run upon the reboot of a machine. I'm pretty sure that this is because the cron daemon starts before networking is fully up, so the crontabs of any LDAP users aren't loaded and run or checked for @reboot. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until that user runs crontab -e again and saves, or until the cron daemon is rebooted. We were able to fix one part of this problem by adding the following line to /etc/crontab: @reboot root /bin/sleep 45 && /etc/init.d/cron restart Thus, when cron starts back up upon a reboot, it waits for networking to get up, then restarts the cron daemon. That fixes the problem of crontabs not being read at all for LDAP users. However, since it's the cron daemon being restarted and not the computer, @reboot entries are ignored. Is there a way for a user to make a command run upon restarting the daemon, rather than a reboot? Or is there a better solution to this overall problem? Thanks.
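
    A hedged alternative sketch (the user names "alice" and "bob", the 45-second sleep, and the per-user ~/.on-boot script are all assumptions for illustration, not part of the original setup): instead of restarting cron, let root's own @reboot entry wait for the network and then run each LDAP user's boot script, so nothing depends on cron re-reading LDAP crontabs.

      # /etc/crontab -- root runs a wrapper once the network has had time to come up
      @reboot root sleep 45 && /usr/local/sbin/run-user-boot-scripts

      #!/bin/sh
      # /usr/local/sbin/run-user-boot-scripts -- hypothetical wrapper: run ~/.on-boot
      # as each LDAP user, if that user has made it executable
      for u in alice bob; do
          su -s /bin/sh -c 'test -x "$HOME/.on-boot" && "$HOME/.on-boot"' "$u"
      done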

    Read the article

  • Cloud computing with Eucalyptus?

    - by neolix
    Hi, greetings! I want to set up a small cloud-computing environment using our old 2-core server systems. We are new to cloud systems and have already Googled the topic. We are looking to host VMs on top of them; if anyone has done this, please share documentation or a how-to. We have 50-plus servers that we are not using, each with 2 cores, 4 GB of RAM, and a 1 TB HDD. CentOS is my base OS, and we are looking to host Windows guests. Right now these servers can only do paravirtualization. Thanks.

    Read the article

  • cannot print from flash-player plugin

    - by eleven81
    I am running the flash-player plug-in 10.0.32.10 inside Firefox on a SLED 11 machine. Firefox can print to the network printer without issue from File > Print. However, I cannot get the flash-player plugin to print at all. The print dialog comes up and asks which printer and which pages, but when I click Print it is as if I had pressed Cancel. Is this a known issue?

    Read the article

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
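
    A rough sketch of pushing the client side harder without tripping the per-process descriptor limit (URL, counts and the limit value are placeholders; raising the limit beyond the hard limit needs root or an /etc/security/limits.conf change):

      ulimit -n 65536                                # raise the open-file limit for this shell
      ab -n 20000 -c 1000 -k http://localhost:8000/  # more concurrency, keep-alive enabled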

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with nginx and pointed my public HTML root at /home/user/public_html/website.com/public, but requests always end up being served from /usr/local/nginx/html/. How can I change this? nginx.conf - user www-data www-data; worker_processes 4; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 5; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /usr/local/nginx/sites-enabled/*; } /usr/local/nginx/sites-enabled/default - server { listen 80; server_name localhost; location / { root html; index index.php index.html index.htm; } # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } /usr/local/nginx/sites-available/website.com - server { listen 80; server_name website.com; rewrite ^/(.*) http://www.website.com/$1 permanent; } server { listen 80; server_name www.website.com; access_log /home/user/public_html/website.com/log/access.log; error_log /home/user/public_html/website.com/log/error.log; location / { root /home/user/public_html/website.com/public/; index index.php index.html; } # pass the PHP scripts to FastCGI server listening on # 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name; } } The error message I get is Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php' - the server tries to find the file in the nginx folder and not in my public HTML directory.
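
    One hedged reading of the symptom: only sites-enabled/* is included by nginx.conf, so if the website.com file lives only in sites-available, every request falls through to the default "localhost" server whose root is the build-time html directory. A minimal sketch, assuming a source build under /usr/local/nginx (the pid-file path is that build's default):

      ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
      /usr/local/nginx/sbin/nginx -t          # sanity-check the configuration
      /usr/local/nginx/sbin/nginx -s reload   # or: kill -HUP $(cat /usr/local/nginx/logs/nginx.pid)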

    Read the article

  • *nix OS that is easy to update to latest software

    - by rjstelling
    I need to configure a server (*nix) that runs our (bespoke) CMS and applications. In the past I have defaulted to using CentOS 5, but I find it outdated and difficult to upgrade to the software versions we require. For example, we need PHP 5.3, but CentOS 5 ships 5.2. Updating works but breaks something else (normally MySQL support in PHP). Eventually it reaches a point where I can't upgrade at all because of missing dependencies and incompatible versions: Error: Missing Dependency: httpd = 2.2.3-43.el5.centos.3 is needed by package httpd-devel-2.2.3-43.el5.centos.3.i386 (updates) Is there a better alternative OS for hassle-free updates? I need: Apache 2.2.17 (the development version, for apxs) MySQL 5.5.8 PHP 5.3.5

    Read the article

  • How to restore backed-up email files in qmail

    - by Maysam
    I have a problem restoring some old backed-up mail files on a mail server that uses qmail. The problem is that when I copy a new email file into the /cur directory, the number of emails shown next to the inbox increases, but when I click on the inbox I don't see the newly copied email; I can only see the old emails. I also deleted the maildirsize and courierimapuiddb files and they were automatically created again, but it didn't help and I still cannot see the email in my inbox. Is there something I am missing? How can I restore the backed-up email files? Please note that when I copy the email files into the /.sent-mail/cur directory, they are all displayed in my sent box, but that doesn't happen for inbox files in /cur.

    Read the article

  • How to grant read/write to a specific user in any existing or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything under /path/to/git. I'm attempting to give read/write access to john to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way, because this seems too complicated for a very basic purpose. Q: How should I set up the permissions to give rw access to john to anything under /path/to/git/myrepo and make it resilient to tree-structure changes? Q2: If I should take a step back and change the general approach, please tell me.
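
    A minimal sketch using POSIX default ACLs, which new files and directories inherit automatically, assuming the filesystem is mounted with ACL support (paths and the user name are the ones from the question; the capital X grants execute only on directories and files that are already executable):

      setfacl -R -m u:john:rwX /path/to/git/myrepo      # grant on the existing tree
      setfacl -R -d -m u:john:rwX /path/to/git/myrepo   # default ACL, inherited by new entries
      getfacl /path/to/git/myrepo                       # verify the result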

    Read the article

  • How to fill in the network line in the Ubuntu interfaces config file?

    - by matnagel
    I have to configure the network interface of an Ubuntu Hardy server. The hosting provider told me that this is the network data for the machine: IP Range: 111.111.200.74 to 111.111.200.78 Netmask: 255.255.255.248 Broadcast: 111.111.200.79 Gateway: 111.111.200.73 Subnet: 111.111.200.72/29 I am only using the first IP address. I will update the /etc/hosts file with 111.111.200.74, but I am still unsure what the /etc/network/interfaces file should look like. This is my plan: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 111.111.200.74 netmask 255.255.255.248 network 111.111.200.??? broadcast 111.111.200.79 gateway 111.111.200.73 As you can see, I don't know how to build the network line. How would I calculate the data for the network line, and what is the result? (I changed the first two octets of the subnet; they are not "111.111" in the real setup.)
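
    For what it's worth, the network line is just the subnet (all-zero-host) address, which the provider already lists as 111.111.200.72/29 - i.e. 111.111.200.74 AND 255.255.255.248 = 111.111.200.72. A sketch of the resulting stanza:

      auto eth0
      iface eth0 inet static
          address   111.111.200.74
          netmask   255.255.255.248
          network   111.111.200.72
          broadcast 111.111.200.79
          gateway   111.111.200.73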

    Read the article

  • Ubuntu 9.10 X Stuck in restart loop - I think...

    - by widgisoft
    I'm trying out Ubuntu; the installation went fine. I upgraded to the proprietary nVidia drivers, but on restart I get a login prompt and the screen flashes really fast, almost as if the X server is trying to start and failing. I can type when the screen isn't in a "flash", as it were, but it's so fast and random that it's hard to even type a login name without it missing some characters - this makes typing a password (i.e. not being able to see which characters made it or not) very hard. I can log back into the live CD and alter my settings, but I can't even find out how to stop X from starting on boot; it looks like they've moved everything around :-p I'd like to: stop X from crashing and going insane (if it is actually the X server), and know how to stop X from starting on bootup. It looks like interactive boot is also off by default now. Update: a temporary workaround seems to be enabling ssh and just connecting to the box over the network - ssh seems to work fine :-p Cheers, Chris
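
    A small hedged sketch of the pieces involved on a stock Karmic install, where X is brought up by the gdm Upstart job (so "stopping X" means stopping or disabling that job):

      sudo stop gdm            # Upstart: stop the display manager (and X) right now
      # to keep it from starting at boot, adjust the "start on" stanza in /etc/init/gdm.conf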

    Read the article

  • With CentOS 6 and LXC, "ifconfig" is unable to see network interface (but busybox "ifconfig" works fine)

    - by larsks
    I've just started working with LXC under CentOS 6 (via the libvirt adapter). If I create an LXC container, I'm unable to see any network interfaces when using the native system tools: # ifconfig -a # The behavior is very odd; specifying an interface by names yields neither the expected output nor an error message. This is true even for clearly invalid interface names, like this: # ifconfig foo # The ip command exhibits the same behavior. On the other hand, if I use "ifconfig" provided by busybox, everything works as expected: # busybox ifconfig -a eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:268 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17814 (17.3 KiB) TX bytes:552 (552.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) So...what does busybox know that the native tools don't? The libvirt config for this environment is pretty standard; the network definition looks like this: <interface type='network'> <mac address='52:54:00:e0:12:c8'/> <source network='default'/> <target dev='veth0'/> </interface> The full configuration is here if you think it might help. I'm running: lxc-0.7.2-2.el6.x86_64 kernel-2.6.32-71.29.1.el6.x86_64 EDIT Weirder and weirder...it's a display issue, not a functionality issue. I can see the output of ifconfig if I pipe it into anything, so for example: # ifconfig eth0 | cat eth0 Link encap:Ethernet HWaddr 52:54:00:E0:12:C8 inet addr:192.168.10.10 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::5054:ff:fee0:12c8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:573 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:37914 (37.0 KiB) TX bytes:552 (552.0 b) And in fact even when not piping the output, strace shows that ifconfig is in fact writing the output to file descriptor 1 (aka stdout), so it's not clear why no output is actually showing up. This could be either an LXC or a virsh issue, I guess.
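
    A hedged sketch of narrowing down where the output goes missing, since piping shows the data is produced - this just compares what the tools write with what the container's terminal displays:

      ifconfig -a > /tmp/if.txt && wc -c /tmp/if.txt   # confirm output really is generated
      ifconfig -a | od -c | head                       # inspect the raw bytes being written
      tty; stty -a                                     # check what terminal the container handed us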

    Read the article

  • How hard is it for a software developer to maintain a server?

    - by Samy
    I'm a software developer and don't have much experience as a sysadmin. I developed a web app and was considering buying a server and hosting the web app on it. Is this a huge undertaking for a web developer? What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff? I'm a single user, and not planning to sell the service to others. Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?

    Read the article

  • What is ranlib?

    - by Ying
    I have been using a Mac OS X system for a while, but only recently started poking into its guts. I found a guide telling me to run 'sudo ranlib /usr/local/lib/libjpeg.a' (while installing libjpeg). I have read the ranlib manual and tried looking it up online, but I simply don't understand it. What resources do I need to look up to learn more, or can someone give a concise explanation of its use? Thanks in advance!
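
    In short, ranlib (re)generates the symbol index of a static archive (a .a file) so the linker can resolve symbols from it; it is equivalent to running "ar s" on the archive. A small sketch of poking at the file the guide had you touch:

      sudo ranlib /usr/local/lib/libjpeg.a   # rebuild the archive's symbol table
      ar -t /usr/local/lib/libjpeg.a         # list the object files stored in the archive
      nm /usr/local/lib/libjpeg.a | head     # inspect the symbols those objects export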

    Read the article

  • Is it possible to use SELinux MCS permissions with Samba?

    - by Yuri
    I created user1: adduser --shell /sbin/nologin --no-create-home user1 passwd user1 smbpasswd -a user1 smbpasswd -e user1 semanage login -a -s "unconfined_u" -r "s0-s0:c0" user1 and added a category c0 to the folder ./123 inside the Samba share: chcat s0:c0 /share/123/ After that, user1 can't enter this folder: type=AVC msg=audit(1332693158.129:48): avc: denied { read } for pid=1122 comm="smbd" name="123" dev=sda1 ino=786438 scontext=system_u:system_r:smbd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0:c0 tclass=dir But if I remove the c0 category: restorecon -v /share/123/ user1 opens the folder with no problem. Am I doing something wrong, or does Samba not support SELinux MCS? Installed on CentOS 6.2 are: samba3.i686 3.6.3-44.el6 @sernet-samba selinux-policy.noarch 3.7.19-126.el6_2.10 @updates selinux-policy-targeted.noarch 3.7.19-126.el6_2.10 @updates

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste in a terminal): wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar ... and so on. What can I do to download all these files one after another automatically?
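
    A minimal sketch of driving the same command from a list, assuming a plain-text file urls.txt with one download URL per line (the file name and the PSnn-to-partnn renaming rule are assumptions based on the URLs above):

      while read -r url; do
          n=$(echo "$url" | sed 's/.*PS\([0-9]*\)\.rar.*/\1/')          # pull "11" out of PS11.rar
          wget -c --load-cookies cookies.txt "$url" -O "part${n}.rar"
      done < urls.txt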

    Read the article

  • Hugepages not utilized by MySQL 5.0, CentOS 5

    - by TechZilla
    I've set up Hugepages, but I'm not seeing any of them reserved. Have I missed a step, or is MySQL, for some particular reason, unable to utilize the Hugepages? I have not created a mount of hugetlbfs, although from what I read, MySQL would not call pages in such a manner. If I'm wrong, please let me know, as that would be a trivial solution. Almost all my MySQL tables are using InnoDB. NOTE: I created a hugetlbfs; no change, as expected. Is it possible that rebooting would rectify this situation? I would not want to go through the procedure, as this is high availability, but would do so if necessary. These are the configurations that I believe are relevant. /etc/sysctl.conf ... ## Huge Pages vm.nr_hugepages = 4096 vm.hugetlb_shm_group = 27 ## SHM kernel.shmmax = 34359738368 kernel.shmall = 8589934592 ... /etc/security/limits.conf ... mysql soft nofile 12888 mysql hard nofile 51552 @mysql soft memlock unlimited @mysql hard memlock unlimited /etc/my.cnf [mysqld] large-pages ... grep Huge /proc/meminfo HugePages_Total: 4096 HugePages_Free: 4096 HugePages_Rsvd: 0 Hugepagesize: 2048 kB id mysql uid=27(mysql) gid=27(mysql) groups=27(mysql) context=root:system_r:unconfined_t:SystemLow-SystemHigh tail -6 /var/log/mysqld.log InnoDB: HugeTLB: Warning: Failed to allocate 1342193664 bytes. errno 12 InnoDB HugeTLB: Warning: Using conventional memory pool 120808 15:49:25 InnoDB: Started; log sequence number 0 1729804158 120808 15:49:25 [Note] /usr/libexec/mysqld: ready for connections. Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution I would really appreciate any help; I'm completely out of ideas. If I missed any more relevant configs or diagnostics, please comment and I'll add them to the question.
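
    errno 12 is ENOMEM; with MySQL's large-pages support the buffer pool is a SysV shared-memory segment backed by huge pages, so a hedged checklist (paths are the stock CentOS ones) is to confirm the limits and settings the daemon actually starts with:

      su -s /bin/bash mysql -c 'ulimit -l'   # memlock limit a PAM login as mysql would get
      sysctl vm.nr_hugepages vm.hugetlb_shm_group kernel.shmmax kernel.shmall
      grep Huge /proc/meminfo                # re-check Total/Free/Rsvd right after a failed start
      # limits.conf is applied by PAM at login; a daemon started from /etc/init.d/mysqld may never
      # pass through PAM, so adding "ulimit -l unlimited" to the init script before mysqld_safe
      # runs is a common, if blunt, thing to try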

    Read the article

  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing Grub about where to find the kernel. The weird thing is that Grub is on the same drive as the kernel I want it to boot so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell Grub to never go looking on the flash card reader?

    Read the article

  • Difference between tcp recv buffer and tcp receive window size?

    - by pradeepchhetri
    The following command shows the TCP receive buffer size in bytes: $ cat /proc/sys/net/ipv4/tcp_rmem 4096 87380 4001344 where the three values signify the min, default and max values respectively. Then I tried to find the TCP window size using the tcpdump command: $ sudo tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn and port 80 and host google.com' tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 16:15:41.465037 IP 172.16.31.141.51614 > 74.125.236.73.80: Flags [S], seq 3661804272, win 14600, options [mss 1460,sackOK,TS val 4452053 ecr 0,nop,wscale 6], length 0 I got the window size to be 14600, which is 10 times the MSS. Can anyone please tell me the relationship between the two?
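
    In short, tcp_rmem bounds the memory the kernel may devote to a socket's receive buffer, while the window advertised on the wire is the portion of that buffer currently offered to the peer; the SYN above carries the unscaled initial window (14600 here), and after the handshake the 16-bit win field is multiplied by 2^wscale (2^6 = 64 here), letting it grow toward the buffer ceiling. A small sketch of the related knobs:

      sysctl net.ipv4.tcp_rmem             # min, default, max receive-buffer size (bytes)
      sysctl net.ipv4.tcp_window_scaling   # 1 = the wscale option is negotiated
      sysctl net.core.rmem_max             # hard cap when an application sets SO_RCVBUF itself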

    Read the article

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc -- so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets would change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that). The "musts" we've come up with for any replacement system are: Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) -- this is related to the next point: Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point) Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible) Must be DFSG-free Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools are our SOP) Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards) Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language" Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves": Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically) Should support raw tables Should allow you to specify particular ICMP in both incoming packets and REJECT rules Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt) The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins) the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.

    Read the article

  • Netcat file transfer problem

    - by thepurplepixel
    I have two custom scripts I just wrote to facilitate transferring files between my VPS and my home server. They are both written in bash (short & sweet): To send: #!/bin/bash SENDFILE=$1 PORT=$2 HOST='<my house>' HOSTIP=`host $HOST | grep "has address" | cut --delimiter=" " -f 4` echo Transferring file \"$SENDFILE\" to $HOST \($HOSTIP\). tar -c "$SENDFILE" | pv -c -N tar -i 0.5 | lzma -z -c -6 | pv -c -N lzma -i 0.5 | nc -q 1 $HOSTIP $PORT echo Done. To receive: #!/bin/bash SERVER='<myserver>' SERVERIP=`host $SERVER | grep "has address" | cut --delimiter=" " -f 4` PORT=$1 echo Receiving file from $SERVER \($SERVERIP\) on port $PORT. nc -l $PORT | pv -c -N netcat -i 0.5 | lzma -d -c | pv -c -N lzma -i 0.5 | tar -xf - echo Done. The problem is that, for a very quick second, I see something flash along the lines of "Connection Refused" (before pv overwrites it), and no file is ever transferred. The port is forwarded through my router, and nmap confirms it: ~$ sudo nmap -sU -PN -p55515 -v <my house> Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-21 18:10 EDT NSE: Loaded 0 scripts for scanning. Initiating Parallel DNS resolution of 1 host. at 18:10 Completed Parallel DNS resolution of 1 host. at 18:10, 0.00s elapsed Initiating UDP Scan at 18:10 Scanning 74.13.25.94 [1 port] Completed UDP Scan at 18:10, 2.02s elapsed (1 total ports) Host 74.13.25.94 is up. Interesting ports on 74.13.25.94: PORT STATE SERVICE 55515/udp open|filtered unknown Read data files from: /usr/share/nmap Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds Raw packets sent: 2 (56B) | Rcvd: 5 (260B) Also, running netcat normally doesn't work either: squircle@summit:~$ netcat <my house> 55515 <my house> [<my IP>] 55515 (?) : Connection refused Both boxes are Ubuntu Karmic (9.10). The receiver has no firewall, and outbound traffic on that port is allowed on the sender. I have no idea what to troubleshoot next. Any ideas? P.S.: Feel free to move this to SO/SF if you feel it would fit better there.
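
    One hedged observation: the nmap run above probed UDP (-sU), while nc in these scripts speaks TCP, so "open|filtered" there says nothing about the TCP port. A quick TCP-side check might look like this (host and port as in the scripts; flag spellings differ between the traditional and OpenBSD netcat builds):

      nmap -sT -PN -p 55515 <my house>   # TCP connect scan of the forwarded port
      nc -l 55515                        # on the receiver (OpenBSD nc; traditional nc wants -l -p 55515)
      nc -vz <my house> 55515            # from the sender: just test whether the connection opens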

    Read the article

  • Apache error "child pid XXXX exit signal exceeded file size limit (25)"

    - by Stephen Melrose
    Morning all, Apache on our internal development server stopped working last night. It's running, but all we get is a blank screen and no server errors. Examining the error log shows the following: [Fri Apr 23 09:13:57 2010] [notice] child pid XXXX exit signal exceeded file size limit (25) [Fri Apr 23 09:14:03 2010] [notice] child pid XXXX exit signal exceeded file size limit (25) [Fri Apr 23 09:14:03 2010] [notice] child pid XXXX exit signal exceeded file size limit (25) [Fri Apr 23 09:14:06 2010] [notice] child pid XXXX exit signal exceeded file size limit (25) After some Googling, we found that this is due to Apache trying to handle a file greater than its maximum allowed limit, which by default is 2 GB and is usually an error log. I did a search using find . -size +1000000k -ls (find all files greater than 1 GB) in our log and web folders, but nothing showed up. I've also restarted Apache and rebooted the server itself several times, and I've completely wiped the log folder and started afresh. Nothing is working. Any ideas as to what else might be causing this? Thank you
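
    For what it's worth, signal 25 on x86 Linux is SIGXFSZ, raised when a write would exceed a file-size limit - either the process's RLIMIT_FSIZE ulimit or, on older 32-bit builds, the 2 GB non-largefile ceiling. A hedged sketch for checking what the running children actually have (the process name may be httpd or apache2 depending on the distro, and /proc/<pid>/limits needs a reasonably recent kernel):

      grep "file size" /proc/$(pgrep -o apache2)/limits   # limit of the oldest Apache process
      ulimit -f                                           # limit in the shell that restarts Apache
      # writes by PHP/CGI children (sessions, uploads, their own logs) count too, so the
      # offending file need not live under the Apache log directory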

    Read the article

  • Mounting Google Drive via WebDAV directly on Google

    - by WoJ
    I would like to mount my Google Drive on my RPi using davfs2, but I have not found any direct way to do it for Google Drive. There are instructions on how to use dav-pocket to do that indirectly, but these are from 2010. Google Groups discussions about the lack of direct WebDAV access to Google are roughly from the same time, and I could not find any other way to do the mount. Has anything changed, and would anyone know if Google has enabled WebDAV - and if so, what is the URL? An alternate synchronization system would be fine as well (rsync, for instance) - I did not find any particular information on that either. Thank you!

    Read the article
