My domain is hosted on one server, and I want to point a wildcard DNS record at another server.
Basically I have two servers, and I want all wildcard subdomains to go to the second server. How can I do that?
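In BIND zone-file terms this is just a wildcard A record pointing at the second server. A sketch with placeholder IPs (a registrar's DNS panel may express the same thing differently):

```
; example.com zone -- IPs are placeholders
@   IN  A   192.0.2.1    ; main domain, server one
*   IN  A   192.0.2.2    ; all other subdomains, server two
```

Explicit records (like `@` or `www`) always win over the wildcard, so the main domain keeps resolving to server one.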
I am trying to build a Python module (pyfits) but I get the following error:
# python setup.py install
/home/steve/src/pyfits-2.2.2/stsci_distutils_hack.py:239: DeprecationWarning: os.popen3 is deprecated. Use the subprocess module.
(sin, sout, serr) = os.popen3(cmd)
running install
error: invalid Python installation: unable to open /usr/lib64/python2.6/config/Makefile (No such file or directory)
I get the same error when I try to build other modules, so my guess is that I am missing a Python development library. I am running Mandriva 2010.0. Any suggestions?
I have a Red Hat 5.8 server that is bound to Active Directory, and users are authenticated against Active Directory when they log in via SFTP. User home folders are created at login via /etc/pam.d/system-auth. The specific line that creates the home folder is:
session optional pam_mkhomedir.so skel=/etc/skel/ umask=0066
This correctly gives home folders 711 permissions, so no one else can read their directories. The problem is that pam_mkhomedir.so also applies these permissions to all folders/files copied from /etc/skel, which I don't want. There is a public_html folder (for Apache) which needs 755 permissions so users can create web pages.
Is there a way to either a) stop pam_mkhomedir.so from recursively changing the file permissions, or b) run a script that creates the public_html folder with the correct permissions after the skeleton is copied?
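For option (b), a hedged sketch of a small helper script (the script name and the way it is hooked in are assumptions): it recreates public_html with 755 after the skeleton copy, taking the home directory as an argument and defaulting to $HOME for a quick test.

```shell
#!/bin/sh
# fix-public-html.sh -- hypothetical helper: ensure public_html exists with
# mode 755 even after pam_mkhomedir's umask=0066 has been applied to the copy
HOME_DIR="${1:-$HOME}"
mkdir -p "$HOME_DIR/public_html"
chmod 755 "$HOME_DIR/public_html"
```

It could be hooked in after pam_mkhomedir with something like `session optional pam_exec.so /usr/local/sbin/fix-public-html.sh` (hypothetical line; pam_exec.so should be available on RHEL 5's PAM, but verify, and note it exposes the user via the PAM_USER environment variable rather than an argument, so the real script would look the home directory up with getent).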
I have somehow managed to write an ISO 9660 image onto my USB drive, which makes my computer think that the device is actually a CD. I have tried various methods of removing this partition, but nothing seems to work. I have tried fdisk, which says: $ fdisk -l /dev/sdb
Cannot open /dev/sdb
parted crashes when I try to use it on this device.
I have even tried $ dd if=/dev/zero of=/dev/sdb but it just hangs with no output (either on screen or on disk). However, when I plug the USB in, it does mount, and I can view (but not edit) the files on it.
edit: now the result is $ dd if=/dev/zero of=/dev/sdb
dd: opening `/dev/sdb': Read-only file system
I have also tried re-formatting it on Windows, but it gets to the end of the format process and then says "Couldn't format the drive".
How can I remove this partition and get my whole USB drive back to normal again?
EDIT 1: Trying a simple mkfs doesn't work: $ sudo mkfs -t vfat /dev/sdb
mkfs.vfat 3.0.0 (28 Sep 2008)
mkfs.vfat: Will not try to make filesystem on full-disk device '/dev/sdb' (use -I if wanted)
I can't do mkfs on /dev/sdb1 because there is no such partition, as shown:$ ls /dev | grep sdb
sdb
EDIT 2: This is the information printed by dmesg when I plug the device in: $ dmesg
.
. (snip)
.
usb 2-1: New USB device found, idVendor=058f, idProduct=6387
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: Mass Storage
usb 2-1: Manufacturer: Generic
usb 2-1: SerialNumber: G0905000000000010885
usb-storage: device found at 4
usb-storage: waiting for device to settle before scanning
usb-storage: device scan complete
scsi 6:0:0:0: Direct-Access FLASH Drive AU_USB20 8.07 PQ: 0 ANSI: 2
sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
sd 6:0:0:0: [sdb] Write Protect is off
sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
sd 6:0:0:0: [sdb] Assuming drive cache: write through
sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
sd 6:0:0:0: [sdb] Write Protect is off
sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
sd 6:0:0:0: [sdb] Assuming drive cache: write through
sdb: unknown partition table
sd 6:0:0:0: [sdb] Attached SCSI removable disk
sd 6:0:0:0: Attached scsi generic sg2 type 0
ISO 9660 Extensions: Microsoft Joliet Level 3
ISO 9660 Extensions: RRIP_1991A
SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
CE: hpet increasing min_delta_ns to 15000 nsec
This shows that the device is formatted as ISO 9660 and that it is /dev/sdb.
EDIT 3: This is the message that I find at the bottom of dmesg after running cfdisk and writing a new partition table to the disk:
SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
sd 17:0:0:0: [sdb] Device not ready: Sense Key : Not Ready [current]
sd 17:0:0:0: [sdb] Device not ready: < ASC=0xff ASCQ=0xffASC=0xff < ASCQ=0xff
end_request: I/O error, dev sdb, sector 0
Buffer I/O error on device sdb, logical block 0
lost page write due to I/O error on sdb
Let's say I have a source directory on a remote system containing the files /foo/a and /foo/b.
Using rdiff-backup I make a backup like this:
rdiff-backup [email protected]::/foo backups
Now a and b are present in my backups directory. Then I delete file a from the remote system and sync again, so my local directory contains only file b.
My question is: how do I restore file a if the deletion and the sync happened on the same day?
Thanks..
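For context, rdiff-backup keeps one increment per backup session, timestamped to the second rather than per day, so a same-day deletion is still recoverable by restoring as of any time before the second sync. A sketch using the paths from the example above; the timestamps are placeholders:

```
# list the sessions recorded in the backup repository
rdiff-backup --list-increments backups
# restore the file as it existed 10 minutes ago ...
rdiff-backup -r 10m backups/a restored_a
# ... or as of an exact session time shown by --list-increments
rdiff-backup -r 2009-12-10T04:00:00 backups/a restored_a
```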
socat - exec:'bash -li',pty,stderr,ctty
bash: no job control in this shell
What options should I use to get fully fledged shell as I get with ssh/sshd?
I want to be able to connect the shell to everything socat can handle (SOCKS 5, UDP, OpenSSL), but also to have a nice shell which correctly interprets all keys, various Ctrl+C/Ctrl+Z, tab completion, up/down keys (with remote history).
Update: Found "setsid" socat option. It fixes "no job control". Now trying to fix Ctrl+D.
Update 2: socat file:`tty`,raw,echo=0 exec:'bash -li',pty,stderr,setsid,sigint,sane. Now it handles Ctrl+D/Ctrl+Z/Ctrl+C well, I can start Vim inside it, and remote history works.
Printing these characters in the "Canonical" format gives the output that I expect, while the default format throws me off.
$ echo " " |hexdump # Reversed?
0000000 0a20
0000002
$ echo -n " " |hexdump # Ok, fair enough.
0000000 0020
$ echo " " |hexdump -C # Canonical
00000000 20 0a | .|
00000002
With a different string, such as "123" the output is even more confusing:
$ echo "123" |hexdump
0000000 3231 0a33
0000004
The output here does not seem "reversed", but rather shuffled. Would anyone care to explain (briefly) what is going on here?
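What hexdump's default format shows is not the byte stream but 16-bit units in the host's byte order, so on a little-endian x86 machine the two bytes of each unit appear swapped (and "123\n" becomes the words 0x3231 0x0a33). You can see the difference by comparing a per-byte dump with a per-word dump using od:

```shell
printf '123\n' | od -An -tx1   # bytes in stream order: 31 32 33 0a
printf '123\n' | od -An -tx2   # 16-bit units in host order; little-endian gives 3231 0a33
```

hexdump -C (canonical) and od -tx1 both print bytes in stream order, which is why they match your expectation.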
On RHEL 5:
1. How do I establish a VPN connection? The details I was given to connect are an IP address and a username/password.
2. How do I install WiFi drivers on RHEL 5?
Thanks..
I simply want to open up MySQL so it is accessible from any server IP.
I have already commented out the bind-address line in /etc/mysql/my.cnf.
I have already set up the user account within MySQL.
I have no clue what's stopping me from connecting.
The more challenging this turns out to be, the more I realize how much of a security risk it is; I get that, I just want to be able to do it temporarily.
I think the iptables firewall is the last thing preventing me from achieving this, but sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT is seemingly doing nothing.
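Two hedged suggestions, since the rest of the setup isn't visible here. First, iptables -A appends the rule to the end of the INPUT chain, after any REJECT rule that may already match port 3306, so it may never be reached; -I inserts it at the top instead. Second, the MySQL account must be granted for a remote host pattern, not just localhost (user, password, and database below are placeholders):

```
# insert at the top of the chain instead of appending after a possible REJECT
sudo iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
# allow the account from any host ('%'), then reload privileges
mysql -u root -p -e "GRANT ALL ON mydb.* TO 'someuser'@'%' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"
```

Also remember that mysqld only rereads bind-address at startup, so restart it after editing the config.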
First of all, I admit I'm stupid and I didn't run proper backup of my data, but you know crap happens...
So, I've used dd and overwritten the first 2 GB of my 750 GB NTFS partition with a FAT32 partition. I've run PhotoRec and EasyRecovery, but all I can restore is the 2 GB FAT32 partition and the files on it. Is there a way to "roll back" to the NTFS partition and recover at least some part of the 750 GB of data?
Thanks.
When my system boots up it shows the following message.
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: RTNETLINK answers: Invalid argument
[ OK ]
Bringing up interface eth1: RTNETLINK answers: Invalid argument
[ OK ]
Bringing up interface eth2: RTNETLINK answers: Invalid argument
[ OK ]
Bringing up interface eth3: RTNETLINK answers: Invalid argument
[ OK ]
Why is this happening? Normally it does not give the message RTNETLINK answers: Invalid argument.
I did ifconfig and the output is
eth0 Link encap:Ethernet HWaddr 00:00:50:6D:56:B4
inet addr:120.0.10.137 Bcast:120.0.255.255 Mask:255.255.255.0
inet6 addr: fe80::200:50ff:fe6d:56b4/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:214 (214.0 b)
Base address:0xa000
eth1 Link encap:Ethernet HWaddr 00:00:50:6D:56:B5
inet addr:121.0.10.137 Bcast:121.0.255.255 Mask:255.255.255.0
inet6 addr: fe80::200:50ff:fe6d:56b5/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:214 (214.0 b)
Base address:0xc000
eth2 Link encap:Ethernet HWaddr 00:00:50:6D:56:B6
inet addr:128.0.10.137 Bcast:128.0.255.255 Mask:255.255.255.0
inet6 addr: fe80::200:50ff:fe6d:56b6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1006 (1006.0 b) TX bytes:396 (396.0 b)
Interrupt:16
eth3 Link encap:Ethernet HWaddr 00:00:50:6D:56:B7
inet addr:123.0.10.137 Bcast:123.0.255.255 Mask:255.255.255.0
inet6 addr: fe80::200:50ff:fe6d:56b7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:728 (728.0 b) TX bytes:396 (396.0 b)
Interrupt:17
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:980 (980.0 b) TX bytes:980 (980.0 b)
What could be the reason for the message, and how do I get things back to normal?
Thanks
I have two internal DNS servers set up, and all my servers have both of them in resolv.conf. Our main DNS server went down, and suddenly no server could see the others. I edited a few of the servers' resolv.conf files manually and commented out the first (down) DNS server, and those machines were instantly able to ping again. What did I do wrong? Doesn't the resolver automatically switch to the secondary DNS server when the first one times out?
# File managed by puppet
nameserver 192.168.146.100
nameserver 192.168.159.101
;nameserver 72.14.188.5
domain example.com
search example.com
I use puppet to install a current JDK and tomcat.
package {
[ "openjdk-6-jdk", "openjdk-6-doc", "openjdk-6-jre",
"tomcat6", "tomcat6-admin", "tomcat6-common", "tomcat6-docs",
"tomcat6-user" ]:
ensure => present,
}
Now I'd like to add
JAVA_HOME="/usr/lib/java"
export JAVA_HOME
to /etc/profile, just to get this out of the way. I haven't found a straightforward answer in the docs yet. Is there a recommended way to do this?
In general, how do I tell Puppet to place a file somewhere or to modify an existing file? I'm using Puppet in standalone mode on a single node, just to try it out and to keep a log of the server setup.
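For the JAVA_HOME case, one common approach is a plain file resource, which is also the general answer to "place this file there". A sketch (the profile.d convention is an assumption; it keeps /etc/profile itself untouched):

```puppet
# manage a profile.d snippet rather than editing /etc/profile in place
file { '/etc/profile.d/java_home.sh':
  ensure  => present,
  mode    => '644',
  content => "JAVA_HOME=\"/usr/lib/java\"\nexport JAVA_HOME\n",
}
```

Owning a whole file with a file resource (via content or source) is the cleanest pattern; for surgically editing an existing file, the usual fallbacks are Augeas or an exec with sed.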
I have a site like twitter.com on server one, and on server two I have a forum whose path is domain.com/forum.
On server one I want to implement wildcard DNS and host the main domain. But on server two I want to keep the forum separate. I can't give it the subdomain forum.domain.com, because all its links are already indexed by search engines and point back to domain.com/forum.
So I was wondering: how can I put the domain and wildcard DNS on server one and still serve domain.com/forum (as a sub-folder) from server two?
Any ideas?
Do you think .htaccess can do that job? If yes, then how?
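.htaccess alone can only redirect; what's described here is usually done with a reverse proxy on server one, so that requests for /forum are transparently fetched from server two while the URL stays domain.com/forum. A hedged sketch for Apache with mod_proxy; the backend hostname is a placeholder:

```apache
# in the domain.com vhost on server one (needs mod_proxy + mod_proxy_http)
ProxyPass        /forum http://server-two.internal/forum
ProxyPassReverse /forum http://server-two.internal/forum
```

This keeps the search-engine links to domain.com/forum working with no visible change to visitors.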
I am trying to setup per-user fastcgi scripts that will run each on a different port and with a different user. Here is example of my script:
#!/bin/bash
BIND=127.0.0.1:9001
USER=user
PHP_FCGI_CHILDREN=2
PHP_FCGI_MAX_REQUESTS=10000
etc...
However, if I add the user with /bin/false as their shell (which I want, since this is meant to be something like shared hosting and I don't want users to have shell access), the script runs under the numeric user '1001', '1002', which, as my Google searches showed, might be a security hole. My question is: is it possible to allow user(s) to execute shell scripts but prevent them from logging in via SSH?
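One common pattern, sketched here with hypothetical names: give the account a non-login shell such as /sbin/nologin (or /usr/sbin/nologin on Debian-style systems) so interactive logins are refused, and have the spawn script run the FastCGI wrapper with an explicit shell via su -s:

```shell
# the account cannot log in interactively...
useradd -s /sbin/nologin fcgiuser
# ...but root (e.g. your init/spawn script) can still run commands as it,
# overriding the login shell with -s:
su -s /bin/sh -c /path/to/php-fcgi-start.sh fcgiuser
```

That way each per-user FastCGI process runs under its own named account without granting SSH access.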
I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that?
(I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
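s3cmd is the usual suggestion for raw, scriptable bucket access; I believe it was packaged for Debian around that release, but verify against your repositories. The bucket name below is a placeholder:

```
apt-get install s3cmd
s3cmd --configure                # interactively stores access/secret keys in ~/.s3cfg
s3cmd put application.ear s3://my-bucket/application.ear
```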
I have a server racked and its redundant power supplies plugged in two APC Smart-UPS 3000 XLM. Each UPS is connected to two different mains power sources.
Two instances of apcupsd are running, each one connected to its own UPS. They can both detect when a UPS is on battery, and each UPS can then trigger a shutdown of the server.
The question is: how do I avoid shutting down if ONLY ONE UPS runs out of battery?
Note: the Smart-UPS 3000 XLM has a "Power Sync" function that lets it connect to its peer and detect its status. But when I pulled the plug on one of them, the shutdown order was sent anyway. I'm thinking about modifying the shutdown scripts to check with apcaccess whether the other UPS is down. Any experience with this would be appreciated!
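Along the lines of that apcaccess idea, the shutdown hook can be gated on the peer's status: query both apcupsd instances and only proceed if both report ONBATT. A hedged sketch of the decision logic; the NIS ports 3551/3552 for the two instances are assumptions:

```shell
#!/bin/sh
# Only shut down when BOTH UPSes are on battery.
# both_on_battery takes the STATUS values reported by the two apcupsd instances.
both_on_battery() {
    case "$1" in *ONBATT*) ;; *) return 1 ;; esac
    case "$2" in *ONBATT*) ;; *) return 1 ;; esac
    return 0
}

# In the real doshutdown hook these would come from the two daemons, e.g.:
# s1=$(apcaccess status localhost:3551 | awk -F': *' '/^STATUS/ {print $2}')
# s2=$(apcaccess status localhost:3552 | awk -F': *' '/^STATUS/ {print $2}')
s1="ONBATT"; s2="ONLINE"
if both_on_battery "$s1" "$s2"; then
    echo "both UPSes on battery - shutting down"
else
    echo "peer UPS still online - staying up"
fi
```

The catch to test for is the failure mode seen with Power Sync: if the peer's apcupsd is unreachable (not just ONLINE), you have to decide whether to treat that as "on battery" or not.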
Alias /media/ /home/matt/repos/hello/media
<Directory /home/matt/repos/hello/media>
Options -Indexes
Order deny,allow
Allow from all
</Directory>
WSGIScriptAlias / /home/matt/repos/hello/wsgi/django.wsgi
/media is my media directory. When I go to mydomain.com/media/, it says 403 Forbidden, and the rest of my site doesn't work because all static files are 404s. Why? The page loads, just not the media folder.
Edit: hello is my project folder.
I have tried chmod 777 on all the permissions of that folder.
How do I configure Squid to request only text/html from the parent cache? Right now I am using:
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
On the other hand, I get a lot of direct requests that do not use the parent proxy: some queries go FIRST_UP_PARENT and some DIRECT. How do I tell Squid to always use the parent for text/html?
BTW, it is a transparent proxy.
I have tried :
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
acl elhtml req_mime_type -i ^text/html$
acl elhtml req_mime_type -i text/html
cache_peer_access 127.0.0.1 allow elhtml
cache_peer_access 127.0.0.1 deny all
and it does not work.
Thanks in advance for the help.
I'm trying to get AWStats to parse the Postfix mail log, but it drops almost all entries with messages like:
Corrupted record (date 20091204042837
lower than 20091211065829-20000):
2009-12-04 04:28:37 root root
localhost 127.0.0.1 SMTP - 1 17480
Few more are dropped with an invalid LogFormat:
Corrupted record line 24 (record
format does not match LogFormat
parameter): 2009-11-16 04: 28:22 root
root localhost 127.0.0.1 SMTP - 14755
My conf has LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd", which I believe matches the log format (and besides, it is the log format I've seen everywhere for AWStats mail parsing). Moreover, it is the same entry format as all the other entries in the mail log.
Whatever is left is dropped too:
Dropped record (host localhost and
127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root
localhost 127.0.0.1 SMTP - 1 17152
I added SkipHosts="" to the .conf file but to no avail.
I feel like awstats really has some personal quarrel with me today.
Does anyone here write their own customized AutoYaST scripts for building SLES servers?
I'm not talking about generating them with yast2 autoyast.
If so, have you found a way to verify the syntax? xmllint is good as far as telling you that the XML syntax is valid, but without an up-to-date DTD it can't tell you anything more, and the shipped DTDs are out of date.
I've opened a ticket with Novell on this, but who knows when and what I'll hear back.
Hi,
I'm having issues installing PHP WebDAV on Fedora 8. After downloading it and running make install, I get the following errors:
[root@ip-18-192-114-35 dav]# make install
/bin/sh /tmp/dav/libtool --mode=compile gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -o dav.lo
gcc -I. -I/tmp/dav -DPHP_ATOM_INC -I/tmp/dav/include -I/tmp/dav/main -I/tmp/dav -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /tmp/dav/dav.c -fPIC -DPIC -o .libs/dav.o
/tmp/dav/dav.c:21:23: error: ne_socket.h: No such file or directory
/tmp/dav/dav.c:22:24: error: ne_session.h: No such file or directory
/tmp/dav/dav.c:23:22: error: ne_utils.h: No such file or directory
/tmp/dav/dav.c:24:21: error: ne_auth.h: No such file or directory
/tmp/dav/dav.c:25:22: error: ne_basic.h: No such file or directory
/tmp/dav/dav.c:26:20: error: ne_207.h: No such file or directory
/tmp/dav/dav.c:35: error: expected specifier-qualifier-list before 'ne_session'
/tmp/dav/dav.c: In function 'dav_destructor_dav_session':
/tmp/dav/dav.c:152: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:153: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:155: error: 'DavSession' has no member named 'base_uri_path'
/tmp/dav/dav.c:156: error: 'DavSession' has no member named 'user_name'
/tmp/dav/dav.c:157: error: 'DavSession' has no member named 'user_password'
/tmp/dav/dav.c:158: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c: In function 'cb_dav_auth':
/tmp/dav/dav.c:194: error: 'DavSession' has no member named 'user_name'
/tmp/dav/dav.c:194: error: 'NE_ABUFSIZ' undeclared (first use in this function)
/tmp/dav/dav.c:194: error: (Each undeclared identifier is reported only once
/tmp/dav/dav.c:194: error: for each function it appears in.)
/tmp/dav/dav.c:195: error: 'DavSession' has no member named 'user_password'
/tmp/dav/dav.c: In function 'zif_webdav_connect':
/tmp/dav/dav.c:212: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:212: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:213: error: 'ne_uri' undeclared (first use in this function)
/tmp/dav/dav.c:213: error: expected ';' before 'uri'
/tmp/dav/dav.c:215: error: 'uri' undeclared (first use in this function)
/tmp/dav/dav.c:259: error: 'DavSession' has no member named 'base_uri_path'
/tmp/dav/dav.c:260: error: 'DavSession' has no member named 'base_uri_path_len'
/tmp/dav/dav.c:262: error: 'DavSession' has no member named 'user_name'
/tmp/dav/dav.c:264: error: 'DavSession' has no member named 'user_name'
/tmp/dav/dav.c:267: error: 'DavSession' has no member named 'user_password'
/tmp/dav/dav.c:269: error: 'DavSession' has no member named 'user_password'
/tmp/dav/dav.c:271: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c: In function 'get_full_uri':
/tmp/dav/dav.c:304: error: 'DavSession' has no member named 'base_uri_path_len'
/tmp/dav/dav.c:307: error: 'DavSession' has no member named 'base_uri_path_len'
/tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path'
/tmp/dav/dav.c:313: error: 'DavSession' has no member named 'base_uri_path_len'
/tmp/dav/dav.c:314: error: 'DavSession' has no member named 'base_uri_path_len'
/tmp/dav/dav.c: In function 'zif_webdav_get':
/tmp/dav/dav.c:329: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:329: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:330: error: 'ne_request' undeclared (first use in this function)
/tmp/dav/dav.c:330: error: 'req' undeclared (first use in this function)
/tmp/dav/dav.c:348: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:354: error: 'ne_accept_2xx' undeclared (first use in this function)
/tmp/dav/dav.c:359: error: 'NE_OK' undeclared (first use in this function)
/tmp/dav/dav.c:359: error: invalid type argument of '->'
/tmp/dav/dav.c: In function 'zif_webdav_put':
/tmp/dav/dav.c:377: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:377: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:378: error: 'ne_request' undeclared (first use in this function)
/tmp/dav/dav.c:378: error: 'req' undeclared (first use in this function)
/tmp/dav/dav.c:396: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:405: error: 'NE_OK' undeclared (first use in this function)
/tmp/dav/dav.c:405: error: invalid type argument of '->'
/tmp/dav/dav.c: In function 'zif_webdav_delete':
/tmp/dav/dav.c:422: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:422: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:423: error: 'ne_request' undeclared (first use in this function)
/tmp/dav/dav.c:423: error: 'req' undeclared (first use in this function)
/tmp/dav/dav.c:441: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:448: error: 'NE_OK' undeclared (first use in this function)
/tmp/dav/dav.c:448: error: invalid type argument of '->'
/tmp/dav/dav.c: In function 'zif_webdav_mkcol':
/tmp/dav/dav.c:465: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:465: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:466: error: 'ne_request' undeclared (first use in this function)
/tmp/dav/dav.c:466: error: 'req' undeclared (first use in this function)
/tmp/dav/dav.c:484: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:491: error: 'NE_OK' undeclared (first use in this function)
/tmp/dav/dav.c:491: error: invalid type argument of '->'
/tmp/dav/dav.c: In function 'zif_webdav_copy':
/tmp/dav/dav.c:510: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:510: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:511: error: 'ne_request' undeclared (first use in this function)
/tmp/dav/dav.c:511: error: 'req' undeclared (first use in this function)
/tmp/dav/dav.c:539: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:550: error: 'NE_DEPTH_INFINITE' undeclared (first use in this function)
/tmp/dav/dav.c:550: error: 'NE_DEPTH_ZERO' undeclared (first use in this function)
/tmp/dav/dav.c:554: error: 'NE_OK' undeclared (first use in this function)
/tmp/dav/dav.c:554: error: invalid type argument of '->'
/tmp/dav/dav.c: In function 'zif_webdav_move':
/tmp/dav/dav.c:573: error: 'ne_session' undeclared (first use in this function)
/tmp/dav/dav.c:573: error: 'sess' undeclared (first use in this function)
/tmp/dav/dav.c:574: error: 'ne_request' undeclared (first use in this function)
/tmp/dav/dav.c:574: error: 'req' undeclared (first use in this function)
/tmp/dav/dav.c:598: error: 'DavSession' has no member named 'sess'
/tmp/dav/dav.c:611: error: 'NE_OK' undeclared (first use in this function)
/tmp/dav/dav.c:611: error: invalid type argument of '->'
make: *** [dav.lo] Error 1
Any help would be much appreciated. Thanks!
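For what it's worth, the missing ne_*.h headers all belong to the neon WebDAV client library, so these compile errors look like a missing neon development package; on Fedora that package is, I believe, neon-devel (name worth verifying):

```
yum install neon-devel
# then re-run the build
make clean && make install
```

The cascade of 'DavSession' has no member / 'ne_session' undeclared errors is just fallout from those six failed #includes.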
I currently have a VPS that is consuming a ton of outgoing bandwidth, and I am trying to drill down to where this may be coming from. Does anyone know a logical way to find out which pages on the site are consuming the most outgoing data? We have done a ton of front-end optimization on the site, and our Google Page Speed score is 85%, so I feel we have done a pretty good job of optimizing the site for speed.
Can someone lend some insight on how they have made similar optimizations?
Application / Server Stack
LEMP Running Varnish Cache / PHP5-FPM
WordPress running w3 Total Cache
Ubuntu 12.04 LTS
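One low-tech way to attribute outgoing bytes to pages, assuming combined-format access logs (nginx in a LEMP stack logs the response size as the 10th field), is to sum that field per URL. A sketch, demonstrated on two made-up log lines; point the awk at the real access.log (e.g. /var/log/nginx/access.log) instead of the printf:

```shell
# sum response bytes per URL, largest senders first; the printf just
# stands in for reading /var/log/nginx/access.log
printf '%s\n' \
  '1.2.3.4 - - [10/Oct/2012:13:55:36 +0000] "GET /big HTTP/1.1" 200 5000 "-" "UA"' \
  '1.2.3.4 - - [10/Oct/2012:13:55:37 +0000] "GET /small HTTP/1.1" 200 10 "-" "UA"' |
awk '{ bytes[$7] += $10 } END { for (u in bytes) print bytes[u], u }' | sort -rn
```

One caveat: with Varnish in front, most hits never reach the backend log, so run this against whichever layer actually serves the bytes (varnishncsa can emit the same combined format).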
I'm trying to get my music folder into something sensible. Right now, I have all my music stored in /home/foo, so I have all of the albums soft-linked into ~/music. I want the structure to be ~/music/<artist>/<album>. I've got all of the symlinks in ~/music right now, so I just need to get them into the proper structure. I'm trying to do this by delving into each symlinked album and getting the artist name with id3info. I can do this, but I can't seem to get it to work correctly.
find -L ~/music -name "*.mp3" -printf "%h\n" | sort -u | while IFS= read -r dir
do
    echo "$dir" # testing purposes
    # find its artist: take the first mp3 in the album, then cut up the
    # id3info output to get just the artist name
    file=$(find -L "$dir" -name "*.mp3" | head -n1)
    artist=$(id3info "$file" | grep TPE | sed "s|.*: \(.*\)|\1|" | head -n1)
    # move it to the correct artist folder
    #mv "$dir" "$artist"
done
Now, with my original for loop, it does find the correct folder, but every time there is a space in the directory name, word splitting breaks it apart as if it were a newline.
Here's a sample of what I'm trying to do
$ ls
DJ Exortius/
The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@
I'm trying to mv The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@ into the real directory DJ Exortius. DJ Exortius already exists, so it's just a matter of moving it into the correct directory that's based on the id3 tag of the mp3 inside.
Thanks!
PS: I've tried EasyTAG, but when I restructure the album it moves the files out of /home/foo, which is not what I want.
Each time the electricity goes down, my desktop (without a UPS) loses some recently written data:
Opera can lose settings, history, cache, or mail accounts, partially or altogether (thank heavens I was wise enough to use IMAP).
A whole file (completed and saved) in Geany turned up empty (and I hadn't committed it to Git).
Rhythmbox lost all podcast subscription data.
I'm afraid there are other losses I just haven't noticed.
What's the reason? The in-memory file cache? Or non-atomic file writes in XFS? I have Ubuntu 9.10 with XFS on both the / and /home partitions.
Is ext4 safer in such circumstances? I've seen that ext3 is faster; is it as safe as ext4?
Given that the apartment I rent is connected to a common bus and one safety switch shared by several apartments, and the neighbours, alone or together, overload it at least once a week, the lights go out often enough for this to be an issue.