Search Results

Search found 435 results on 18 pages for 'symbolic'.


  • mysqld service crashes on restart, after importing mysqldump #innodb

    - by ubunut
    I have two MySQL servers; let's call them server01 and server02. Both have the same configuration:

        mysqladmin Ver 8.42 Distrib 5.1.61, for redhat-linux-gnu on x86_64

        [client]
        default-character-set=utf8

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_allowed_packet = 16M
        default-character-set=utf8
        default-collation=utf8_unicode_ci
        character-set-server=utf8
        collation-server=utf8_unicode_ci
        default-storage-engine = InnoDB
        innodb_data_home_dir = /var/lib/mysql
        innodb_log_group_home_dir = /var/lib/mysql
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_additional_mem_pool_size = 2M
        innodb_log_file_size = 5M
        innodb_log_buffer_size = 8M
        innodb_lock_wait_timeout = 50
        innodb_flush_log_at_trx_commit = 1
        innodb_buffer_pool_size = 700M
        table_cache = 300
        thread_cache_size = 4
        query_cache_size = 200m
        query_cache_limit = 10m

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I make a mysqldump on server01 (most tables in these databases are InnoDB; some of the mysql.* tables are InnoDB too):

        mysqldump -uuser -ppassword --all-databases > testservers.sql

    Then I import testservers.sql on server02 (mysqld has been started with --skip-network):

        mysql -uuser < testservers.sql

    So far so good: I can log in to mysql and everything seems to be OK. But when I exit to the shell and execute service mysqld restart, the service fails to start. Stack trace in /var/log/mysqld.log:

        121022 14:53:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121022 14:53:19 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        121022 14:53:19 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead.
        12:53:19 UTC - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary or one of
        the libraries it was linked against is corrupt, improperly built, or misconfigured.
        This error can also be caused by malfunctioning hardware. We will try our best to
        scrape up some info that will hopefully help diagnose the problem, but since we have
        already crashed, something is definitely wrong and this may fail.
        key_buffer_size=8384512
        read_buffer_size=131072
        max_used_connections=0
        max_threads=151
        thread_count=0
        connection_count=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338324 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        Thread pointer: 0x267e630
        Attempting backtrace. You can use the following information to find out where mysqld
        died. If you see no messages after this, something went terribly wrong...
        stack_bottom = 7fff3efe0be0 thread_stack 0x40000
        /usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84bd89]
        /usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a0be3]
        /lib64/libpthread.so.0() [0x338d60f500]
        /usr/libexec/mysqld(ha_resolve_by_name(THD*, st_mysql_lex_string const*)+0x81) [0x6956e1]
        /usr/libexec/mysqld(open_table_def(THD*, st_table_share*, unsigned int)+0xe0a) [0x60e5ba]
        /usr/libexec/mysqld(get_table_share(THD*, TABLE_LIST*, char*, unsigned int, unsigned int, int*)+0x20b) [0x602b0b]
        /usr/libexec/mysqld() [0x603597]
        /usr/libexec/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x7a1) [0x6079a1]
        /usr/libexec/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5d0) [0x608570]
        /usr/libexec/mysqld(open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0x6a) [0x60877a]
        /usr/libexec/mysqld(plugin_init(int*, char**, int)+0x622) [0x715af2]
        /usr/libexec/mysqld() [0x5bd3b2]
        /usr/libexec/mysqld(main+0x1b3) [0x5bfc93]
        /lib64/libc.so.6(__libc_start_main+0xfd) [0x338d21ecdd]
        /usr/libexec/mysqld() [0x5087b9]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (0): is an invalid pointer
        Connection ID (thread ID): 0
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        121022 14:53:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    A typical mysqldump entry looks like this:

        DROP TABLE IF EXISTS `adodb_logsql`;
        /*!40101 SET @saved_cs_client = @@character_set_client */;
        /*!40101 SET character_set_client = utf8 */;
        CREATE TABLE `adodb_logsql` (
          `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT,
          `created` datetime NOT NULL,
          `sql0` varchar(250) NOT NULL DEFAULT '',
          `sql1` text,
          `params` text,
          `tracer` text,
          `timer` decimal(16,6) NOT NULL DEFAULT '0.000000',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='to save some logs from ADOdb';
        /*!40101 SET character_set_client = @saved_cs_client */;

    If I change all occurrences of "ENGINE=InnoDB" to "ENGINE=MyISAM" before import, the service has no problem restarting. I'm quite puzzled as to what's happening; maybe I'm just missing something obvious, and if so, by all means tell me. Any help would be greatly appreciated!
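
    One approach sometimes suggested for this failure pattern (the backtrace dies inside plugin_init while opening a table, which points at the restored mysql.* system tables rather than at the application data) is to leave the destination server's mysql.* schema alone and dump only the application databases, or to run mysql_upgrade after a full restore. A minimal sketch, with placeholder database names:

        # on server01: dump only the application schemas, not the mysql system schema
        mysqldump -uuser -p --databases app_db1 app_db2 > apps.sql

        # on server02: import, then repair/upgrade the system tables if a full dump was already loaded
        mysql -uuser -p < apps.sql
        mysql_upgrade -uroot -p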

    Read the article

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
    I'm running into a few issues with our new database server. It is an HP G8 with two Intel Xeon E5-2650 processors and 32GB of RAM. This server is dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having issues with this server staying alive: I notice high CPU usage during certain times of day (8% ~ 1500%+) and very low memory usage (7 ~ 15%) based on the 'top' command. When the CPU usage passes 1000%, that is when the app usually dies. I'm trying to see what I'm doing wrong with the config file; hopefully one of the experts can chime in and let me know what they think. See below for my my.cnf file:

        [mysqld]
        default-storage-engine=InnoDB
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        #user=mysql
        large-pages
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_connections=275
        tmp_table_size=1G
        key_buffer_size=384M
        key_buffer=384M
        thread_cache_size=1024
        long_query_time=5
        low_priority_updates=1
        max_heap_table_size=1G
        myisam_sort_buffer_size=8M
        concurrent_insert=2
        table_cache=1024
        sort_buffer_size=8M
        read_buffer_size=5M
        read_rnd_buffer_size=6M
        join_buffer_size=16M
        table_definition_cache=6k
        open_files_limit=8k
        slow_query_log
        #skip-name-resolve
        # Innodb Settings
        innodb_buffer_pool_size=18G
        innodb_thread_concurrency=0
        innodb_log_file_size=1G
        innodb_log_buffer_size=16M
        innodb_flush_log_at_trx_commit=2
        innodb_lock_wait_timeout=50
        innodb_file_per_table
        #innodb_buffer_pool_instances=4
        #eliminating double buffering
        innodb_flush_method = O_DIRECT
        flush_time=86400
        innodb_additional_mem_pool_size=40M
        #innodb_io_capacity = 5000
        #innodb_read_io_threads = 64
        #innodb_write_io_threads = 64
        # increase until threads_created doesnt grow anymore
        thread_cache=1024
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=256M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 0
        wait_timeout = 1800
        connect_timeout = 10
        interactive_timeout = 60

        [mysqldump]
        max_allowed_packet=32M

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        log-slow-queries=/var/log/mysql/slow-queries.log
        long_query_time = 1
        log-queries-not-using-indexes

    We connect to one database with 75 tables; the largest table has 1,150,000 entries and the second largest has 128,036 entries. I have also verified that our PHP queries are optimized as best as possible. Reference - MySQLTuner:

        >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.69-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 420M (Tables: 75)
        [!!] Total fragmented tables: 75
        -------- Security Recommendations -------------------------------------------
        [!!] User '[email protected]' has no password set.
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M)
        [--] Reads / Writes: 68% / 32%
        [--] Total buffers: 19.7G global + 35.2M per thread (275 max threads)
        [!!] Maximum possible memory usage: 29.1G (93% of installed RAM)
        [OK] Slow queries: 0% (472/8M)
        [OK] Highest usage of available connections: 66% (183/275)
        [OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K
        [OK] Key buffer hit rate: 100.0% (173 cached / 0 reads)
        [OK] Query cache efficiency: 96.2% (7M cached / 7M selects)
        [!!] Query cache prunes per day: 553614
        [OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts)
        [!!] Temporary tables created on disk: 49% (3K on disk / 7K total)
        [OK] Thread cache hit rate: 74% (183 created / 705 connections)
        [OK] Table cache hit rate: 97% (231 open / 238 opened)
        [OK] Open file limit used: 0% (17/8K)
        [OK] Table locks acquired immediately: 100% (432K immediate / 432K locks)
        [OK] InnoDB data size / buffer pool: 420.9M/18.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce your overall MySQL memory footprint for system stability
            Increasing the query_cache size over 128M may reduce performance
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
        Variables to adjust:
            *** MySQL's maximum memory usage is dangerously high ***
            *** Add RAM before increasing MySQL buffer variables ***
            query_cache_size (> 256M) [see warning above]

    Thanks in advance for your help!
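
    The tuner output already flags two plausible CPU sinks: a large (256M), heavily pruned query cache, whose single global lock can burn CPU under concurrent write/invalidate load, and roughly half of all temporary tables spilling to disk. A few hedged checks before changing anything (standard MySQL statements; the login options are placeholders):

        # query cache churn and pruning
        mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%';"
        # how many implicit temp tables are going to disk
        mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"
        # what the busy threads are actually doing during a spike
        mysql -uroot -p -e "SHOW FULL PROCESSLIST;"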

    Read the article

  • FontExplorer X + Dropbox

    - by ohnno
    I'm looking to sync my fonts between my MacBook and iMac using a combination of FontExplorer X and Dropbox. I've seen a few blog posts confirming that it is possible by putting a symbolic link on the database folder, but for some reason I can't find that folder (I'm thinking it may have been changed/moved in the newer releases of FEX?). The posts point to this path: [Application Support/Linotype/Font Explorer/fontexplorer x.fexdb], but Linotype no longer exists in Application Support. Any help with this would be greatly appreciated!
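
    For reference, a minimal sketch of the symlink approach those posts describe, assuming the database folder has been located first (the paths below mirror the old layout quoted above and may not match current FEX releases):

        # move the FEX database folder into Dropbox, then link it back into place
        mv "$HOME/Library/Application Support/Linotype" "$HOME/Dropbox/FEX-Linotype"
        ln -s "$HOME/Dropbox/FEX-Linotype" "$HOME/Library/Application Support/Linotype"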

    Read the article

  • Retrieving an RSA key from a running instance of Apache?

    - by Nathan Osman
    I created an RSA keypair for an SSL certificate and stored the private key in /etc/ssl/private/server.key. Unfortunately this was the only copy of the private key that I had. Then I accidentally overwrote the file on disk (yes, I know). Apache is still running and still serving SSL requests, leading me to believe that there may be hope in recovering the private key. (Perhaps there is a symbolic link somewhere in /proc or something?) This server is running Ubuntu 12.04 LTS.
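
    If the key now exists only in the running process's memory, one avenue sometimes attempted (no guarantee it works, and the process name and PID handling here are illustrative for Ubuntu's apache2) is to snapshot a worker's memory before anything restarts and then search the dump offline for RSA key material:

        # gcore ships with gdb; this writes /tmp/apache-core.<pid> without stopping the process
        sudo gcore -o /tmp/apache-core "$(pgrep -o apache2)"
        ls -lh /tmp/apache-core.*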

    Read the article

  • Using Komodo IDE as Text editor from the OS X terminal

    - by lexu
    According to this URL I should be able to start Komodo IDE from the command line when I want to edit a file. I set up the symbolic link using (on a single line):

        ln -sf "/Applications/Komodo IDE.app/Contents/MacOS/komodo" /Users/lexu/bin/komodo

    but when I type:

        afg-2:~ lexu$ komodo .bash_profile

    I get:

        dyld: Library not loaded: /usr/lib/libsqlite3.dylib
          Referenced from: /System/Library/Frameworks/Security.framework/Versions/A/Security
          Reason: Incompatible library version: Security requires version 9.0.0 or later, but libsqlite3.dylib provides version 1.0.0
        /Applications/Komodo IDE.app/Contents/MacOS/run-mozilla.sh: line 131:  4370 Trace/BPT trap "$prog" ${1+"$@"}

    plus an error dialog. My guess is I need to somehow let Komodo know it needs to use different libraries? Does someone have this working?
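
    One workaround sometimes used instead of symlinking the binary out of the .app bundle (which can change how its bundled libraries are resolved) is a tiny wrapper that hands the file to the application through LaunchServices; a sketch, not a confirmed fix for this particular dyld error:

        # replace the symlink with a wrapper script that uses open -a
        printf '#!/bin/sh\nexec open -a "Komodo IDE" "$@"\n' > ~/bin/komodo
        chmod +x ~/bin/komodo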

    Read the article

  • droid cam makefile understanding and error

    - by nerorevenge
    I tried installing DroidCam on my Fedora 19 (64-bit) machine. The link to the DroidCam application is here, and whenever I try to install it, the following Makefile is invoked:

        obj-m := v4l2loopback-dc.o

        all:
            make -C /lib/modules/`uname -r`/build M=`pwd`
        test:
            gcc test.c -o test
        clean:
            make -C /lib/modules/`uname -r`/build M=`pwd` clean
        insmod:
            sudo insmod v4l2loopback-dc.ko width=320 height=240
        rmmod:
            sudo rmmod v4l2loopback-dc.ko

    and here is the error:

        -- INSTALL: Webcam parameters: '320' and '240'
        -- INSTALL: Building v4l2loopback-dc.ko
        make -C /lib/modules/`uname -r`/build M=`pwd`
        make: *** /lib/modules/3.9.5-301.fc19.x86_64/build: No such file or directory.  Stop.
        make: *** [all] Error 2
        -- INSTALL: v4l2loopback-dc.ko not built.. Failure

    build happens to be a symbolic link. I was wondering what exactly the Makefile is trying to do and why it is failing.
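
    For what it's worth, the all target just builds the module out of tree against the running kernel: /lib/modules/$(uname -r)/build is normally a symlink to that kernel's headers/build tree, so "No such file or directory" usually means the matching headers are not installed (a common cause, not confirmed here). A sketch for Fedora, with package names to verify against your kernel:

        # install headers matching the running kernel, then re-check the build symlink
        sudo yum install -y "kernel-devel-$(uname -r)" "kernel-headers-$(uname -r)"
        ls -l "/lib/modules/$(uname -r)/build"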

    Read the article

  • Apache VirtualHost running very slow on OS X 10.7 (Lion)

    - by jwerre
    I've set up a few virtual hosts in Lion and it's running very slowly.

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName dev.local
            DocumentRoot "/Users/me/mysite"
            <Directory /Users/me/mysite>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Then in /etc/hosts I added:

        127.0.0.1 dev.local

    Everything works fine, but it's very slow: five or so seconds to reload a simple "Hello World" HTML page. Here's the strange part: if I make a symbolic link to the site in my ~/Sites folder (ln -s ~/mysite ~/Sites/mysite) and navigate to http://localhost/~me/mysite, it's nice and fast, the way it should be.
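
    One explanation often offered for multi-second delays with .local names on OS X (a hypothesis, not confirmed for this setup) is that the browser first attempts an IPv6/Bonjour-style lookup of dev.local and only falls back after a timeout; the symptom disappearing for http://localhost/~me/mysite fits, since localhost resolves immediately. A cheap test:

        # map the name for IPv6 as well, so lookups don't stall before falling back to 127.0.0.1
        echo "::1 dev.local" | sudo tee -a /etc/hosts
        # nudge the resolver (commonly suggested on 10.7)
        sudo killall -HUP mDNSResponder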

    Read the article

  • Python version priority in OSX/UNIX PATH environment variable

    - by mindthief
    Hi all, I want my system to use /usr/bin/python, but it's currently using /opt/local/bin/python, which points to /usr/bin/python2.6. I tried modifying the PATH variable in my .bashrc as PATH=~/bin:$PATH ...and then set a symbolic link in ~/bin to point to /usr/bin/python. i.e. ~/bin/python --> /usr/bin/python I figured this might prioritize this symlink over the /opt/local version if it came before the other one in the PATH variable, but when I opened a new shell I still found python pointing to /opt/local/bin. Any advice on a good way to get the system to use /usr/bin/python? Also, I usually use ipython as opposed to python directly. I'm assuming that if the system starts to use the correct version of python then ipython would also use that version? If not, how could I also get ipython to use the correct version? Thanks!
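
    A short sketch of the usual way to pin this down, assuming bash on OS X, where login shells read ~/.bash_profile rather than ~/.bashrc (one common reason an edit to .bashrc never takes effect in new Terminal windows):

        # put the override where login shells will see it
        echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bash_profile
        mkdir -p ~/bin && ln -sf /usr/bin/python ~/bin/python
        # then, in a brand-new terminal window:
        type -a python    # ~/bin/python should be listed before /opt/local/bin/python

    If ~/bin still loses, something sourced later (for example a MacPorts block in ~/.profile) is prepending /opt/local/bin after this line runs. As for ipython, it is typically tied to whichever Python it was installed under, so it may not follow PATH automatically; that part is worth checking separately with head -1 "$(which ipython)".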

    Read the article

  • restricting access only through domains on nginx on virtual hosts

    - by Mo J. Mughrabi
    I have finished setting up nginx for virtual hosting; this is what my config files look like:

        server {
            listen 80;
            server_name domain.com;

            access_log /home/domain.com/prod_webapp/logs/access.domain.com.log;
            error_log  /home/domain.com/prod_webapp/logs/error.domain.com.log;

            location /static {
                root /home/domain.com/prod_webapp/mocorner/ph/;
            }
            location / {
                try_files $uri @uwsgi;
            }
            location @uwsgi {
                include uwsgi_params;
                uwsgi_pass unix:/tmp/domain_uwsgi.sock;
            }
        }

    On the same machine I have domain1.com and domain2.com, and when I access each one I get its content, which is great. My problem is that when I try to access the server by IP address, I also get one of the virtual-hosted sites. I disabled the default site (removed the symbolic link from the sites-enabled folder), but that has not solved it. Any suggestions?
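
    This matches nginx's documented behaviour: when no server block matches the Host header (or there is no matching name at all, as with a bare IP), the request goes to the default server for that port, and with the stock default removed that simply becomes the first virtual host defined. A minimal sketch of an explicit catch-all (the file name and reload commands assume this Debian-style sites-available/sites-enabled layout):

        # /etc/nginx/sites-available/00-catchall
        server {
            listen 80 default_server;
            server_name _;
            return 444;    # close the connection for requests that don't match a real domain
        }

    then enable it and reload:

        sudo ln -s /etc/nginx/sites-available/00-catchall /etc/nginx/sites-enabled/00-catchall
        sudo nginx -t && sudo service nginx reload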

    Read the article

  • Problem after system update. Root permission denied, user lib permission denied.

    - by gregor
    After I updated openSUSE 11.1 with the update packages from October and November 2009, I couldn't use the ping command. For root it gives "Permission denied", and for a regular user I get "libresolv.so.2: cannot open shared object file: Permission denied". The other culprit besides the update could be the installation of google-chrome (.deb file converted to .rpm, plus some symbolic links for libs to make Chrome work). When the system rebooted, the X server also came up blank. Before the reboot it worked, as did Chrome, but the ping command didn't work even before the reboot. Any ideas? I ran some sort of disk check from a rescue CD; libresolv looks the same as the other libs, and root has uid=0 ...
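
    Both symptoms are at least consistent with clobbered file permissions (a hypothesis only; the paths below assume a 64-bit openSUSE layout): ping normally carries the setuid bit, and a shared library that is not world-readable produces exactly the "cannot open shared object file: Permission denied" message for unprivileged users. A quick check-and-repair sketch from a root shell or the rescue CD:

        ls -l /bin/ping /lib64/libresolv.so.2*
        # expected: ping is -rwsr-xr-x root root, the library is world-readable
        chmod 4755 /bin/ping
        chmod 755 /lib64/libresolv.so.2*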

    Read the article

  • Changing Win 8.1 Pro Store App Installation Folder

    - by ChiliYago
    I have a Dell Venue 8 Pro tablet with Windows 8.1. The C: drive has very limited capacity at only 24GB; with just the OS on C: I have only 8.6GB of free space. So I have added a 64GB SSD to the machine and want to install all my apps on it rather than on the C: drive. I know the apps get installed in the c:\Program Files\WindowsApps folder, so I ROBOCOPY'd all the files to d:\Program Files\WindowsApps and created a symbolic link using mklink. New apps now install correctly on d:\, but the existing apps fail to open. What is Microsoft's solution to making this happen seamlessly?

    Read the article

  • Format as NTFS without Journal

    - by palswim
    I have a flash drive that I'd like to format for use in Windows. I would like support for symbolic links, so I can't use FAT/FAT32/exFAT. I would prefer to use the ext4 filesystem and disable journaling, with the Ext2Fsd filesystem driver, but have (so far) found that I can't make soft links across filesystems that Windows will read, Ext2Fsd has an annoying bug about always mounting partitions as read-only and has problems resuming from sleep, and some programs have problems writing to the partition even after manually configuring Ext2Fsd to allow writes. So, I would like to use NTFS for the flash drive, but disable the journaling feature (causes extra writes), if possible. How can I do this?

    Read the article

  • tag structured Filesystems

    - by A.Rashad
    I hope this is the correct site; I lose my way between the four sister sites :) Let me ask the question this way: all file systems I have seen before are hierarchical, meaning a root directory with some branched directories, and so on until we reach the files residing in those directories. The exception is the AS/400 file structure, which has the concept of a Library that serves somewhat like a directory, but only one level deep. Why not have directory-less filesystems where files are placed in a single location, and the file identifiers are referenced by a database of tag/file relationships? That way there would be no need for symbolic links: one file could relate to multiple subjects instead of being contained by a single parent directory. I hope the idea is clear.
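
    A toy sketch of the tag/file mapping being described, purely for illustration (the schema and names are invented, e.g. fed to sqlite3 tags.db): each file is stored once under an opaque identifier, and directories are replaced by tag lookups.

        CREATE TABLE files (id INTEGER PRIMARY KEY, blob_name TEXT);
        CREATE TABLE tags  (file_id INTEGER, tag TEXT);
        INSERT INTO files VALUES (1, 'obj-000001');
        INSERT INTO tags  VALUES (1, 'reports');
        INSERT INTO tags  VALUES (1, '2010');
        -- "everything tagged reports", with no parent directory involved
        SELECT f.blob_name FROM files f JOIN tags t ON t.file_id = f.id WHERE t.tag = 'reports';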

    Read the article

  • Is there any way to make cherokee server portable?

    - by Tom
    I develop on different machines. I use MAMP, installed in my Dropbox folder with symbolic links created to the Applications folder. That way, if I work one day on my desktop and make changes to, say, a database schema, and the next day I work from my laptop, I won't have to do any DB migration; the same applies to all the Apache virtual hosts I have set up using MAMP. Everything is portable. I recently started using Cherokee server and I like it a lot. I would like to replace MAMP with Cherokee, but first I need to be able to make it portable. I don't want to have to configure multiple virtual hosts, settings, etc., on multiple machines. Is there any way I can set up Cherokee to be as portable as MAMP? What if I want to run Cherokee from a thumb drive?

    Read the article

  • Perforce Proxy Server: Caching selective files [closed]

    - by fbrereto
    I just set up a Perforce proxy server for work. I'm noticing the cache directory is filling up very quickly, with files I know I will never need. For example, there is a 'sandbox' directory in the depot where users keep personal branches and other work; a p4 sync causes the proxy cache to grab these users' sandboxes even though I'll never need them. I would create a symbolic link for the sandbox directory to /dev/null, but then I wouldn't be caching my own sandbox, which I am interested in. Is there any way to tell the Perforce proxy something to the effect of "if I haven't had to sync it, please don't cache it"?
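
    As far as I know, p4p only caches revisions as they are synced through it, so other users' syncs will keep landing in the cache regardless of what any one client needs; a low-tech substitute for a per-path filter (the cache path is a placeholder, use whatever P4PCACHE points at) is a periodic prune of revisions that have not been read recently:

        # drop cached revisions not accessed in the last 30 days (e.g. from cron)
        find /var/cache/p4proxy -type f -atime +30 -delete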

    Read the article

  • How to share files between cPanel accounts?

    - by Darren
    I am setting up a multi-site/multi-store Magento installation, and I want each site to have its own cPanel account so I can set up the SSL certificate and dedicated IP properly. I tried creating a Linux group called 'magento', changed the files I need to share to that group, and even added the users to that group; however, when I try to access the files through my scripts on those accounts, it doesn't acknowledge that the files exist. I first made a soft symbolic link, which didn't work, and then tried including them by their real location, but that didn't work either. Am I missing a step in controlling which users can access which files? As I said, I added the users to the magento group and changed the group of the files I need to share, but it's still not working. Thanks, Darren
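
    A sketch of the usual group-sharing checklist, with placeholder paths; note that every directory on the path down to the shared files needs group execute permission, that group membership only takes effect for processes started after the user was added, and that cPanel-style PHP handlers (suPHP, open_basedir) can still block access even when the filesystem permissions are correct:

        # hand the shared tree to the magento group and let new files inherit it
        chgrp -R magento /home/store1/shared
        chmod -R g+rwX /home/store1/shared
        find /home/store1/shared -type d -exec chmod g+s {} \;
        # parent directories must be traversable by the group too
        chmod g+x /home/store1
        # confirm the other account's user really is in the group
        id store2user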

    Read the article

  • Redmine VirtualHost config not working with Document Root

    - by David Kaczynski
    I am trying to have requests for https://redmine.example.com reach my Redmine instance, but I just get an "Index of /" page listing the contents of /var/www/redmine (which is a symbolic link to /usr/share/redmine/public). My VirtualHost config:

        <VirtualHost *:443>
            ServerName redmine.example.com
            DocumentRoot /var/www/redmine

            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>

    My /etc/apache2/sites-enabled/redmine contains:

        RailsBaseURI /redmine

    How do I get requests for https://redmine.example.com to correctly launch my Redmine instance?
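
    An "Index of /" listing means Apache is serving public/ as static files, i.e. no Rails application server is handling the request. A sketch assuming Phusion Passenger is the intended app server (the RailsBaseURI directive suggests it, though the post doesn't say): when Redmine is the root of its own virtual host, the RailsBaseURI /redmine sub-URI directive should be dropped and Passenger pointed at the public directory directly, roughly:

        # inside the *:443 VirtualHost, with the Passenger module installed and loaded
        DocumentRoot /usr/share/redmine/public
        PassengerEnabled on
        <Directory /usr/share/redmine/public>
            AllowOverride all
            Options -MultiViews
        </Directory>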

    Read the article

  • How do I unmount a tmpfs that is missing from /etc/mtab?

    - by vrinek
    I have the following line in /etc/fstab:

        none /home/hydra/tmp tmpfs user,noauto,size=1000M,uid=1001,gid=1001 0 0

    I can run mount ~/tmp as user hydra and it gets mounted OK. The only problem is that even though it gets added to /proc/mounts, it does not get added to /etc/mtab. When I try umount ~/tmp (again as hydra), it complains:

        umount: /home/hydra/tmp is not mounted (according to mtab)

    And when I try -f or -n, it complains that I am not root. Some more info on the system that shows this problem:

        - On sudo umount /home/hydra/tmp the fs gets unmounted (I think I needed to use -f too)
        - Debian version: testing
        - mount --version: mount from util-linux 2.19.1 (with libblkid and selinux support)
        - ls -l /etc/mtab: -rw-r--r-- 1 root root 921 Nov 14 09:08 /etc/mtab
        - cat /proc/mounts | grep rootfs: rootfs / rootfs rw 0 0
        - Neither /home, /home/hydra nor /home/hydra/tmp is a symbolic link
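
    Since the unprivileged umount path relies on the mount being recorded in /etc/mtab, one remedy commonly suggested when mtab drifts out of sync with the kernel (assuming nothing else on the box insists on writing /etc/mtab directly) is to make mtab a symlink to the kernel's own view:

        # as root: let /etc/mtab always mirror the kernel's mount table
        ln -sf /proc/self/mounts /etc/mtab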

    Read the article

  • Allow READ access to local folders in 2003SBS AD

    - by Dan M.
    I have an SBS2003 client with a mess of a domain that is in the process of being cleaned up. But for the life of me I cannot find a setting that will allow write access to the local hard disk for domain users with redirected profiles (to the server). This is needed for only one program, which will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive. So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried; every time I unchecked the RO property, it would reset again right after I hit OK. Thanks in advance! Dan

    Read the article

  • "sh: /usr/sbin/xenstored: not found" - But it's there?

    - by Matt H
    What would cause running the file /usr/sbin/xenstored to print:

        sh: /usr/sbin/xenstored: not found

    However, the file /usr/sbin/xenstored is there and is not a symbolic link. Actually, I should be running this as root; that prints a similarly odd message:

        sudo: unable to execute /usr/sbin/xenstored: No such file or directory

    By the way, xenstored is not a script, it's an ELF executable. My guess is that it's because I haven't gotten all the dependent libraries installed. However, I would expect it to say something like this:

        ./xenstored: error while loading shared libraries: libxenctrl.so.4.0: cannot open shared object file: No such file or directory

    which is true of running xenstored on a system that doesn't have all the required libraries. Why do I get "not found" rather than the much more useful "cannot open shared object file"?
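
    A frequent (though not the only) explanation for "not found" on a binary that clearly exists is that the ELF program interpreter recorded inside it is missing, for example a 32-bit or differently-linked xenstored on a host without the matching dynamic loader; the shell then reports the missing interpreter as if the program itself were absent. A quick way to check (the loader path shown is just an example):

        file /usr/sbin/xenstored
        readelf -l /usr/sbin/xenstored | grep -i 'program interpreter'
        # then confirm the reported interpreter actually exists, e.g.:
        ls -l /lib/ld-linux.so.2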

    Read the article

  • Allow WRITE access to local folders machine in 2003SBS AD

    - by Dan M.
    I have an SBS2003 client with a mess of a domain that is in the process of being cleaned up. But for the life of me I cannot find a setting that will allow write access to the local hard disk for domain users with redirected profiles (to the server). This is needed for only one program, which will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive. So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried; every time I un-checked the RO property, it would reset again right after I hit OK. Thanks in advance! Dan

    Read the article

  • mysql single database relocation

    - by asdmin
    I would like to know if it's possible to operate different databases from different filesystem locations. Background: we are a hosting service that provides MySQL, web, and SMTP to its customers, but all our services (sql, smtp, http) are located in a different place. We are going to assign a single logical volume to a customer, which will accommodate the customer's mail, web pages and (hopefully) SQL database. Web pages and mail are already covered, but I am not able to find a configuration setting that would let me specify the location of a database (the directory where MySQL stores that DB). Let me highlight: the target here is to relocate different databases to different locations in the filesystem, not to move them from a single place to another (single) place. Also, please do not bother answering with soft and hard symbolic links. ;) Thanks
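
    MySQL 5.x has no per-database datadir setting, and since symlinks are explicitly off the table, one workaround sometimes used is a bind mount of the customer's volume over that database's directory inside the datadir. A sketch with placeholder paths and database name (mysqld must be stopped while the files are moved, and InnoDB tables living in a shared ibdata file will not move with the directory unless innodb_file_per_table is in use):

        service mysql stop
        mv /var/lib/mysql/customer1 /vol/customer1/mysql
        mkdir /var/lib/mysql/customer1
        mount --bind /vol/customer1/mysql /var/lib/mysql/customer1
        chown -R mysql:mysql /var/lib/mysql/customer1
        echo '/vol/customer1/mysql /var/lib/mysql/customer1 none bind 0 0' >> /etc/fstab
        service mysql start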

    Read the article

  • Sync Banshee library data.

    - by Dom
    I use Banshee to organise my music; I particularly like its scoring system, and I have smart playlists based on it. However, I have two versions of my music library, one on each of my computers. As one of the computers is small, I keep only a favourite set of songs on it rather than my whole collection. The computers are not on a local network, but I do use Ubuntu One for file sharing between them. Is there any way I can synchronise song data (play count, score, skip count ...) and playlist data (including smart playlists that include songs based on this data) between the two computers? This would only be relevant, of course, for the songs that exist on both computers; the songs that exist on only one would need to be ignored. I did consider putting the library data file (I think it is .xml but I'm not sure) into the shared folder and creating a symbolic link to it, but then I wouldn't be able to have a different set of songs on each computer. Thank you.

    Read the article

  • Does CPIO produce platform dependant archives?

    - by TiCL
    I made a cpio archive with the following command on Solaris 11 (SPARC):

        find . | cpio -ov > /tmp/myarchive.cpio

    I copied it to an Intel-based Solaris 11 machine and tried to extract it with:

        cpio -icvdu < myarchive.cpio

    It gives me the following error:

        cpio: Not a cpio file, bad header.
        1 errors

    The MD5 hash matches, and I can extract the archive on another SPARC machine. My question: does cpio produce platform-dependent output? Is there any way to convert it? I cannot use tar at the moment, because the directory I am archiving has long symbolic links that are skipped by the tar command.
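
    The likely culprit (hedged, but it matches the symptom) is the header format: without -c, Solaris cpio writes its binary header, which is byte-order sensitive and therefore unreadable when moving between SPARC (big-endian) and x86 (little-endian); the -c flag selects the portable ASCII header instead. A sketch of recreating and reading the archive portably:

        # on the SPARC box: ASCII (portable) headers
        find . | cpio -ocv > /tmp/myarchive.cpio
        # on the x86 box:
        cpio -icvdu < /tmp/myarchive.cpio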

    Read the article

  • How to execute a script while booting Linux from flashdrive?

    - by Usman Ajmal
    I know I can make my script run at boot time in runlevel 2 by putting it in /etc/init.d/ and creating a symbolic link to it in /etc/rc2.d, but that's when Linux is on the hard drive. I want to run my script from a flash drive, so that when a user plugs in the flash drive and powers on the machine, it boots the OS on the flash drive and eventually executes my script. How can I achieve such functionality? I have tried burning an OS to a flash drive, but I have never succeeded in booting an OS from one.
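
    Once a full OS install (not just a live installer image) is on the flash drive, the same init.d/rc2.d mechanism described above applies; it just has to be done inside the flash drive's filesystem instead of the hard disk's. A sketch, assuming the USB install's root partition is mounted at /mnt/usb:

        sudo cp myscript.sh /mnt/usb/etc/init.d/myscript
        sudo chmod +x /mnt/usb/etc/init.d/myscript
        # run late in runlevel 2 of the flash-drive OS
        sudo ln -s ../init.d/myscript /mnt/usb/etc/rc2.d/S99myscript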

    Read the article
