Search Results

Search found 34274 results on 1371 pages for 'mysql table'.


  • Recommended Free Web Hosting Providers?

    - by Noah Goodrich
    I have been tasked with finding a low-budget (translate: free) web hosting service for a Facebook group, and would like feedback from the professional community on their experiences with free hosting providers in general, as well as any specific recommendations or evaluations of free (or very inexpensive) providers. Our specific needs for this project include:
      - PHP 5 support
      - MySQL 5.0 support
      - Apache 2
      - the ability to use mod_rewrite in .htaccess files

    Read the article

  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. Testdisk reports the following:

        Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
             Partition             Start        End    Size in sectors
        >P MS Data               435868     456606      20739 [NO NAME]
         P MS Data             19232600   19235479       2880 [NO NAME]
         D MS Data             41945087   83890143   41945057
         D MS Data             57151486  168579069  111427584
         D MS Data             67637246  141037565   73400320
         D MS Data            151523326  193466365   41943040
         D MS Data            170617328  170618223        896
         D MS Data            170631168  170634047       2880
         D MS Data            171338232  171344405       6174 [Boot]
         D MS Data            172008235  172231918     223684 [NO NAME]
         P MS Data            193466368  214437887   20971520
         D MS Data            217321375  225321678    8000304 [root]
         D MS Data            224923646  308809725   83886080 [media]
         D MS Data            308809728  420237311  111427584
         D MS Data            418910206  481824765   62914560 [vmimages]

    My partition table had 3 primary partitions: 1. WinXP Home, 2. /boot, 3. LVM. Inside the LVM I had 9 or 10 LVM partitions; one of them was my home (encrypted with LUKS). Testdisk can't recover my partition table or any other partition, and the partitions marked P don't contain any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract, e.g., the [root] LVM partition from the testdisk report above? I am afraid that my disk was also corrupted.
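
    Since testdisk lists a start sector and a size in sectors for each candidate, dd can carve any one of them out of the image. A minimal sketch for the [root] entry above (start 217321375, 8000304 sectors of 512 bytes, straight from the report):

        # carve the [root] LVM candidate out of the disk image
        dd if=laptop.img of=root_lvm.img bs=512 skip=217321375 count=8000304

    The result can then be probed with file, losetup, or the LVM/LUKS tools without touching the original disk again.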

    Read the article

  • Create a partition table on a hardware RAID1 drive with [c]fdisk

    - by Lev Levitsky
    My question is: is there a reason for this not to work? Details: I have two 500 GB drives and my motherboard has RAID support, so I created a RAID1 array and booted from a Linux live medium. I then listed the disks and, apart from the obvious /dev/sda, /dev/sdb, etc., there was /dev/md126 which, I figured, was the mirrored "virtual" drive. Its size was 475 GB; I had seen that the size of the array would be smaller than 500 GB when I was creating it, so no surprise there. I ran cfdisk /dev/md126, created the necessary partitions and chose write. It's been about half an hour now, I think, and it doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it's "blocked for more than 120 seconds". Doing fdisk -l /dev/md126 in another terminal I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after a reboot, though. I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.

    EDIT: I tried fdisk on /dev/sda; then there were no messages about sector boundaries. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any). But at some point ("writing superblock and filesystem accounting information") mkfs is also blocked. Using it on sda1 results in a "partition is used by the system" error. What can be the problem?

    EDIT 2: I booted a freshly updated system from a pendrive and was able to create the partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with the hardware support? My MB is an Asus P9X79.
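
    One thing worth ruling out in a case like this (a hedged suggestion, not a diagnosis): whether the fakeraid array was still resynchronizing or was assembled in a degraded state, since writes to an md device can stall badly while that is happening:

        cat /proc/mdstat              # array state and any resync progress bar
        mdadm --detail /dev/md126     # member disks, sync state, failed devices
        dmesg | grep -i raid          # kernel messages about the array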

    Read the article

  • EBS with RAID0 (striping) and restoring snapshots

    - by grourk
    We have a MySQL database on EC2 and are looking at the disk IO performance there. Currently we have a single EBS volume with XFS and take snapshots for backup. It seems that a lot of people have seen significant performance gains by striping across multiple EBS volumes with software RAID. If this is done, how does one take snapshots and ensure the consistency of the file system? It seems to me that restoring the file system from multiple snapshots could be tricky.
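
    The usual pattern is to quiesce the filesystem so that every volume in the stripe set is snapshotted from the same instant, then thaw. A sketch assuming XFS and the classic EC2 API tools; the volume IDs are placeholders:

        # block writes at the filesystem layer; XFS supports this natively
        xfs_freeze -f /var/lib/mysql

        # snapshot every EBS volume in the RAID0 set while the FS is frozen
        for vol in vol-11111111 vol-22222222; do
            ec2-create-snapshot "$vol" -d "mysql-raid0 $(date +%F)" &
        done
        wait

        xfs_freeze -u /var/lib/mysql

    Restoring means creating a volume from each snapshot and reassembling the array in the same device order; because all the snapshots came from one freeze window, the striped filesystem is mutually consistent.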

    Read the article

  • Internal server error, GoDaddy problem?

    - by user303832
    I added a subdomain, created a new folder, then put some files in the folder. When I try to access the new subdomain I get this message: "500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed." Can someone help me with this, please? It is Linux hosting with a MySQL database.
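
    On shared Linux hosting, a 500 on a freshly created folder is very often file permissions or a .htaccess directive the server rejects. A hedged first check, assuming SSH access (paths vary by host):

        ls -la ~/public_html/sub         # directories generally need 755, files 644
        cat ~/public_html/sub/.htaccess  # a forbidden directive here causes a 500
        tail -n 50 ~/logs/error_log      # the real cause is usually spelled out here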

    Read the article

  • Backing up data from server to laptop?

    - by Patrick
    I need a tool to automatically back up my Drupal installations from my server to my laptop. In other words, I need to copy one folder (all my Drupal sites live inside it) plus all the databases. So I was wondering if I just need to write a script on my laptop that:
      - connects to the server every week
      - copies the folder along with all the MySQL databases
      - informs me by email whether the backup was successful
    Do you know if I can find a tutorial for this, or download such a script? Thanks
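
    A minimal sketch of such a script, assuming SSH-key access to the server; the host, paths, and credentials are placeholders:

        #!/bin/sh
        # weekly pull-backup of all Drupal sites plus all MySQL databases
        SERVER=user@example.com
        DEST=$HOME/backups/$(date +%Y-%m-%d)
        mkdir -p "$DEST"

        # 1. copy the folder that holds every Drupal installation
        rsync -az "$SERVER:/var/www/drupal/" "$DEST/drupal/" || STATUS=failed

        # 2. dump all databases over the same SSH connection
        ssh "$SERVER" "mysqldump --all-databases -u backup -pSECRET | gzip" \
            > "$DEST/all-databases.sql.gz" || STATUS=failed

        # 3. report the outcome (requires working local mail)
        echo "Backup ${STATUS:-succeeded}: $DEST" |
            mail -s "Weekly Drupal backup" me@example.com

    Run it weekly from the laptop's crontab, e.g. 0 3 * * 0 /home/me/bin/drupal-backup.sh.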

    Read the article

  • Set up secure shared hosting (Apache, PHP, MySQL)

    - by Apaz
    So I'm setting up shared hosting with Apache, PHP, and MySQL, and the biggest question mark is what to do with PHP, since there are a million options out there for configuring it securely. The plan is:
      - chroot for MySQL (built-in support for chroot)
      - chroot for Apache (mod_security)
      - each user executing their PHP scripts as their own user (see below)
      - set open_basedir
      - disable all "evil" PHP functions (allow_url_fopen, system, exec, and so on)
    I've looked at suexec and suphp but they seem very slow:
    http://blog.stuartherbert.com/php/2007/12/18/using-suexec-to-secure-a-shared-server/
    http://blog.stuartherbert.com/php/2008/01/18/using-suphp-to-secure-a-shared-server/
    So I've looked some more and found some other solutions:
      - apache2-mpm-itk + mod_php(?)
      - mod_fcgid + php-fpm
      - mod_fastcgi + php-fpm
    I've tried a simple setup with mod_fastcgi + php-fpm and it seems to work; scripts run as the correct user and so on, but the protection against directory traversal is still open_basedir(?). One solution for that could be php-fpm's chroot option, but that causes a lot of other issues:
      - the domain name resolver does not work
      - sending mail does not work
    Tips?
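
    For the mod_fastcgi + php-fpm route, most of the per-user isolation lives in the pool definition. A sketch of one pool; every name and value here is an assumption to adapt:

        ; /etc/php-fpm.d/alice.conf -- one pool per shared-hosting user
        [alice]
        user = alice
        group = alice
        listen = /var/run/php-fpm-alice.sock
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        php_admin_value[open_basedir] = /home/alice/www:/tmp
        php_admin_value[disable_functions] = system,exec,shell_exec,passthru,proc_open,popen

    On the chroot issues mentioned above: a chrooted pool typically breaks DNS and mail because /etc/resolv.conf, /etc/hosts, and a sendmail binary don't exist inside the chroot; copying or bind-mounting them in is the usual workaround.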

    Read the article

  • Installing Apache Lucene for LAMP server

    - by Pawan
    I have Ubuntu running as a LAMP (Linux, Apache, MySQL and PHP) server. To provide better search capability, one of my friends recommended installing "Apache Lucene". While reading about it I learned that "Apache Lucene" requires Tomcat and Java to run. Please let me know whether it is feasible to use it, or whether there are better alternatives for the LAMP stack. I am looking for a proven solution. Thanks :)

    Read the article

  • Accidentally dd'ed an image to the wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and gave dd the wrong output device when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...
      - The external drive is only used for data storage; the system is entirely intact
      - The drive had a single NTFS partition filling the entire device (2TB WD Elements)
      - The drive originally had an MBR partition table; it now shows as having a GPT, presumably from the FreeNAS image
      - The drive was mounted at the time, with maybe a couple kB of data written/read after running dd
      - The drive is just a few months old and healthy (regular SMART / fs checks)
      - I have not rebooted the OS (CrunchBang)
      - /proc/partitions still holds the correct information (and has been stored)
      - I have dd's output (records in / out / bytes)
      - testdisk did not find any partitions on a quick or deep search
      - I am running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet); the vast majority of the disk's content (> 80%) is unnecessary media files
    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, does it have a chance of success)? Or is there anything else I should try first? Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor    #blocks  name
        ** Snipped **
           8    32  1953512448  sdc
           8    33  1953511424  sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdc doesn't contain a valid partition table
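
    The saved sector data is enough to rebuild the MBR entry once recovery is finished. A hedged sketch; run it only after photorec completes, and triple-check the device letter:

        # clear the GPT signatures the FreeNAS image left behind (sgdisk is in the gdisk package)
        sgdisk --zap-all /dev/sdc

        # recreate the original single partition from the /sys/block numbers:
        # start sector 2048, 3907022848 sectors, type 7 (NTFS)
        echo '2048,3907022848,7' | sfdisk -uS /dev/sdc

    One caveat: the image was ~100 MB, so sectors 0 through 194567 were overwritten, which includes the primary NTFS boot sector at sector 2048. NTFS keeps a backup boot sector at the very end of the partition, though, and testdisk can usually copy it back over the primary once the table points at the right place.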

    Read the article

  • Cumulative average using data from multiple rows in an Excel table

    - by Aaron E
    I am trying to add a cumulative average column to a table I'm making in Excel. I use the totals row for the ending cumulative average, but I would also like a column that gives a cumulative average for each row up to that point. So, if I have 3 rows, I want each row to have a column giving the average up to that row, and then the ending cumulative average in the totals row. Right now I can't figure this out, because the formula would have to reference rows above and below the current row, and I'm unsure how to go about that in a table rather than plain cells. If it were just cells, I would know how to write the formula and copy it down each row. But since the formula I need has to cope with new rows being added to the table, I keep thinking it should be something like (Completion rate row 1 / n), where n is the number of rows up to that point (so n = 1 for row 1), then ((Completion rate row 1 + Completion rate row 2) / n) with n = 2 for row 2, and so on for each new row added. Please advise.
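
    Structured references can express "first data row through the current row", so the range grows with the table. A sketch, assuming the column is named Completion Rate (adjust to the real column name):

        =AVERAGE(INDEX([Completion Rate],1):[@[Completion Rate]])

    INDEX([Completion Rate],1) pins the start of the range at the column's first data row, [@[Completion Rate]] is the current row's cell, and as a calculated column the formula fills into newly added rows automatically.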

    Read the article

  • Upgrading PHP 5.1 to 5.3 on Linux Server

    - by nicorellius
    I am trying to find the best way to upgrade from PHP 5.1 to 5.3. The CRM software I am running on this server requires this upgrade, or else I probably wouldn't even perform it, because it seems like it's going to be trickier than I had hoped. Being still new to the programming world, these routine upgrades are still worrisome to me. I am running Apache 2.2.6 (Fedora), PHP 5.1.6 and MySQL 5.0.27 on this server.
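
    Whichever package route the upgrade takes, it helps to snapshot the current state first so regressions are easy to spot. A small sketch:

        # record version, modules, and config before touching anything
        php -v  >  php-before.txt
        php -m  >> php-before.txt
        cp /etc/php.ini /etc/php.ini.pre-5.3

        # after upgrading, diff the environment and watch the CRM's logs
        php -v  >  php-after.txt
        php -m  >> php-after.txt
        diff php-before.txt php-after.txt

    Worth knowing in advance: 5.3 deprecates the ereg* functions and changes some error-reporting defaults, which is exactly the kind of thing an older CRM tends to trip over.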

    Read the article

  • Error message: sudo: mysql_secure_installation: command not found

    - by Craig
    I'm trying to learn PHP and I'm just in the process of installing Apache, MySQL and PHP. I'm on a Mac, OS X 10.7.2, with a 2.26 GHz Intel Core 2 Duo. I downloaded: mysql-5.5.18-osx10.6-x86_64.dmg. I'm now at the point of setting the root password, and when I type sudo mysql_secure_installation it asks me to enter my password, then it says: sudo: mysql_secure_installation: command not found. Any help would be appreciated. Thanks
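
    The .dmg package installs everything under /usr/local/mysql, which is not on the shell's default PATH, hence sudo's "command not found". The usual fix is to call the script by its full path, or extend PATH:

        sudo /usr/local/mysql/bin/mysql_secure_installation

        # or permanently, for future shells:
        echo 'export PATH="/usr/local/mysql/bin:$PATH"' >> ~/.bash_profile
        source ~/.bash_profile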

    Read the article

  • Determining which database instance makes the biggest IO

    - by user2008937
    Assume that I have a dedicated server on which I am running multiple instances of MySQL and PostgreSQL servers. Without iotop, how do I determine which instance is doing the biggest IO (and so driving up IOWAIT) at a particular moment? /proc/<pid>/io only shows counters accumulated over time. When lots of people are hitting the databases, I can clearly see which instance is causing the load from its high CPU usage, but I have had a situation where CPU usage was normal while very high iowait put a huge load on the server, and I had trouble finding the process responsible for the outstanding IO.
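
    The /proc/<pid>/io counters are cumulative, but sampling them twice and differencing turns them into a per-interval rate; sysstat's pidstat does exactly that. A sketch (pid 1234 is a placeholder):

        # per-process IO rates, refreshed every 5 seconds, for all mysqld/postgres pids
        pidstat -d 5 -p "$(pgrep -d, -x mysqld),$(pgrep -d, -x postgres)"

        # raw equivalent for one pid: difference write_bytes across a 5-second window
        awk '/write_bytes/ {print $2}' /proc/1234/io; sleep 5
        awk '/write_bytes/ {print $2}' /proc/1234/io

    Whichever instance shows the largest delta during the iowait spike is the one to look at.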

    Read the article

  • Innotop and Monit to kill thread using too much resources

    - by pocesar
    Instead of restarting the whole MySQL process, sometimes I just want to kill the offending thread rather than taking everything down. Usually the spike in CPU happens when a bot is crawling the first pages of my site's pagination (over 70,000 paginated results, 45 items per page). Is there a way I could do this automatically using monit and innotop? I couldn't find relevant information on Google, which is why I'm asking here. If these two tools aren't up to the task, which ones should I use?
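
    For the automated part, MySQL's own PROCESSLIST can drive the killing; monit (or cron) can then run a script like this sketch, which kills SELECTs older than 60 seconds (thresholds and credentials are placeholders; test it by hand first):

        #!/bin/sh
        # kill long-running read queries instead of restarting mysqld
        mysql -N -e "SELECT id FROM information_schema.processlist
                     WHERE command = 'Query' AND time > 60
                       AND info LIKE 'SELECT%';" |
        while read id; do
            mysql -e "KILL $id;"
        done

    Percona Toolkit's pt-kill implements the same idea with more safety checks, if pulling in another tool is acceptable.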

    Read the article

  • Migrating users and mailboxes from postfix / Maildir to Postfix with Mysql backend [closed]

    - by Chrispy
    So I've got 60 or so users on a hand-rolled Postfix installation on OpenBSD, and I'd like to move their mailboxes to our new mail server running iRedMail (Postfix with a vmail/MySQL back end). Does anyone know of a good way to do this? Preferably a script I can run to keep syncing the users' mailboxes as the MX records get updated. I presume one way (though I don't have all their passwords!) would be a command-line IMAP client that simulates each user copying their own mail, but I'm sure there must be a shell or PHP script to migrate users? Anyone got any bright ideas? Chris.
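
    For the IMAP-level route, imapsync is the tool usually reached for; it copies only messages missing on the destination, so re-running it keeps mailboxes in sync while MX records propagate. A one-user sketch (hosts, users and passwords are placeholders):

        imapsync --host1 old.example.com --user1 alice --password1 'secret1' \
                 --host2 new.example.com --user2 alice@example.com --password2 'secret2'

    Looping it over a user list turns it into the migration script; on servers that support admin/proxy authentication it can also log in without knowing each user's password.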

    Read the article

  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that each day is refreshed with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple, straightforward way to do this. Both databases run on the same server, so apparently that rules out replication? For clarification, here is what I would like to happen:
    1. The test database is created with production data.
    2. I create some test records that I want to keep on the test server (basically so I can have example records that I can play with).
    3. The next day, the database is completely refreshed, but the records I created that day are retained. Records that were untouched that day are replaced with records from the production database.
    The complication is that if a record in the production database is deleted, I want it to be deleted in the test database too, so I do want to get rid of records in the test database that no longer exist in the production database, unless those records were created within the test database. Seems like the only way to do this would be to have some sort of table storing metadata about the records being created? So for example, something like this:

        CREATE TABLE MetaDataRecords (
            id integer not null primary key auto_increment,
            tablename varchar(100),
            action char(1),
            pk varchar(100)
        );

        DELETE FROM testdb.users
        WHERE NOT EXISTS (SELECT * FROM proddb.users
                          WHERE proddb.users.id = testdb.users.id)
          AND NOT EXISTS (SELECT * FROM testdb.MetaDataRecords
                          WHERE testdb.MetaDataRecords.pk = testdb.users.pk
                            AND testdb.MetaDataRecords.action = 'C'
                            AND testdb.MetaDataRecords.tablename = 'users');

    Read the article

  • Fixing damaged partition table

    - by dr4cul4
    This is a continuation of Recover Extended Partition, but this time the problem is the partition table itself. I managed to restore the partition that I needed and backed up the files that were crucial to me (at least those I had space to store somewhere). OK, now to the problem. My partition table is corrupted. Booting RIP Linux I can mount the recovered partition in TrueCrypt (and the other recovered ones), but that's basically it. When I launch GParted I have an unallocated drive. GParted device information:

        Model: ATA ST2000DL003-9VT1
        Size: 1.82TiB
        Path: /dev/sda
        Partition table: unrecognized
        Heads: 255
        Sectors/track: 63
        Cylinders: 243201
        Total Sectors: 3907029168
        Sector size: 512

    When I check information on the unallocated space I get:

        File system: unallocated
        Size: 1.82TiB
        First sector: 0
        Last sector: 3907029167
        Total sectors: 3907029168
        Warning: Can't have a partition outside the disk!

    Now the output of testdisk (Analyze):

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org

        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
             Partition                 Start        End    Size in sectors
        > 1 P Linux                13132 242 39  16353 233  8    51744768
          2 E extended LBA         16807 223  1 243201 254 63  3637021626
        No partition is bootable
          5 L Linux                16807 223 57  20430  39 25    58191872
          X extended               20430  70  1 243201  78 13  3578816632
        Invalid NTFS or EXFAT boot
          6 L HPFS - NTFS          20430  71 58 243201  78 13  3578816512
          6 LNext

    Now fdisk:

        # fdisk -l /dev/sda

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00039cd0

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1       210980864   262725631    25872384   83  Linux
        /dev/sda2       270018504  3907040129  1818510813    f  W95 Ext'd (LBA)
        /dev/sda5       270018560   328210431    29095936   83  Linux
        /dev/sda6       328212480  3907028991  1789408256    7  HPFS/NTFS/exFAT

    Now I would like to fix this so the partitions are arranged correctly, but I have no idea which tool is capable of it (I tried a few; some of them offered fixes, but it was too risky at that moment, as I was still backing up data).
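
    Whatever repair tool ends up being used, it is worth first saving the table exactly as fdisk currently sees it; sfdisk can dump it in a form it can also write back:

        # save the current partition table to a file
        sfdisk -d /dev/sda > sda-table.backup

        # if a repair attempt makes things worse, restore the dump
        sfdisk /dev/sda < sda-table.backup

    That makes experiments with testdisk's write function (or parted's rescue command) reversible at the table level, even though it obviously cannot undo writes to partition contents.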

    Read the article

  • Problems creating a functioning table

    - by Hoser
    This is a pretty simple SQL query, I would assume, but I'm having problems getting it to work.

        if (object_id('#InfoTable') is not null)
        Begin
            Drop Table #InfoTable
        End

        create table #InfoTable
        (NameOfObject varchar(50), NameOfCounter varchar(50), SampledValue float(30), DayStamp datetime)

        insert into #InfoTable(NameOfObject, NameOfCounter, SampledValue, DayStamp)
        select vPerformanceRule.ObjectName AS NameOfObject,
               vPerformanceRule.CounterName AS NameOfCounter,
               Perf.vPerfRaw.SampleValue AS SampledValue,
               Perf.vPerfHourly.DateTime AS DayStamp
        from vPerformanceRule, vPerformanceRuleInstance, Perf.vPerfHourly, Perf.vPerfRaw
        where (ObjectName like 'Logical Disk' and CounterName like '% Free Space'
               AND SampleValue > 95 AND SampleValue < 100)
        order by DayStamp desc

        select NameOfObject, NameOfCounter, SampledValue, DayStamp
        from #InfoTable

        Drop Table #InfoTable

    I've tried various other forms of syntax, but no matter what I do, I get these error messages:

        Msg 207, Level 16, State 1, Line 10
        Invalid column name 'NameOfObject'.
        Msg 207, Level 16, State 1, Line 10
        Invalid column name 'NameOfCounter'.
        Msg 207, Level 16, State 1, Line 10
        Invalid column name 'SampledValue'.
        Msg 207, Level 16, State 1, Line 10
        Invalid column name 'DayStamp'.
        Msg 207, Level 16, State 1, Line 22
        Invalid column name 'NameOfObject'.
        Msg 207, Level 16, State 1, Line 22
        Invalid column name 'NameOfCounter'.
        Msg 207, Level 16, State 1, Line 22
        Invalid column name 'SampledValue'.
        Msg 207, Level 16, State 1, Line 22
        Invalid column name 'DayStamp'.

    Line 10 is the first 'insert into' line, and line 22 is the second select line. Any ideas?
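
    A hedged observation rather than a sure fix: object_id('#InfoTable') looks in the current database, while temp tables live in tempdb, so that existence check never fires; the usual form is object_id('tempdb..#InfoTable'). If an earlier run left a #InfoTable behind with different columns, the batch compiles against that stale definition and every column name comes up invalid, which would match the Msg 207 errors above.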

    Read the article

  • What will happen if my DB server runs out of space?

    - by Noam
    I'm seeing a huge difference in free disk space between df -h and du -sxh /. I learned in my question Resolving unix server disk space not adding up that du -sxh / is the better estimate of when I will run out of disk space. Having said that, assuming that conclusion proves wrong in my case and I do run out of disk space soon, what will happen? I assume MySQL will fail INSERT queries, but other than that, will I just need to delete some files, or will it be a genuinely problematic situation?
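
    The df/du gap itself has a classic explanation worth ruling out first: space held by files that were deleted while a process still has them open (rotated logs and purged binlogs are common culprits). df counts that space; du cannot see it. lsof lists such files:

        # open files whose on-disk link count is zero (deleted but still held open)
        lsof +L1

    As for actually filling up: MySQL's documented behaviour on a full disk is to block the write and retry roughly once a minute until space frees up, so the visible symptom tends to be hung INSERTs rather than clean errors.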

    Read the article

  • Creating Test Sites

    - by Robert
    I have a website running off-site. When we hire someone, I would like to create a test site (a copy of the live site) for the new employee to tinker with. I will need to take fresh copies of the files and database (basically a snapshot) and let them access these copies so they can edit and upload files and see their changes as if it were the live site. Basically: what is the best practice for creating a copy of a website for testing? The server is running Linux, PHP, and MySQL.
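
    A minimal sketch of the snapshot step, assuming shell access on the server; all names and paths are placeholders:

        #!/bin/sh
        # clone the live site into a per-employee sandbox
        USER=newhire
        SITE=/var/www/live
        DB=livedb

        # copy the files
        rsync -a "$SITE/" "/var/www/test-$USER/"

        # copy the database into a fresh one
        mysqladmin create "test_$USER"
        mysqldump "$DB" | mysql "test_$USER"

        # point the copy's settings at the new database (config filename is an assumption)
        sed -i "s/$DB/test_$USER/g" "/var/www/test-$USER/config.php"

    Serve each sandbox from its own vhost or subdomain (e.g. test-newhire.example.com) so edits and uploads behave like the live site without touching it.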

    Read the article

  • Shell pipe behavior with mysqldump

    - by unknown (google)
    I am using mysqldump for a large database (several GB) and importing the result through a pipe (see the command below). Does the pipe stream incrementally, or does it wait until the dump finishes before importing? Is this a good way of importing a large database across servers? I know you can export to a gzipped file, pscp it, then import it. Quick alternatives are welcome.

        mysqldump -u root -ppass -q mydatabase | mysql -u root -ppass --host=xxx.xx.xxx.xx --port=3306 -C mydatabase
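
    The pipe does stream: mysql begins executing statements as mysqldump emits them, so the import overlaps the export rather than waiting for it. The -C flag in the command above already requests client/server protocol compression where both sides support it. If the link is slow, a hedged alternative is to compress explicitly and run the client on the destination host:

        # compress on the wire; the mysql client runs remotely over SSH
        mysqldump -u root -ppass -q mydatabase |
          gzip -c |
          ssh user@xxx.xx.xxx.xx 'gunzip -c | mysql -u root -ppass mydatabase'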

    Read the article
