Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • Files Corrupted on System Restore

    - by Yar
    I restored OSX 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup; the data directories were not touched. I am now seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs (such as zip) take the files at face value and report no permission problems, but they too see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, none of which change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
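
    If ACLs or extended attributes are what is blocking the copy, inspecting and stripping them is one hedged place to start (these are standard OS X tools, but whether they help depends on what the restore actually damaged; the path is taken from the error above):

        # show ACL entries (-e) and extended attributes (-@) on one affected object
        ls -le@ .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca

        # strip ACLs and extended attributes from the whole directory, then retry the copy
        sudo chmod -R -N .git
        sudo xattr -rc .git    # -r is recursive; if that flag is missing on 10.6, run xattr -c per file

    Note that if the files really are 0 bytes on disk, permissions are a side issue -- there is nothing left to copy.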

    Read the article

  • Creating an SSH tunnel to transfer files?

    - by Vincent
    For me, networks are a very "opaque" thing, and even after reading a lot of tutorials about SSH, I do not understand how to create a basic tunnel to transfer my files. The configuration is the following:

        My Computer --[Internet]--> Bridge Machine --[Local Network]--> Final Machine

    Currently I do the following:
    1) Connect to the bridge machine: ssh -X [email protected]
    2) Connect to the final machine: ssh -X username@finalmachine
    3) Copy the path of the files I need (for example .../mydirectory)
    4) Disconnect from the final machine: exit
    5) Copy the files to the bridge: scp -r username@finalmachine:/.../mydirectory .
    6) Disconnect from the bridge: exit
    7) Copy the files to my machine: scp -r [email protected]:/.../mydirectory .
    Which is quite complicated. My question is basic: how can I simplify this using an SSH tunnel? (Please explain what each command you write actually does, so I can understand it rather than treating it as a magic incantation. Also, if port numbers are involved, explain whether I can pick a completely random number or whether I have to choose a specific one.)
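
    One common shortcut is a local port forward through the bridge -- a sketch, assuming OpenSSH on all three machines and using 2222 as an arbitrary free local port:

        # forward local port 2222, via the bridge, to port 22 on finalmachine;
        # this session stays open and carries the tunnel
        ssh -L 2222:finalmachine:22 [email protected]

        # in a second terminal, copy straight from finalmachine through that tunnel
        scp -P 2222 -r username@localhost:/.../mydirectory .

    Here -L local_port:target_host:target_port tells the bridge to relay anything sent to localhost:2222 on to finalmachine:22, so scp talks to "localhost" but the data lands on the final machine. The local port can be any unused number above 1024; the remote port must be 22 because that is where finalmachine's SSH server listens. Newer OpenSSH clients can collapse this into one command with scp -o ProxyJump=[email protected] ...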

    Read the article

  • APC not caching many files

    - by tetranz
    Hello, I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory. Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2. The config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks

        APC Version              3.1.6
        PHP Version              5.2.10-2ubuntu6.5
        APC Host                 xxx.example.com
        Server Software          Apache/2.2.12 (Ubuntu)
        Shared Memory            1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time               2010/12/02 11:32:17
        Uptime                   3 minutes
        File Upload Support      1

        File Cache Information
        Cached Files                 21 (1.4 MBytes)
        Hits                         169
        Misses                       21
        Request Rate (hits, misses)  1.00 cache requests/second
        Hit Rate                     0.89 cache requests/second
        Miss Rate                    0.11 cache requests/second
        Insert Rate                  0.17 cache requests/second
        Cache full count             0

        User Cache Information
        Cached Variables             0 (0.0 Bytes)
        Hits                         0
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

        Runtime Settings
        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           1M
        apc.mmap_file_mask
        apc.num_files_hint          1000
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.shm_segments            1
        apc.shm_size                32M
        apc.slam_defense            1
        apc.stat                    1
        apc.stat_ctime              0
        apc.ttl                     0
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                0
        apc.write_lock              1
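
    One cheap sanity check before digging into APC itself is to confirm how many PHP source files this Drupal tree really contains, so the "Cached Files 21" figure can be compared against reality after clicking around the site -- a hedged sketch (the docroot path is an assumption):

        # count the PHP-ish source files Drupal could ask APC to cache
        find /var/www/drupal -type f \( -name '*.php' -o -name '*.inc' -o -name '*.module' -o -name '*.install' \) | wc -l

    With apc.stat = 1 and only three minutes of uptime a low number is not conclusive on its own, but if repeated browsing never pushes Cached Files past a couple of dozen while the count above is in the hundreds, the cache genuinely is not being used.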

    Read the article

  • Process files in a folder that haven't previously been processed

    - by Paul
    I have a series of files in a directory that I need to carry out an action on using a script. Once the action is done, I want to keep a log that the file has been processed, so that the next time the script is run it does not attempt to carry out the action again. So let's say I can find all the files that should be processed like this:

        for i in `find /logfolder -name '20*.log'` ; do
            process_log $i
            echo $i >> processedlogsfile
        done

    So I have a file containing the logs I have processed, and my goal would be to modify the for loop so that these processed logs are not processed a second time. Doing a manual scan each time seems inefficient, particularly as processedlogsfile gets bigger:

        if grep -iq "$i" processedlogsfile ; then continue; fi

    It would be good if these files could be excluded when setting up the for loop. Note that the OS in question is a Linux derivative, a management appliance, with a limited toolset (no attr command, for example) and no way to install additional utilities (well, it is possible, but not an option). Most common bash shell commands are available, though. Also, the filenames and locations of the processed files must remain where they are - they can't be altered to reflect their processed status.
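
    One way to keep the scan out of the loop is to filter the candidate list up front -- a sketch, assuming GNU grep and that paths are recorded in processedlogsfile exactly as find prints them:

        touch processedlogsfile    # make sure the list exists on the first run

        find /logfolder -name '20*.log' | grep -vxFf processedlogsfile | while read -r i ; do
            process_log "$i"
            echo "$i" >> processedlogsfile
        done

    grep -F treats each line of processedlogsfile as a fixed string, -x requires whole-line matches, -f reads the patterns from the file, and -v inverts the match, so only paths that have never been logged reach the loop; the already-processed list is read once per run instead of once per file.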

    Read the article

  • Move postfix maildir files from one mail server to another

    - by Tauren
    I have a new mail server configured as described in this howto: http://howtoforge.com/virtual-users-domains-postfix-courier-mysql-squirrelmail-ubuntu-9.10 I also have an ancient mail server configured very similarly (using the same HOWTO, just for Fedora Core 6, if I recall correctly). Earlier today I had to switch from the old server to the new one, and the old one is no longer online. However, after I had migrated everything and switched it all over, I discovered a bunch of undelivered mail in the queue. It got delivered to the local mailboxes on the old server, so now there are a bunch of messages on it that I'd like to move to the new server. The new server has already received new messages, so I need to merge the files together somehow. For each user with an email of [email protected], there are files like this on both servers:

        /home/vmail/customer.com/username/maildirsize
        /home/vmail/customer.com/username/courierpop3dsizelist
        /home/vmail/customer.com/username/new/1271481177.Vca01I6006bM580357.mailhost.mydomain.com

    Can I simply copy the hundreds of files in the various new directories on the old server to the corresponding new directories on the new server? Will the maildirsize and courierpop3dsizelist files get updated automatically, or do I need to do something to update them?
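
    A hedged sketch of what such a merge could look like, assuming the old server's mail store is reachable as a mounted backup at /mnt/oldmail (a made-up path) and that the vmail user/group from the HowtoForge setup owns the store:

        # copy only the messages, preserving timestamps and permissions; add -n first for a dry run
        rsync -av /mnt/oldmail/customer.com/username/new/ /home/vmail/customer.com/username/new/
        chown -R vmail:vmail /home/vmail/customer.com/username/new

        # maildirsize is a Maildir++ quota cache; if it is deleted, Courier rebuilds it on the next delivery or login
        rm /home/vmail/customer.com/username/maildirsize

    Maildir message filenames are built from a timestamp, a delivery ID and the hostname, so copying new/ files from two servers into one directory should not collide, but verifying a single mailbox by hand before scripting the rest is the cautious route.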

    Read the article

  • Missing boot files in Windows 8

    - by Alex F. Sherman
    I had a partition with the Windows 8 Release Preview, Windows' System Reserved partition, and empty space at the beginning of the disk. I moved the two partitions to the beginning of the disk using an Ubuntu Live CD and GParted. After that, the Windows loader didn't boot and threw an error about missing files. I fixed it using the commands:

        bootsect /nt60 sys /force /mbr
        bootrec /rebuildbcd
        bootrec /fixboot
        bootrec /fixmbr

    When I used the "Automatic repair" option from the "Advanced boot" menu, it threw an error like: Windows can't fix your boot problems. For more information see file C:\Windows\System32\LogFiles\Srt\SrtTrail.txt. In this file I found a description of the system repair actions and, at the end of the file: "Boot status indicates that the OS booted successfully." Now, when I use the Advanced boot menu from Windows 8 (PC settings - General - Advanced startup) I receive an error: "Restart your PC to try again. It looks like something didn't load correctly. Restarting might fix the problem. If this happens more than once, you might also be able to find help by searching online for the specific error code. Error code: 0x80070490." 0x80070490 is the error code ERROR_NOT_FOUND. What are the missing boot files and how can I restore them? List of files in the System Reserved partition:

        B:\bootmgr
        B:\BOOTNXT
        B:\Boot\BCD
        B:\Boot\BCD.LOG
        B:\Boot\BCD.LOG1
        B:\Boot\BCD.LOG2
        B:\Boot\BOOTSTAT.DAT
        B:\Boot\Fonts
        B:\Boot\memtest.exe
        B:\Boot\qps-ploc
        B:\Boot\Resources
        B:\Boot\Resources\bootres.dll

    and many *.mui and *.ttf files.

    Read the article

  • Recovering data from mongodb raw files

    - by Jin Chen
    We use MongoDB for our database and set up a replica set (two servers), but we mistakenly deleted some raw files under /path/to/dbdata on both servers. After we used a tool to get back the deleted files (we ran extundelete on both servers and merged the results), such as database.1, database.2, etc., we could not start mongod; it raised the following error when starting mongod or executing mongodump. Here is the console output:

        root@mongod:/opt/mongodb# mongodump --repair --dbpath /opt/mongodb -d database_production
        Thu Aug 21 16:22:43.258 [tools] warning: repair is a work in progress
        Thu Aug 21 16:22:43.258 [tools] going to try and recover data from: database_production
        Thu Aug 21 16:22:43.262 [tools] Assertion failure isOk() src/mongo/db/pdfile.h 392
        0xde1b01 0xda42fd 0x8ae325 0x8ac492 0x8bd8e0 0x8c1c51 0x80e345 0x80e607 0x80e6a4 0x6db92a 0x6dc1ff 0x6e0db9 0xd9e45e 0x6ccdc7 0x7f499d856ead 0x6ccc29
        mongodump(_ZN5mongo15printStackTraceERSo+0x21) [0xde1b01]
        mongodump(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda42fd]
        mongodump(_ZNK5mongo7Forward4nextERKNS_7DiskLocE+0x1a5) [0x8ae325]
        mongodump(_ZN5mongo11BasicCursor7advanceEv+0x82) [0x8ac492]
        mongodump(_ZN5mongo8Database19clearTmpCollectionsEv+0x160) [0x8bd8e0]
        mongodump(_ZN5mongo14DatabaseHolder11getOrCreateERKSsS2_Rb+0x7b1) [0x8c1c51]
        mongodump(_ZN5mongo6Client7Context11_finishInitEv+0x65) [0x80e345]
        mongodump(_ZN5mongo6Client7ContextC1ERKSsS3_b+0x87) [0x80e607]
        mongodump(ZN5mongo6Client12WriteContextC1ERKSsS3+0x54) [0x80e6a4]
        mongodump(_ZN4Dump7_repairESs+0x3a) [0x6db92a]
        mongodump(_ZN4Dump6repairEv+0x2df) [0x6dc1ff]
        mongodump(_ZN4Dump3runEv+0x1b9) [0x6e0db9]
        mongodump(_ZN5mongo4Tool4mainEiPPc+0x13de) [0xd9e45e]
        mongodump(main+0x37) [0x6ccdc7]
        /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f499d856ead]
        mongodump(__gxx_personality_v0+0x471) [0x6ccc29]
        assertion: 0 assertion src/mongo/db/pdfile.h:392
        Thu Aug 21 16:22:43.271 dbexit:
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close listening sockets...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to flush diaglog...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close sockets...
        Thu Aug 21 16:22:43.272 [tools] shutdown: waiting for fs preallocator...
        Thu Aug 21 16:22:43.272 [tools] shutdown: closing all files...
        Thu Aug 21 16:22:43.273 [tools] closeAllFiles() finished
        Thu Aug 21 16:22:43.273 [tools] shutdown: removing fs lock...
        Thu Aug 21 16:22:43.273 dbexit: really exiting now

    Our environment: 1) Debian 3.2.35-2 x86_64 (a Xen virtual machine); 2) MongoDB 2.4.6. We did not delete the .0 and .ns files. We tried to create a new database with the same name and copy db.ns and db.2, db.3 into the new db, but we still got the same error. Is there any way to check the validity of the raw .ns and data files, and how can we recover the database?
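
    A hedged recovery sketch (working on a copy so the recovered files are not damaged further; paths and port are illustrative, and whether anything survives depends on how intact the recovered extents are):

        # work on a copy of the data directory, never the only copy you have
        cp -a /opt/mongodb /opt/mongodb_repair

        # let mongod itself attempt the repair on the copy, with the normal daemon stopped
        mongod --dbpath /opt/mongodb_repair --repair

        # if the repair completes, start a temporary instance and dump what survived
        mongod --dbpath /opt/mongodb_repair --port 27027 --fork --logpath /tmp/repair.log
        mongodump --port 27027 -d database_production -o /opt/dump

    mongod --repair rewrites the data files collection by collection and skips what it cannot read, so a partial dump is a realistic best case when extents recovered by extundelete are mixed from two replica-set members.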

    Read the article

  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi, I have all kinds of files that I have downloaded from everywhere over the years scattered around my hard drives. I’m in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt. For media files, this is relatively simple to determine (it requires actually opening and examining the file). For executables or other binaries, it is relatively difficult (executables may or may not crash, other binaries could be completely unknown). Archive files (eg zip, rar, 7z, exe, ace, etc.) however should be pretty simple, they have a built-in corruption detection facility. My problem now is that I really don’t want to open and test each and every single archived file throughout my drive; that would be a nightmare. I’m looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
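
    If the 7-Zip command-line tool is available (an assumption; it can test zip, rar, 7z and several other formats), the check can be scripted rather than done archive by archive -- a sketch for a Unix-style shell such as Cygwin or Git Bash:

        # test every archive under /path/to/files and print only the ones whose checksums fail
        find /path/to/files -type f \( -iname '*.zip' -o -iname '*.rar' -o -iname '*.7z' \) -print0 |
        while IFS= read -r -d '' f; do
            7z t "$f" > /dev/null 2>&1 || echo "CORRUPT: $f"
        done

    7z t exits non-zero when an archive's built-in integrity data does not verify, so the output is exactly the list of suspect files.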

    Read the article

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight the question looks similar to this one. I have experienced odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue clearer. When trying to copy the compressed file, sized ~2.5 MB (via Explorer or CMD, it doesn't matter), the process stalls after some 20%, producing an error message after a few seconds: "Cannot copy filename: The specified network name is no longer available." If I start the command ping -t 192.168.2.1 (where the IP address specified belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens, all network activity is frozen. After a few seconds the network recovers and ping continues to run normally; however, the copy process stands still before it displays the above error message. Copying other files (I tried 4-5 files), of which some are larger and some are smaller, succeeds. It seems that I can copy all uncompressed files; as soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without using VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even these "problematic" files. Does anyone have any clue about what could possibly be going on?

    Read the article

  • Utility to LOGICALLY compare two xml files?

    - by Matthew
    Right now we are attempting to build golden configurations for our environment. One piece of software that we use relies on large XML files to contain the bulk of its configuration. We want to take our lab environment, catalog it as our "golden configuration" and then be able to audit against that configuration in the future. Since diff is a bytewise comparison and NOT a logical comparison, we can't use it to compare files in this case (XML is unordered, so it won't work). What I am looking for is something that can parse the two XML files and compare them element by element. So far we have yet to find any utilities that can do this. OS doesn't matter; I can do it on anything where it will work. The preference is something off the shelf. Any ideas? Edit: One issue we have run into is that one vendor's config files will occasionally mention the same element several times, each time with different attributes. Whatever diff utility we use would need to be able to identify either the set of attributes or identify them all as part of one element. Tall order :)
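
    As a partial step, canonicalizing both files before diffing strips out attribute-order and whitespace noise -- a sketch assuming libxml2's xmllint is installed; note that C14N does not reorder elements, so genuinely unordered content still needs a dedicated XML-diff tool on top of this:

        xmllint --c14n golden.xml  > golden.c14n
        xmllint --c14n current.xml > current.c14n
        diff -u golden.c14n current.c14n

    This at least turns "logically identical but serialized differently" into a byte-identical comparison, which narrows the remaining problem to element ordering and the repeated-element case mentioned in the edit.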

    Read the article

  • Lost Permission on Files using wrong chmod syntax Centos 5.5

    - by alloutfallout
    Hello, I was trying to remove write permissions on an entire directory, and I used the incorrect command:

        chmod 644 -r sites/default

    I meant to type:

        chmod -R 644 sites/default

    The result was this:

        chmod: cannot access `644': No such file or directory
        $ ls -als sites
        total 24
        4 drwxr-xr-x  5 user group 4096 Jan 11 10:54 .
        4 drwxrwxr-x 14 user group 4096 Jan 11 10:11 ..
        4 drwxr-xr-x  4 user group 4096 Jan  5 01:25 all
        4 d-w-------  3 user group 4096 Jan 11 10:43 default
        4 -rw-r--r--  1 user group 1849 Apr 15  2010 example.sites.php

    I fixed the permissions on the default folder with:

        $ chmod 644 sites/default

    But the following ls shows all the files with red backgrounds and question marks, and I can't access any files unless I am root:

        $ ls -als sites/default
        total 0
        ? ?--------- ? ? ? ? ? .
        ? ?--------- ? ? ? ? ? ..
        ? ?--------- ? ? ? ? ? default.settings.php
        ? ?--------- ? ? ? ? ? files
        ? ?--------- ? ? ? ? ? settings.php

    When I log in as root, I can edit all of the files, and their permissions appear correctly. I do not know how to undo the damage caused by using -r with chmod instead of -R. Any suggestions?
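
    The question marks are the classic symptom of a directory missing its execute (search) bit: mode 644 on a directory lets a user list the names inside it but not stat or open them, which is why everything shows as ? for a normal user while root still works. A hedged reading of the error message ("cannot access `644'") is that chmod took -r as the mode "remove read" and only touched sites/default itself, not its contents, so restoring the directory's own bits may be all that is needed:

        chmod 755 sites/default    # directories need r to list entries and x to enter/stat them
        ls -als sites/default      # re-check; the contents should show their original permissions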

    Read the article

  • Using psftp to upload and download files

    - by macha
    Hello, I am trying to upload and download files between my desktop and my server. After some searching I downloaded psftp; I used to use FileZilla, but I cannot install it on my desktop for a few reasons. psftp (like PuTTY) is just an executable for file transfer, and after going through this link http://www.math.tamu.edu/~mpilant/math696/psftp.html I understood that put and get are the two commands I would use to upload and download files. Now when I log on to the server and say get filename, it throws back the error "local: unable to open filename". I tried that with other files too, and I end up getting the same error. The psftp.exe file is on my desktop. The process that I am using is:

        double-click the .exe file
        open "servername"
        cd /path/where/files/are
        get "filename"

    And I get this error: "local: unable to open filename". Am I making a mistake or is it a problem with this executable?
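
    "local: unable to open filename" is psftp complaining about the local side of the transfer: it cannot create the file in whatever local directory it is currently working in. A hedged thing to try from the psftp prompt (the local path below is just an example of a writable folder):

        psftp> lpwd
        psftp> lcd C:\Users\you\Downloads
        psftp> cd /path/where/files/are
        psftp> get filename

    lpwd prints the local directory downloads will land in, and lcd changes it; if lpwd points somewhere that cannot be written to (or a file of the same name is locked there), get fails with exactly this message.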

    Read the article

  • Compress a folder of PDF files into separate zip archives

    - by Panrubius
    I wanted to take a folder full of PDF files and create a number of separate zip files. After following the advice on this question, everything worked *almost* perfectly. Here's what happened: when I issued this command in Terminal:

        zip -s 5m -r ~/Desktop/invoices ~/Desktop/Invoices/

    everything worked really well, in that I got 11 ZIP files of approximately 5 MB each, placed in the folder specified. However, the files it outputted were named as follows:

        invoices.z01
        invoices.z02
        invoices.z03
        invoices.z04
        invoices.z05
        invoices.z06
        invoices.z07
        invoices.z08
        invoices.z09
        invoices.z10
        invoices.zip

    So as you can see, only invoices.zip has been named correctly. I could go through and rename them one by one, but seriously, if we start doing that then what in the name of Evolution are computers for?! Now, I am also aware that I'm relatively new to the Terminal, so I could be making a very silly mistake somewhere. If that's the case, please be patient :-) Any help would be greatly appreciated. One last note: I'm quadriplegic, so I would like to avoid GUI applications as much as possible; I use voice recognition software, you see, so working in the Terminal is much, much easier.
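
    For what it's worth, the .z01 ... .z10 names are how Info-ZIP labels the volumes of a single split archive: they are pieces of one archive, not eleven independent zips, and renaming them would break extraction. If the end goal is really one ordinary archive (or a clean starting point), the split set can be rejoined -- a sketch, assuming the same zip build that created it:

        # -s 0 reassembles the split volumes into one conventional archive
        zip -s 0 ~/Desktop/invoices.zip --out ~/Desktop/invoices-joined.zip

    Getting genuinely independent ~5 MB zips, each extractable on its own, would instead mean grouping the PDFs into batches and zipping each batch separately.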

    Read the article

  • Write permissions on uploaded files - PHP & Linux

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just setup (I am a novice to servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on public accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
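
    Since the upload happens over FTP, the 600 mode is typically chosen by the FTP daemon on the dev box (via its umask) rather than by PHP itself. If that daemon happens to be vsftpd -- an assumption, since the question doesn't say which server is installed -- the relevant knob would look like this:

        # /etc/vsftpd.conf  (only applies if vsftpd is the FTP server in use)
        local_umask=022

        # then restart the daemon (Debian Lenny init script)
        /etc/init.d/vsftpd restart

    A umask of 022 yields 644 files, which Apache can read; proftpd and pure-ftpd have equivalent umask settings if one of those is running instead.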

    Read the article

  • Permanently deleting files on Mac OS

    - by Jonik
    A while back, as relatively new Mac OS X user, I was surprised to learn that you cannot easily delete files. Directly, that is, without moving them to the trash first. On Windows and Linux this can obviously be done with ease, but not so on the Mac. I noticed this when trying clear up files from a USB memory stick — removing the files ("move to trash") does not free up space; that happens only after emptying the whole system-wide Trash. Not particularly convenient! (It seems stupid to have to empty the whole trashcan just to make some space on the USB stick. There might be gigabytes of stuff in there, and this sort of defeats its purpose - what if you'd actually need to restore something from the trash some day.) So, what's your way of getting around this? Have you bought a 3rd party application like RAW Trash for $16.95 just to delete files, or do you diligently empty the trashcan whenever needed? Or did I miss something? Also, can you convince me that this is actually the way it should be — that users shouldn't be able to fiddle with the filesystem easily? :)
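
    For the USB-stick case specifically, deleting from Terminal skips the Trash entirely and frees the space immediately -- a hedged example (the volume name is made up, and rm is unrecoverable, which is precisely the trade-off being discussed):

        # delete a file on the stick without it ever entering the Trash
        rm /Volumes/MYSTICK/path/to/file.pdf

        # each volume also keeps its own hidden trash folder; emptying just that one
        # frees the stick without touching the system-wide Trash (501 is typically the first user's uid)
        rm -rf /Volumes/MYSTICK/.Trashes/501

    Neither needs third-party software, though it does mean stepping outside the Finder.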

    Read the article

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 which is at the same time a NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        176.9.116.102:/mydir  /nfs4mounts/mydir  nfs4  soft  0  0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
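
    For reference, the sync/async choice in question lives in the server-side export, not the client mount -- a hedged example of what the /etc/exports line might look like (the client address and extra options are illustrative):

        # /etc/exports on the server
        /nfs4exports/mydir  176.9.116.102(rw,async,no_subtree_check)

        # re-read the exports table without restarting the NFS server
        exportfs -ra

    With sync (the default), the server must commit each write and metadata change to stable storage before replying, which is what throttles thousands of tiny creates; with async it replies first, which is also why an fsync on the client no longer gives a hard guarantee if the server crashes.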

    Read the article

  • ActiveDirectory user files aren't being shared? [on hold]

    - by Ryan
    I'm taking a class in college and my current project is to set up Active Directory. I have two VMs: one is Windows Server 2008 R2 and the other is Windows 7. I set up the domain team15.net (this is the FQDN for the forest) on the Server 2008 machine, and then I connected the Windows 7 machine to the team15.net domain; now I can log in to the administrator account on the Server 2008 machine by using TEAM15\Administrator for the username. However, any files that I add in Windows 7 while logged into TEAM15\Administrator are NOT shown on the Server 2008 machine. Is this normal? It seems like any files I add for this user in the domain exist only on the machine that I'm currently using. Is it possible to change this so all files for all users are synced to all computers in the domain? If not, what are the alternatives? I noticed that back in high school we also used domains, but there was a shared drive E:\ where you had to store your files, because if you put them on the desktop instead they would suddenly disappear once you logged out and back in. How can I set up a shared drive, and which computer would provide the storage for this drive?

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run nginx as the webserver. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit that much. Am I correct, or am I missing something here? UPDATE: I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks of different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ nginx is actually faster at serving your static content, so it makes more sense to just let it pass. nginx works great with static files. Apart from that, most of the time static content is not even on the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3 or something like that. I think the Varnish cache is the last place where you want to have your static content stored.
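
    A quick way to see which layer actually answers for a static asset is to inspect the response headers -- a hedged example (the URL is illustrative, and which headers appear depends on how Varnish and nginx are configured):

        curl -sI http://example.com/wp-content/uploads/logo.png | egrep -i '^(age|x-varnish|x-cache|via|server):'

    An Age greater than 0, or two transaction IDs in X-Varnish, indicates the object came out of Varnish's cache; otherwise the request was passed through to nginx, which is exactly the behaviour being debated above.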

    Read the article

  • Changing open-files-limit in mysql 5.5

    - by davidv
    I'm having an issue with the open-files-limit parameter for MySQL 5.5 running on Ubuntu 12.04. I recently noticed some problems due to the 1024 limit, and the main system limit was indeed set to 1024, so I modified /etc/security/limits.conf with the following:

        *    soft nofile 32000
        *    hard nofile 32000
        root soft nofile 32000
        root hard nofile 32000

    After that I checked the ulimit value for root and even for the mysql user; both returned the new value, 32000, so I assume the change has taken effect. I also changed the value in the my.cnf file, setting open-files-limit to 24000, like this:

        open-files-limit = 24000

    Now comes the odd part: when I restart the mysql service and check the open_files_limit variable, it reports that it is still set to 1024, so I'm having the same problems as before (obviously). I tried both spellings (open-files-limit and open_files_limit) in the my.cnf config file -- same result. BUT if I bypass the service command and start the server using only mysqld (no additional parameters), it starts, and when I check the parameter it returns 32000... I don't know where it's taking that value from, as it's not set in my.cnf and it's not being given on the command line, at least not by me. Any ideas why the change isn't taking effect, and how to solve it the normal way (launching it through service ...)?
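
    One hedged explanation: on Ubuntu 12.04 the mysql service is started by Upstart, and pam_limits (/etc/security/limits.conf) only applies to PAM login sessions, not to daemons started by init -- which would also explain why launching mysqld by hand from a root shell inherits the 32000 ulimit. If that is what is happening here, the limit belongs in the Upstart job instead:

        # /etc/init/mysql.conf  (Upstart job; add near the top, then restart)
        limit nofile 32000 32000

        # apply and verify
        sudo service mysql restart
        mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"

    MySQL will then lower its own open_files_limit to whatever my.cnf requests (24000 here), but it can no longer be capped at 1024 by the process limit it was started with.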

    Read the article

  • PHP files are downloaded, not executed in UserDir on Apache

    - by Fabian
    We're running a webserver using Debian 6.0.3 with Apache 2; we recently upgraded from Debian 5 to 6. Since then, PHP scripts in the user directories (using mod_userdir) have stopped working: they are downloaded instead of being executed. There is also a website using PHP outside of the user directories, and that one continues to work fine, so PHP seems to generally work on the server. I tested it with several PHP files, among them a simple phpinfo file that works fine on the main site but is just downloaded when copied to one of the user directories. The PHP files and the directory containing them are executable for everyone. The option in the Apache php5.conf that by default disables PHP in the user directories is commented out, so the php5.conf looks like this:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            # To re-enable php in user directories comment the following lines
            # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
            # prevents .htaccess files from disabling it.
            #<IfModule mod_userdir.c>
            #    <Directory /home/*/public_html>
            #        php_admin_value engine Off
            #    </Directory>
            #</IfModule>
        </IfModule>

    We restarted Apache after changing this. I'm running out of ideas about what the problem could be, and I don't know how to determine what is preventing those PHP files from being executed. Any ideas on how I can solve this? Update: Strangely, PHP seems to work fine in subfolders of user directories, so if I copy a PHP file from /home/user/public_html/ to /home/user/public_html/test/ it suddenly works.
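
    A couple of hedged checks from the shell (Debian paths assumed) to see whether some other piece of configuration is switching the PHP engine off for public_html specifically, since the subfolder behaviour points at a per-directory setting:

        # is anything else disabling the engine for user directories?
        grep -rn "engine Off" /etc/apache2/ /home/*/public_html/.htaccess 2>/dev/null

        # confirm both relevant modules are actually loaded
        apache2ctl -t -D DUMP_MODULES | egrep 'php5|userdir'

    If php5 or userdir is missing from the module dump, a2enmod followed by an Apache restart would be the next step; if an extra "engine Off" turns up, that file is the one overriding php5.conf.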

    Read the article

  • Windows 7: Moving Program Files location during install using unattend.xml

    - by Shevek
    I am planning on using an unattend.xml to create a Windows 7 Ultimate 64-bit setup with Users and ProgramData on a 2nd drive. I have found many samples of how to do this (see below). However, I would also like to move Program Files to a 3rd drive as well, i.e.:

        C:\Windows             [SSD]
        D:\Users               [HDD1]
        D:\ProgramData         [HDD1]
        P:\Program Files       [HDD2]
        P:\Program Files (x86) [HDD2]

    I have found that this was possible using unattend.txt in XP, but all documentation or examples I find about Win 7 only mention Users and ProgramData, not Program Files. Is this possible using an answer file? Sample unattend.xml for Users and ProgramData:

        <?xml version="1.0" encoding="utf-8"?>
        <unattend xmlns="urn:schemas-microsoft-com:unattend">
          <settings pass="oobeSystem">
            <component name="Microsoft-Windows-Shell-Setup" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" processorArchitecture="amd64">
              <FolderLocations>
                <ProfilesDirectory>D:\Users</ProfilesDirectory>
                <ProgramData>D:\ProgramData</ProgramData>
              </FolderLocations>
            </component>
          </settings>
        </unattend>

    Read the article

  • How to back up a database with thousands of files

    - by Neal
    I am working with a Fedora server that runs a customized software package. The server software is quite old, and its database consists of 1,723 files. The database files are constantly changing - they continually grow, and changes are not necessarily appended to the end. So right now we back up every 24 hours, at midnight, when all users are off the system and the database is in an internally consistent state. The problem is that we have the potential to lose an entire day's worth of work, which would be unrecoverable. So I'd like to know if there is a way to take some sort of instantaneous snapshot of these database files that we could back up every 30 minutes or so. I've read about Linux LVM snapshots, and am thinking that I might be able to accomplish the goal by taking a snapshot, rsync'ing the files to a backup server, then dropping the snapshot. But I've never done this before, so I don't know if this is the "right" fix. Any ideas on this? Any better solutions? Thanks!
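
    A hedged sketch of that snapshot-and-rsync cycle (the volume group, logical volume, sizes and paths are all made up; this requires the database to live on an LVM logical volume with free space left in the volume group, and the snapshot only captures a crash-consistent image unless the application can be paused or flushed first):

        # create a snapshot of the volume holding the database files
        lvcreate --snapshot --size 5G --name dbsnap /dev/vg0/data

        # mount it read-only and ship the frozen view off-box
        mkdir -p /mnt/dbsnap
        mount -o ro /dev/vg0/dbsnap /mnt/dbsnap
        rsync -a /mnt/dbsnap/path/to/database/ backupserver:/backups/database/

        # tear the snapshot down so it doesn't fill up as the live volume changes
        umount /mnt/dbsnap
        lvremove -f /dev/vg0/dbsnap

    Run from cron every 30 minutes, this copies a self-consistent point-in-time view of all 1,723 files without taking users offline, at the cost of whatever consistency guarantees the application itself needs beyond "what was on disk at that instant".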

    Read the article

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site for which I would like to configure its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:

    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config:

        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com" maxInstances="4" idleTimeout="300" activityTimeout="30" requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe" queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
          <environmentVariables>
            <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
          </environmentVariables>
        </application>

    4) I then created a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing:

        register_globals = on

    5) I then ran test.php, which simply outputs everything the call to phpinfo() returns. At this point, I observe that the global setting for register_globals is off (as it should be), but the local setting for register_globals is also off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:

        Configuration File (php.ini) Path        C:\Windows
        Loaded Configuration File                C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files  (none)
        Additional .ini files parsed             (none)

    What am I messing up on, or is there a different way to go about this?

    Read the article

  • Tortoise SVN, find all non-added files?

    - by John Isaacks
    I am using TortoiseSVN on Windows. I have a deep directory structure (using CakePHP). After creating several files I went through and added them. New files show a ? next to them and a + once they are set to add, so it is easy to tell which files still need to be added; it is the same with folders. However, if a new file is inside an old folder, the folder will still show a green check mark -- you won't have any indicator telling you there is a new file inside. So I forgot to add one before a commit. This wouldn't be so bad, as I could just add it and recommit. However, I had also made this a tag, so I had to recommit to both the tag and the trunk. Anyway, this wouldn't have been an issue if there were an indicator on existing folders that contain new files. Is there a way to set this up?
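
    Outside of the overlay icons, two hedged ways to catch this before a commit: TortoiseSVN's "Check for modifications" dialog has an option to show unversioned files across the whole working copy, and the commit dialog can do the same if the show-unversioned checkbox is ticked. If a command-line Subversion client also happens to be installed, a single command from the working-copy root gives the same list:

        svn status

    Lines beginning with ? are items that exist on disk but have not been added to version control, however deep in the tree they sit, which is exactly the case the folder overlays hide.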

    Read the article
