Search Results

Search found 72319 results on 2893 pages for 'file explorer'.


  • MAMP Pro .xip.io, fixing urls with htaccess

    - by user3540018
    I've got all my websites set up with MAMP Pro. For instance, when I go to example.com, the browser displays the website that's set up on my iMac. Now I want to get MAMP Pro working so I can view my website on my other computers/devices (which are all hooked up to the same network). So far all I had to do was check the checkbox "via Xip.io (LAN only)", and now I can view my website on my other computers/devices within my LAN by simply going to example.com.10.0.1.13.xip.io.

    The problem is that on those other computers/devices, clicking a link gives me a 404 error: whenever I go to example.com/news I get the 404, but when I go to example.com.10.0.1.13.xip.io/news, THEN I get the right page. So in order to solve my problem I need to rewrite the URLs, so that whenever someone clicks a link, i.e. goes straight to example.com/news, he'll go to example.com.10.0.1.13.xip.io/news. I don't want to change all the links in my MySQL file, but I believe I can do it simply with the .htaccess file. I've opened the .htaccess file and added the last two lines, but it just doesn't work.

      <IfModule mod_rewrite.c>
      RewriteEngine On

      # Send would-be 404 requests to Craft
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
      RewriteRule (.+) index.php?p=$1 [QSA,L]

      RewriteCond %{HTTP_HOST} ^example\.com
      RewriteRule ^(.*)$ http://www.example.com.10.0.1.13.xip.io/$1 [R=permanent,L]
      </IfModule>

    Or perhaps I don't need to change the .htaccess file at all: is there something that I could be missing in the MAMP Pro settings, or perhaps a MAMP extension that I need?

    Read the article

  • How to setup Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running Lighttpd on Linux, I would like to be able to execute Python scripts the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as is done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI.

    Following Lighttpd's documentation, the following is the FastCGI part of my config file. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file:

      http://www.example.com/script.py
      [output=>] "python-fcgi: test"    (regardless of the content of script.py)

    I'm not interested in using any framework, but simply in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

      server.modules += ( "mod_fastcgi" )
      index-file.names += ( "index.php" )

      fastcgi.server = (
          ".php" => (
              "localhost" => (
                  "bin-path" => "/usr/bin/php-cgi",
                  "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                  "max-procs" => 4, # default value
                  "bin-environment" => (
                      "PHP_FCGI_CHILDREN" => "1", # default value
                  ),
                  "broken-scriptfilename" => "enable"
              )
          ),
          ".py" => (
              "python-fcgi" => (
                  "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                  "bin-path" => "/usr/local/bin/python-fcgi",
                  "check-local" => "disable",
                  "max-procs" => 1,
              )
          )
      )

    /usr/local/bin/python-fcgi:

      #!/usr/bin/python2
      def myapp(environ, start_response):
          start_response('200 OK', [('Content-Type', 'text/plain')])
          return ['python-fcgi: test\n']

      if __name__ == '__main__':
          from flup.server.fcgi import WSGIServer
          WSGIServer(myapp).run()
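    One way to get per-file execution through a single resident FastCGI process is to turn the python-fcgi script into a small dispatcher that loads whatever script Lighttpd resolved for the request. The sketch below is only an illustration, not a tested setup: it assumes Lighttpd passes the resolved path in SCRIPT_FILENAME and that each target script exposes a WSGI callable named application (both are assumptions, not taken from the question).

      #!/usr/bin/python2
      # Hedged sketch of a dispatching FastCGI handler (assumes flup is installed).
      import imp

      def dispatch(environ, start_response):
          # Lighttpd is assumed to place the resolved file path here.
          path = environ.get('SCRIPT_FILENAME', '')
          if not path.endswith('.py'):
              start_response('404 Not Found', [('Content-Type', 'text/plain')])
              return ['not found\n']
          # Load the requested script and delegate to its WSGI callable,
          # so one resident interpreter serves every *.py file.
          handler = imp.load_source('handler', path)
          return handler.application(environ, start_response)

      if __name__ == '__main__':
          from flup.server.fcgi import WSGIServer
          WSGIServer(dispatch).run()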

    Read the article

  • How to make multiple Excel files open in ONE window/instance of Excel 2003 in Win 7

    - by Mark
    I'm running Excel 2003 on my new Windows 7 machine. (There is also an Excel 2010 Starter preinstalled that I do not use.) I'm a heavy user of Excel; I use it all day, every day. I often have 10 or 15 sheets open at once, and many of them have cell references to each other. I also have a macro file that keeps all my shortcuts. On my old W2K machine, when I clicked on a .xls file or a shortcut to one, it would open that file in the existing instance of Excel. This is as it should be. I would have many files open in only one "window" or instance of Excel. All the files could interact with each other, the cross-file lookups worked, my macros worked, and I could switch between workbooks with CTRL Tab or CTRL F6 and move tabs from one workbook to another. On the new W7 machine, clicking on an icon opens a NEW INSTANCE of Excel every time. This is terribly frustrating. None of my connected spreadsheets work anymore. My macros don't work. I can't connect files, I can't move tabs. I'm stuck. I can't do my work! I can still open files in one instance by doing a CTRL-O and navigating, but I need my files to open in one instance on a click. I'm guessing this is a flaw in the registry, possibly because of the Starter Excel 2010 that came preloaded on my new machine. Can you walk me through a registry edit to fix this bug? Is there an easier way than a registry edit?

    Read the article

  • Configuring CakePHP on Hostgator

    - by yaeger
    I have absolutely no idea what I am doing wrong here. I have followed just about every guide there is for installing CakePHP on shared hosting and I am still having problems. I have also started over each time when following a guide. Maybe someone can help me out here, as I am out of options. Here is my current setup:

      /
      app
      webroot
      vendors
      lib
      cake
      public_html
      .htaccess
      index.php
      plugins

    I have configured the index.php file in public_html to point to the correct files. I have also done this in the index.php file located in the webroot folder. I am getting an Internal 500 server error and it says to check my logs for what the error specifically is; however, there are no logs being generated. I removed the .htaccess file from the public_html folder and I get the following errors:

      Warning: require(/app/webroot/index.php) [function.require]: failed to open stream: No such file or directory in /home/user/public_html/index.php on line 40

      Fatal error: require() [function.require]: Failed opening required '/app/webroot/index.php' (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/user/public_html/index.php on line 40

    Line 40 is:

      require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php';

    where DS = "/" and WEBROOT_DIR = "app". Anyone have any suggestions? I am at a loss as to what I am doing wrong.

    Read the article

  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a VSFTP daemon on a CentOS server, using an RSA certificate for logging in over explicit TLS. Now, I cannot upload more than 82 KB. With files under that limit there is no problem; the FTP works like a charm. But as soon as a file reaches 82 KB with FileZilla (81,952 bytes to be exact), the transfer stops, and the FTP client hangs until the timeout is reached.

    FTP client console:

      15:10:21 Command: STOR jquery-1.7.2.min.js
      15:10:21 Response: 150 Ok to send data.
      15:11:21 Error: Connection timed out
      15:11:21 Error: File transfer failed after transferring 82 KB in 60 seconds

    /var/log/vsftpd.log:

      FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js"
      FTP response: Client "x.x.x.x", "150 Ok to send data."
      OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec
      FTP response: Client "x.x.x.x", "226 File receive OK."   // NOT okay, file is bigger
      // No mention of error here

    I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it has no impact on the problem:

      trans_chunk_size=4096
      trans_chunk_size=8192
      trans_chunk_size=9999

    Of course, after every configuration change I restarted the server: /etc/init.d/vsftpd restart. What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage.

    Package info:

      $ yum info vsftpd
      Loaded plugins: fastestmirror
      Installed Packages
      Name       : vsftpd
      Arch       : x86_64
      Version    : 2.0.5
      Release    : 24.el5_8.1
      Size       : 286 k
      Repo       : installed
      Summary    : vsftpd - Very Secure Ftp Daemon
      URL        : http://vsftpd.beasts.org/
      License    : GPL
      Description: vsftpd is a Very Secure FTP daemon. It was written completely from scratch.

    Read the article

  • CHKDSK is unable to fix NTFS errors

    - by HackToHell
    After my PC shut down due to a power failure, I noticed several errors in Event Viewer:

      The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \Device\HarddiskVolume2.

    and

      The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:.

    So I forced a chkdsk check at startup, and it finds a stream of errors. Here is the output; it is smaller than the actual log because Event Viewer only seems to keep this much, and the same lines were repeated thousands of times:

      Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x198f2 is already in use.
      Deleting corrupt attribute record (128, "") from file record segment 104690.
      Attribute record of type 0x80 and instance tag 0x0 is cross linked

    Also, even after running CHKDSK, the same errors were being reported again, so I ran CHKDSK another time and it still loops over the same lines above without fixing the error. Can anyone tell me how to fix it?

    Read the article

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have jbossmq set up as an ha-singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a:

      javax.naming.NameNotFoundException: QueueConnectionFactory not bound

    error. I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's?

    The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysq-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database. I created a -services.xml file in the deploy directory with the queue definition, like the one below:

      <server>
        <mbean code="org.jboss.mq.server.jmx.Queue"
               name="jboss.mq.destination:service=Queue,name=myfirstqueue">
          <depends optional-attribute-name="DestinationManager">
            jboss.mq:service=DestinationManager
          </depends>
        </mbean>
      </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment)? Thanks

    Read the article

  • How to configure IIS 7.5 to allow special chars in Url for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow special chars in the URL for ASP.NET. This is important to support widespread legacy URLs on a new system. Sample URL:

      http://mydomain.com/FileWith%inTheName.html

    This would be encoded in the URL and requested as:

      http://mydomain.com/FileWith%25inTheName.html

    This simply works when creating a new web in IIS 7.5, placing a file with the percentage sign in the file name in the web root, and pointing the browser to it. It does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler when pointing to that URL. It does, however, display the requested URL correctly and also resolves it to the correct physical file (the information in the 'Physical Path' field on the server error page points to the physically available file). There are hints on how to enable this, so I followed the instructions on these websites step by step:

      http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/
      http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html

    The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and about an additional web.config change for this. I tried all that, and still can't get this running from an ASP.NET web application. And yes: I rebooted after applying the registry changes. So, what do I have to do in addition to the settings described in the above posts to support the legacy URLs which contain percentage characters? Additional info: application pool mode is Integrated. Bump after some days; any ideas, anyone?

    Read the article

  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads the configuration from a named.conf.custom file. As long as that file is only 3-4 zones, the bind9 daemon loads fine, but when I use the config file I need (about 10000 lines long), bind can't start up, and in the syslog I find this message:

      starting BIND 9.7.0-P1 -u bind
      Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
      Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
      Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
      Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
      Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
      Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
      Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
      Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the config file size bind9 is allowed to load?

    Read the article

  • Maximum limit of filepointer in php reached and not changeable

    - by mlaug
    I have a server with the current 5.3.x version installed. Since we are running a really simple and small server in PHP using sockets, which connects to a lot of clients over sockets, we needed to raise the open file limit. That has already been done on the server for the user that runs the server:

      # ulimit -a
      core file size          (blocks, -c) 0
      data seg size           (kbytes, -d) unlimited
      scheduling priority             (-e) 0
      file size               (blocks, -f) unlimited
      pending signals                 (-i) 29879
      max locked memory       (kbytes, -l) 64
      max memory size         (kbytes, -m) unlimited
      open files                      (-n) 8192
      pipe size            (512 bytes, -p) 8
      POSIX message queues     (bytes, -q) 819200
      real-time priority              (-r) 0
      stack size              (kbytes, -s) 8192
      cpu time               (seconds, -t) unlimited
      max user processes              (-u) 29879
      virtual memory          (kbytes, -v) unlimited
      file locks                      (-x) unlimited

    and we compiled PHP with --enable-fd-setsize=8192. Still, once in a while we are getting the following in our logs:

      [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning:  socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date.

    Does anyone know how to configure the Unix server and PHP correctly to make this work? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1
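    The warning points at two different ceilings: the per-process descriptor limit raised with ulimit (8192 here) and the fixed fd_set size that select()-style calls are compiled against (1024). The snippet below is purely an illustration of that distinction in Python 3, not PHP, and not a fix for this server: it assumes a Linux host, and the names in it are examples, not taken from the question. It shows how to read the descriptor limit and how poll/epoll-based multiplexing (selectors.DefaultSelector picks epoll on Linux) avoids the select() fd_set cap entirely.

      # Illustration only (Python 3, assumed Linux): descriptor limit vs. select()'s fd_set size.
      import resource
      import selectors
      import socket

      soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
      print("open files limit: soft=%d hard=%d" % (soft, hard))  # e.g. 8192

      # DefaultSelector uses epoll on Linux, which can watch descriptors
      # numbered far above 1024, unlike a select() call built with FD_SETSIZE=1024.
      sel = selectors.DefaultSelector()
      srv = socket.socket()
      srv.bind(("127.0.0.1", 0))
      srv.listen(128)
      srv.setblocking(False)
      sel.register(srv, selectors.EVENT_READ)
      print("watching fd %d with %s" % (srv.fileno(), type(sel).__name__))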

    Read the article

  • Force Juniper-network client to use split routing

    - by craibuc
    I'm using the Juniper client for OS X ('Network Connect') to access a client's VPN. It appears that the client is configured not to use split routing, and the client's VPN host is not willing to enable split routing. Is there a way for me to override this configuration, or to do something on my workstation so that non-client network traffic bypasses the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology.

    ** edit **

    The Juniper client changes my system's resolv.conf file from:

      nameserver 192.168.0.1

    to:

      search XXX.com [redacted]
      nameserver 10.30.16.140
      nameserver 10.30.8.140

    I've attempted to restore my preferred DNS entry to the file:

      $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf

    but this results in the following error:

      -bash: /etc/resolv.conf: Permission denied

    How does the super-user account not have access to this file? Is there a way to prevent the Juniper client from making changes to this file?

    Read the article

  • Excluding files from web logs

    - by Ray
    I originally tried this question on StackOverflow, but it was suggested that serverfault was a better choice. So, here it is... Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, CSS files, and scripts, which I can easily exclude by unchecking the 'log visits' check box in IIS in the folder properties. I would also like to exclude log entries for certain common requests which are not in their own folders, mostly 'favicon.ico', 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So, the question is, how do I tell IIS not to log these requests? And is there any reason that this is a bad idea? The purpose of doing this is to reduce the log file size and the amount of work the server has to do, to make the log files more manageable when I need to dig into them for troubleshooting, and for my own curiosity. I realize that log file parsers can skip the junk, but I am interested in reducing the raw files before parsing.

    Read the article

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records in each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide, where we have almost 200 databases residing. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files.

    Paul Randal, from http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx:

      Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database.

    Were there any other options I'm not aware of?

    Read the article

  • Windows, why 8 GB of RAM feel like a few MB?

    - by Desmond Hume
    I'm on Windows 7 x64 with a 4-core Intel i7 and 8 GB of RAM, but lately it feels like my computer's "RAM" is located solely on the hard drive. Task Manager shows that the total amount of memory used by the processes in the list is just about 1 GB. What has been happening on my computer for a few days now is that one program (Cataloger.exe) is continually processing large quantities of (rather big) files, repeatedly opening and reading them for the purposes of cataloging. It doesn't grow much in memory and stays at about 90 MB, yet the amount of data it processes in, say, 30 minutes can be measured in gigabytes. So my guess was that Windows file caching has something to do with it. After some research on the topic, I came across a program called RamMap that displays detailed info on a computer's RAM. Judging by its output, Windows keeps huge amounts of no-longer-needed data in RAM, redirecting new RAM allocation requests to the pagefile on the hard drive. Even after I close Cataloger.exe, RamMap reports the size of the mapped file as about the same for a long time afterwards. And it's not just this particular program: earlier I noticed that a similar slowdown occurred after massive file operations with other programs, so it's really not an exception. Whatever it is, it slows down the computer by something like 50 times. Opening a new tab in Chrome takes 20-30 seconds, opening a new program can take up to a minute, and due to the slowdown some programs even crash. So what do you think: is the problem hiding in file caching or somewhere else? How do I solve it?

    Read the article

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16, and Subversion 1.6.12. The Eclipse Help/About/Installed Software dialog says:

      Eclipse Platform 3.5.2
      Subclipse 1.0.0
      Version Control with Subversion 1.1.1

    The Subclipse wiki I followed is here. I have installed the libsvn-java package as discussed and added the line "-Djava.library.path=/usr/lib/jni" to the eclipse.ini file. I checked the Eclipse Help/About/Configuration settings and both of these lines are listed:

      eclipse.vmargs=-Djava.library.path=/usr/lib/jni
      java.library.path=/usr/lib/jni

    I checked that the files are in those directories. Still, when I check Preferences > Team > SVN, an error dialog shows:

      Failed to load JavaHL Library.
      These are the errors that were encountered:
      no libsvnjavahl.1 in java.library.path
      Incompatible JavaHL library loaded 1.3.x or later required

    I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then, I followed the instructions and placed that file INSIDE the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed and each one of them came back with this same error (posting only one for brevity):

      50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
              at java.io.FileOutputStream.open(Native Method)
              at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
              at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
              at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
              at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
              at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
              at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)

      FAILURES!!!
      Tests run: 50,  Failures: 0,  Errors: 50

    Any ideas as to how/why the "local_tmp/greek_files/iota" path is appended to the directory? I assume that's my problem. I'm also having a problem with the new repository location, as the directory location of my svn repository is one level above the home directory, which is prepended to whatever I place in the dialog box, resulting in this error:

      svn: '/home/ricalsin/file:/home/svn' does not exist

    Thank you for any help.

    Read the article

  • Why does hiberfil.sys come back from the dead on Windows 7?

    - by Corey White
    I have Windows 7 running on a small (40GB) partition, with 4GB ram. This means that the hiberfil.sys file created by Hibernate takes up a significant portion of the available diskspace. I would like to remove it. I am aware that I can disable Hibernate and remove hiberfil.sys by entering powercfg -h off in an elevated command prompt. This works -- the file is immediately removed, and after doing so, the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key is (correctly) set to 0. However, the next time I reboot the PC, hiberfil.sys returns from the dead, Hibernate is reenabled, and that registry key has returned to 1. I'm pretty much at my wits' end with this. Almost everything I can find online related to removing the hiberfil.sys file simply suggests using powercfg to turn off hibernation, and that appears to work for just about everyone. But it just keeps coming back for me! (Like a vampire, sucking up my disk space.) I did find one other thread from someone who seems to have had the same issue, but none of the suggestions there worked for the original poster (or for me). Still, I have tried everything listed there, including: Disabling hybrid sleep Disabling Hibernate through the command prompt, through the Power Options GUI, and through both (in both orders) Manually changing the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key Pretty much everything else I can think of! I do want to reiterate that I have no problem removing the file -- that works great. It just comes back after every reboot. I'm about ready to throw in the towel and just run a script on login to disable Hibernate each time, even though that seems like a crazily hacky "solution" . . . but I was hoping someone here could suggest something else, first. Thanks!
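    As a stopgap, the login-script workaround the poster mentions could be as small as invoking the same powercfg command quoted above on every logon. The sketch below is a hedged illustration in Python, not a recommendation; it assumes Python is installed and that the script is scheduled to run elevated at logon (the scheduling itself is not shown).

      # Hedged sketch: re-disable hibernation at logon using the command from the question.
      import subprocess

      subprocess.call(["powercfg", "-h", "off"])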

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin, reading them from a directory on my server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents() on a file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother attempted to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he also got an error reading the file. It seems that we are unable to read/import these files with ANY method other than root over SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange:

      - chmod 0777 on both the directory and the files
      - chown root, apache (the only two users I can think of that PHP would use)
      - importing SQL files with total size under both upload_max_filesize and post_max_size
      - PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
      - PHP safe mode is OFF (it was never ON)

    At the moment I have solved the issue for the smaller files by using the FILE UPLOAD method directly in phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files, as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!

    Read the article

  • Sync OneNote Notebooks to/on SkyDrive

    - by Sam
    I've got OneNote running on all the computers in our house, and several people use it all the time on several computers. The only drawback: I want to keep the copies of OneNote in sync without having to run a dedicated server myself. Right now one of my computers has a folder share that all the others sync to, but this is highly impractical since that computer is not always running. So my question is: is it possible to put the notebook files in a (private) SkyDrive folder and have all the computers sync there? This way all computers could stay in sync whenever they have access to the web. Can this be done? And, of course, how?

    [Update] Maybe I should not have taken knowledge about OneNote for granted: OneNote uses a proprietary file format, but has very good in-file syncing that works on network shares. A generic 'just sync the complete file' approach won't be useful at all, because I'd just get 'file has changed on server and on client' conflicts all the time. The sync needs to understand OneNote files and be able to sync their content, i.e. OneNote itself needs to sync the files, not some generic sync tool.

    Read the article

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality is to have DVD Shrink decrypt, rip, and decompress them. After that I usually end up with a high-quality (large) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence and combining all of the title sequence's .vob files into one file (similar to the "copy /b" command in Windows), and then changing the extension to .mpg (a more widely supported format than .vob). This lets me get a high-quality rip in about 40 minutes. The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 min. to 0 min. long. That is not a problem in itself (the video will still play all the way through), but if you pause it, when it is unpaused the video starts all the way over (also, fast forward and rewind don't work). I suspect that something is wrong with the way the timeline is encoded in the video file; various forums on the internet recommended using VirtualDub to fix the errors. But when I try to open the file, VirtualDub says that the file is not in MPEG-1 encoding and may be MPEG-2. Is there any way to fix this? PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and is dealing more with WMV files.
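    For reference, the "copy /b"-style join step described above (concatenating a title's .vob files into one output file) can be done in a few lines of Python. This is only a sketch of that one step, with hypothetical file names; it does not address the broken-timeline symptom, which comes from the container's timestamps rather than from the concatenation itself.

      # Minimal sketch of the binary-concatenation step described in the question.
      # The VTS file names below are hypothetical examples; real rips may differ.
      import shutil

      vob_parts = ["VTS_01_1.VOB", "VTS_01_2.VOB", "VTS_01_3.VOB"]

      with open("title.mpg", "wb") as out:
          for part in vob_parts:
              with open(part, "rb") as src:
                  # Stream each VOB into the output without loading it all into RAM.
                  shutil.copyfileobj(src, out)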

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories for my clients, and I need to control access to each repository individually: not just push access, but clone as well. I've got an .htaccess set which requires authentication globally:

      AuthUserFile /path/to/hgweb.passwd
      AuthGroupFile /dev/null
      AuthName "Chris Lawlor Client Mercurial Repositories"
      AuthType Basic

      <Limit GET POST PUT>
      Require valid-user
      </Limit>

      <FilesMatch "\.(htaccess|passwd|config|bak)$">
      Order Allow,Deny
      Deny from all
      </FilesMatch>

    Then in each repository I've got a .hg/hgrc file requiring a valid user:

      [web]
      allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well...

    All the documentation I've found for hgwebdir.cgi is incomplete. I've read:

      http://mercurial.selenic.com/wiki/HgWebDirStepByStep
      http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
      http://hgbook.red-bean.com/read/collaborating-with-other-people.html

    and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on WebFaction if it matters; it's set up roughly like this: http://docs.webfaction.com/software/mercurial.html

    Read the article

  • Can't connect to EC2 instance Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (UBUNTU 12.01 EC2) with my newly generated key:

      sh-3.2# ssh ec2-user@**** -v ****.pem
      OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /etc/ssh_config
      debug1: Applying options for *
      debug1: Connecting to **** [****] port 22.
      debug1: Connection established.
      debug1: permanently_set_uid: 0/0
      debug1: identity file /var/root/.ssh/id_rsa type -1
      debug1: identity file /var/root/.ssh/id_rsa-cert type -1
      debug1: identity file /var/root/.ssh/id_dsa type -1
      debug1: identity file /var/root/.ssh/id_dsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.6
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host '****' is known and matches the RSA host key.
      debug1: Found key in /var/root/.ssh/known_hosts:4
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /var/root/.ssh/id_rsa
      debug1: Trying private key: /var/root/.ssh/id_dsa
      debug1: No more authentication methods to try.

    Read the article

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1MB usually). What I want to get is:

      - 2 servers which have the fs mounted themselves and mirror the data
      - locking support (among reachable nodes)
      - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm OK with both servers doing read/write operations even if they split-brain. I'm also OK if a local process obtains a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while; that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's the full file, not one block coming from node1 and another block from node2. Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?

    Read the article

  • Trouble with backslash characters and rsyslog writing to postgres

    - by Flimzy
    I have rsyslog 4.6.4 configured to write mail logs to a PostgreSQL database. It all works fine until the log message contains a backslash, as in this example:

      Jun 12 11:37:46 dc5 postfix/smtp[26475]: Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:\pmta\spool\B\00000414, status = ERROR_DISK_FULL in "DATA" (in reply to end of DATA command))

    The above is the log entry as written to /var/log/mail.log. It is correct. The trouble is that the backslash characters in the file name are interpreted as escapes when sent to the following SQL recipe:

      $template dcdb, "SELECT rsyslog_insert(('%timereported:::date-rfc3339%'::TIMESTAMPTZ)::TIMESTAMP,'%msg:::escape-cc%'::TEXT,'%syslogtag%'::VARCHAR)",STDSQL
      :syslogtag, startswith, "postfix" :ompgsql:/var/run/postgresql,dc,root,;dcdb

    As a result, the rsyslog_insert() stored procedure gets the following value for msg:

      Vk0nYDKdH3sI: to=<[email protected], relay=----.---[---.---.---.---]:25, delay=1.5, delays=0.77/0.07/0.3/0.35, dsn=4.3.0, status=deferred (host ----.---[199.85.216.241] said: 451 4.3.0 Error writing to file d:pmtaspoolB

    The \p, \s, \B and \0 in the file name are interpreted by PostgreSQL as literal p, s, and B followed by a NULL character, thus terminating the string early. This behavior can be easily confirmed with:

      dc=# SELECT 'd:\pmta\spool\B\00000414';
        ?column?
      --------------
       d:pmtaspoolB
      (1 row)

      dc=#

    Is there a way to correct this problem? Is there a way I'm not finding in the rsyslog docs to turn \ into \\?
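    For what it's worth, the transformation being asked for ("turn \ into \\") amounts to doubling each backslash before the message reaches the SQL string literal. The snippet below is only an illustration of that string-level idea in Python, applied to the sample path from the log; it is not an rsyslog configuration and does not claim rsyslog supports it out of the box.

      # Illustration only: doubling backslashes so PostgreSQL's escape-string
      # parsing reproduces the original path instead of eating \p, \s, \B, \0.
      raw = r"d:\pmta\spool\B\00000414"      # path as it appears in mail.log
      escaped = raw.replace("\\", "\\\\")    # what the SQL literal would need

      print(raw)       # d:\pmta\spool\B\00000414
      print(escaped)   # d:\\pmta\\spool\\B\\00000414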

    Read the article

  • Can't write to samba share

    - by Tiddo
    I'm trying to set up a samba file server, but whatever I do I can't get write access to work (reading works fine). This is my current situation: I have a local fileserver with 3 hard disks mounted at /mnt/share/disk<nr>. Two of these use the ext4 filesystem, the third one is NTFS. This file server runs Fedora 18 32-bit. The root folders of these hard disks are owned by superman:superman, and testparm outputs the following:

      [global]
          workgroup = WORKGROUP
          netbios name = FILE_SERVER
          server string = Samba Server Version %v
          interfaces = lo, eth0, 192.168.123.191/8
          log file = /var/log/samba/log.%m
          max log size = 50
          unix extensions = No
          load printers = No
          idmap config * : backend = tdb
          hosts allow = 192.168.123.
          cups options = raw
          wide links = Yes

      [share]
          comment = Home Directories
          path = /home/share/
          write list = superman, @users
          force user = superman
          read only = No
          create mask = 0777
          directory mask = 0777
          inherit permissions = Yes
          guest ok = Yes

    I've tried a lot to get this to work: the disks are chmodded to 777, I've tried turning off SELinux, I've added the samba_share_t label to the disks, and as can be seen in the above output I tried to make the smb config as permissive as I could, but still I cannot write to the share (tried from Windows 7 and another Fedora installation). What can I try in order to be able to write to the shares?

    EDIT: The replies I got so far are mostly concerned with smb.conf. I have, however, tried a lot of different setups, ready-made configs, and solutions to similar problems in the smb.conf file, so I suspect that the real problem is somewhere else.

    Read the article

  • Is this way of using Excel 2007 Pivot table for BI scalable ?

    - by Sim
    Hi all,

    Background: we need to consolidate sales data across the country to do analysis. Our Internet connection/IT expertise/IT investment is not quite strong, therefore a full BI solution is out of the question. I tried several SaaS BI solutions (GoodData, ZohoReports) and while they're good, they don't seem to fully support what we need. We're looking at about 2 million records for every 2 months.

    My current approach: our (10) sites currently gather data from all their branches and consolidate it into 1 Excel file with a pivot table and embedded source data. At HQ, I will request the 10 sites to send back those Excel files periodically, and we will import them into our MSSQL server. There will be a master Excel file that has the same pivot table (as the ones in the site Excel files), with the MSSQL server as its data source.

    More details: for testing, I currently use MSSQL 2008 Express on my laptop. So far I have imported our transactions for the past 2 months, and there are 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). DB size is ~600 MB. The master Excel file, not including the source data, is just under 10 MB; including the source data increases the size to 60 MB (so I suppose Office 2007 automatically compresses the data?). I tried using the pivot table (drag-and-drop fields) and the performance so far is OK (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).

    So my questions are:

      - If we're looking at a full year of transactions (roughly 15 million rows in MSSQL 2008 Express, 3.6 GB in size), is there any issue with that many rows in 1 table in SQL Express?
      - Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
      - Any other suggestions on how we can better do this?
      - Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI that you can recommend?

    Thanks

    Read the article
