Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • Google App Engine Java app couldn't find javac?

    - by Frank
    I'm learning to use Google App Engine. I installed it in NetBeans and the project works, but when I clicked on "Deploy To Google App Engine" I got the following error:

        Beginning server interaction for ...
        0% Creating staging directory
        5% Scanning for jsp files.
        8% Compiling jsp files.
        11% Compiling java files.
        Error Details:
        Apr 20, 2010 3:51:23 PM org.apache.jasper.JspC processFile
        INFO: Built File: \PayPal_Monitor.jsp
        java.lang.IllegalStateException: cannot find javac executable based on java.home, tried "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
        Unable to update app: cannot find javac executable based on java.home, tried "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
        Please see the logs [C:\Users\NM\AppData\Local\Temp\appcfg3946701335172983337.log] for further information.

    The file "javac.exe" is in C:\Program Files (x86)\Java\jdk1.6.0_18\bin. How can I add it to "java.home"? I'm using Windows Vista, and I tried to add it from "System - Environment Variables", but there is no "java.home" in there. Where can I find it? Frank
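
    For what it's worth, java.home is not a Windows environment variable: it is a system property the JVM sets to the directory of the runtime it was launched from, so the deploy fails because NetBeans (and hence the App Engine deploy task) is running on the JRE rather than the JDK. One hedged fix is to point NetBeans at the JDK via netbeans_jdkhome in its config file (the install path below is an assumption; adjust to the actual NetBeans directory):

        # in <NetBeans install dir>\etc\netbeans.conf
        netbeans_jdkhome="C:\Program Files (x86)\Java\jdk1.6.0_18"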

    Read the article

  • How to detect if a file is PDF or TIFF?

    - by eviljack
    Please bear with me, as I've been thrown into the middle of this project without knowing all the background. If you've got WTF questions, trust me, I have them too.

    Here is the scenario: I've got a bunch of files residing on an IIS server. They have no file extension on them, just naked files with names like "asda-2342-sd3rs-asd24-ut57" and so on. Nothing intuitive. The problem is I need to serve up files on an ASP.NET (2.0) page and display the TIFF files as TIFF and the PDF files as PDF. Unfortunately I don't know which is which, and I need to be able to display them appropriately in their respective formats. For example, let's say there are 2 files I need to display, one TIFF and one PDF. The page should show up with a TIFF image, and perhaps a link that would open up the PDF in a new tab/window.

    The problem: as these files are all extension-less, I had to force IIS to just serve everything up as TIFF. But if I do this, the PDF files won't display. I could change IIS to force the MIME type to be PDF for unknown file extensions, but I'd have the reverse problem. http://support.microsoft.com/kb/326965

    Is this problem easier than I think, or is it as nasty as I am expecting?
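
    Since the bytes themselves are unambiguous, one workable approach is to sniff the first few bytes server-side and set the response MIME type accordingly. A minimal C# sketch (the method name is illustrative; it would live in whatever page or handler streams the files):

        // PDF files begin with "%PDF"; TIFF files begin with "II*\0"
        // (little-endian) or "MM\0*" (big-endian).
        using System.IO;

        static string DetectMimeType(string path)
        {
            byte[] h = new byte[4];
            using (FileStream fs = File.OpenRead(path))
            {
                if (fs.Read(h, 0, 4) < 4)
                    return "application/octet-stream"; // too short to tell
            }
            if (h[0] == 0x25 && h[1] == 0x50 && h[2] == 0x44 && h[3] == 0x46)
                return "application/pdf";              // "%PDF"
            if ((h[0] == 0x49 && h[1] == 0x49 && h[2] == 0x2A && h[3] == 0x00) ||
                (h[0] == 0x4D && h[1] == 0x4D && h[2] == 0x00 && h[3] == 0x2A))
                return "image/tiff";                   // "II*\0" or "MM\0*"
            return "application/octet-stream";         // unknown fallback
        }

    The page can then emit an <img> for image/tiff results and a new-window link for application/pdf ones, with no dependence on IIS's extension-based MIME mapping.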

    Read the article

  • Badly performing function in PHP: with large files, memory blows up! How can I refactor?

    - by André
    Hi, I have a function that strips out lines from files. I'm handling large files (more than 100 MB). I have the PHP memory limit at 256 MB, but the function that handles the stripping of lines blows up with a 100 MB CSV file.

    What the function must do is this. Originally I have the CSV like:

        Copyright (c) 2007 MaxMind LLC.  All Rights Reserved.
        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    When I pass the CSV file to this function I get:

        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files: it blows up the memory. The function is:

        public function deleteLine($line_no, $csvFileName) {
            // this function strips a specific line from a file
            // if a line is stripped, function returns TRUE, else FALSE
            //
            // e.g.
            // deleteLine(-1, xyz.csv); // strip last line
            // deleteLine(1, xyz.csv);  // strip first line

            // assign the file name
            $filename = $csvFileName;
            $strip_return = FALSE;
            $data = file($filename);
            $pipe = fopen($filename, 'w');
            $size = count($data);
            if ($line_no == -1) $skip = $size - 1;
            else $skip = $line_no - 1;
            for ($line = 0; $line < $size; $line++)
                if ($line != $skip)
                    fputs($pipe, $data[$line]);
                else
                    $strip_return = TRUE;
            return $strip_return;
        }

    Is it possible to refactor this function so it does not blow past the 256 MB PHP memory limit? Give me some clues. Best Regards,
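
    A refactor that keeps memory flat is to stream the file with fgets() and write the kept lines to a temporary file, instead of slurping everything into an array with file(). A minimal sketch, keeping the 1-based $line_no semantics above (the function name is illustrative, and the -1 "last line" case would need a line count or a second pass, which is omitted here):

        <?php
        // Streams the CSV one line at a time, so memory use does not grow
        // with file size; the temp file replaces the original on success.
        function deleteLineStreaming($line_no, $csvFileName)
        {
            $in  = fopen($csvFileName, 'r');
            $tmp = $csvFileName . '.tmp';
            $out = fopen($tmp, 'w');
            $stripped = false;
            $current = 0;
            while (($row = fgets($in)) !== false) {
                $current++;
                if ($current == $line_no) { // skip (strip) this line
                    $stripped = true;
                    continue;
                }
                fwrite($out, $row);
            }
            fclose($in);
            fclose($out);
            if ($stripped) {
                rename($tmp, $csvFileName); // swap in the rewritten file
            } else {
                unlink($tmp);               // nothing stripped, discard temp
            }
            return $stripped;
        }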

    Read the article

  • Deep linking in Excel sheets exported to HTML

    - by pomarc
    Hello everybody, I am working on a project where I must export a lot of Excel files to HTML. This is pretty straightforward using automation and saving as HTML. The problem is that many of these sheets have links to worksheets of other files, and I must find a way to write a link to a single inner worksheet.

    When you export a multi-sheet Excel file to HTML, Excel creates a main .htm file, a folder named filename_files, and inside this folder it writes several files: a CSS file, an XML list of files, a file that creates the tab bar, and several HTML files named sheetxxx.htm, each one representing a worksheet. When you open the main file, you can click the tab bar at the bottom to select the appropriate sheet. This is in fact a link which replaces a frame's content with the sheetxxx.htm file. When this file is loaded, a javascript function that selects the right tab gets called.

    The exported files will be published on a web site. I will have to post-process each file and replace every link to the other .xls files with a link to the matching .htm file, finding a way to open the right worksheet. I think I could add a parameter to the processed .htm file's link URL, such as myfile.htm?sh=sheet002.htm if I want to link to the second worksheet of myfile.htm (ex myfile.xls). After I've exported them, I could inject a simple javascript into each of the main files which, when loaded, could retrieve the sh parameter with jQuery (this is easy) and use it to somehow replace the frSheet frame contents (where the sheets get loaded), opening the right inner sheet instead of the default sheet (this is what I call deep linking), mimicking what happens when a user clicks on a tab. This last step is missing... :)

    I am considering different options, such as replacing the source of the frSheet frame after document.ready. I'd like to hear your advice on what could be the best way to realize this. Any help is greatly appreciated, many thanks.
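
    A minimal sketch of that missing last step, assuming the exported page really names the sheet frame "frSheet" as described above (the parameter parsing is plain javascript; jQuery only supplies the ready handler):

        $(document).ready(function () {
            // pull "sh" out of a URL like myfile.htm?sh=sheet002.htm
            var match = window.location.search.match(/[?&]sh=([^&]+)/);
            if (match) {
                // point the sheet frame at the requested worksheet page;
                // replace() avoids adding a history entry
                window.frames["frSheet"].location.replace(
                    decodeURIComponent(match[1]));
            }
        });

    Loading the sheet this way should also trigger the exported file's own tab-selection script, since it runs when the sheet page loads into the frame.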

    Read the article

  • Opening URL in New Tab doesn't work in existing, programmatically-opened New Window (Firefox)

    - by seth
    I am building a web app, for myself, to control some servers on my home network, and discovered what I think is very odd behavior in Firefox. If you open a pop-up via javascript in Firefox, is it then impossible to open a new tab, via javascript, from that pop-up? If not impossible, how do you do it?

    Given a clean, default Firefox 3.6.3 installation: if I open a page in Firefox and then call

        var my_window = window.open('http://www.google.com','_blank','top=10');

    a brand new "pop-up" window opens. However, if instead I call

        var my_window = window.open('http://www.google.com');

    I get a new tab. HOWEVER... if I call the first version

        var my_window = window.open('http://www.google.com','_blank','top=10');

    and then, in the new "pop-up" that opens, I call

        var my_window = window.open('http://www.google.com');

    it opens a new tab in the original window, not a new tab in the pop-up. This seems very odd, and not intuitive at all. Why would the call in the pop-up open a tab in the "parent" window?

    Read the article

  • Access denied from another thread

    - by Lobuno
    Hello! In a program I spawn a thread (the "working thread"). Here I copy some files, write some data to a database and, eventually, delete some other files or directories. Everything works fine.

    The problem is that I decided to move the deleting operation to another thread. So the working thread now copies the files or directories and writes to the database, and, if some other files need to be deleted, it spawns a second thread, and that second thread deletes the needed files or directories.

    The problem is that the deletion used to work 100% of the time when done in the working thread; now that the same is done in the secondary thread, I sometimes get an "Access denied" error and the files cannot be deleted. And no, the working thread is definitely NOT accessing the files and directories to be deleted at that moment. Sometimes (but not always) the main thread is impersonating some user, so when needed the deleting thread also runs under impersonation, just to grant the permissions needed to delete the files; so that should not be the problem.

    Anybody has a clue why this could be happening?

    Read the article

  • SQL query to calculate running group counts on time-phased data

    - by spong
    I have some data, like this:

        BUG   DATE                    STATUS
        ----  ----------------------  --------
        9012  18/03/2008 9:08:44 AM   OPEN
        9012  18/03/2008 9:10:03 AM   OPEN
        9012  28/03/2008 4:55:03 PM   RESOLVED
        9012  28/03/2008 5:25:00 PM   CLOSED
        9013  18/03/2008 9:12:59 AM   OPEN
        9013  18/03/2008 9:15:06 AM   RESOLVED
        9013  18/03/2008 9:16:44 AM   CLOSED
        9014  18/03/2008 9:17:54 AM   OPEN
        9014  18/03/2008 9:18:31 AM   RESOLVED
        9014  18/03/2008 9:19:30 AM   CLOSED
        9015  18/03/2008 9:22:40 AM   OPEN
        9015  18/03/2008 9:23:03 AM   RESOLVED
        9015  19/03/2008 12:27:08 PM  CLOSED
        9016  18/03/2008 9:24:20 AM   OPEN
        9016  18/03/2008 9:24:35 AM   RESOLVED
        9016  19/03/2008 12:28:14 PM  CLOSED
        9017  18/03/2008 9:25:47 AM   OPEN
        9017  18/03/2008 9:26:02 AM   RESOLVED
        9017  19/03/2008 12:30:30 PM  CLOSED

    which I would like to transform into something like this:

        DATE                    OPEN  RESOLVED  CLOSED
        ----------------------  ----  --------  ------
        18/03/2008 9:08:44 AM      1         0       0
        18/03/2008 9:12:59 AM      2         0       0
        18/03/2008 9:15:06 AM      1         1       0
        18/03/2008 9:16:44 AM      1         0       1
        18/03/2008 9:17:54 AM      2         0       1
        18/03/2008 9:18:31 AM      1         1       1
        18/03/2008 9:19:30 AM      1         0       2
        18/03/2008 9:22:40 AM      2         0       2
        18/03/2008 9:23:03 AM      1         1       2
        18/03/2008 9:24:20 AM      2         1       2
        18/03/2008 9:24:35 AM      1         2       2
        18/03/2008 9:25:47 AM      2         2       2
        18/03/2008 9:26:02 AM      1         3       2
        19/03/2008 12:27:08 PM     1         2       3
        19/03/2008 12:28:14 PM     1         1       4
        19/03/2008 12:30:30 PM     1         0       5
        28/03/2008 4:55:03 PM      0         1       5
        28/03/2008 5:25:00 PM      0         0       6

    i.e. keeping running counts of bugs with each status. This is easy enough to code up using cursors, but I'm wondering if any of you SQL gurus out there can help with a query to achieve this? Ideally for MySQL, but I'm curious to see anything that will work.
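
    For reference, on engines with window functions (MySQL 8.0+, PostgreSQL, SQL Server, Oracle; older MySQL would need a cursor or self-join instead), each row can be treated as a +1 for its new status and a -1 for that bug's previous status, and running sums over time then give the three columns. A sketch, assuming a table bug_history(bug, date, status):

        WITH events AS (
            SELECT bug, date, status,
                   LAG(status) OVER (PARTITION BY bug ORDER BY date) AS prev_status
            FROM bug_history
        )
        SELECT date,
               SUM(CASE WHEN status = 'OPEN' THEN 1 ELSE 0 END
                 - CASE WHEN prev_status = 'OPEN' THEN 1 ELSE 0 END)
                   OVER (ORDER BY date) AS open_count,
               SUM(CASE WHEN status = 'RESOLVED' THEN 1 ELSE 0 END
                 - CASE WHEN prev_status = 'RESOLVED' THEN 1 ELSE 0 END)
                   OVER (ORDER BY date) AS resolved_count,
               SUM(CASE WHEN status = 'CLOSED' THEN 1 ELSE 0 END
                 - CASE WHEN prev_status = 'CLOSED' THEN 1 ELSE 0 END)
                   OVER (ORDER BY date) AS closed_count
        FROM events
        ORDER BY date;

    The duplicate OPEN row for bug 9012 nets to zero and simply repeats the previous counts, so it can be filtered out afterwards if unwanted.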

    Read the article

  • Unresolved token/symbol in MC++ wrapper class calling native code

    - by rediVider
    I'm new to MC++ and have basically no idea what's going on yet. In trying to get this working I've determined many things that don't work; I'm just looking for one of the ways that will work.

    I have an MC++ class as follows, which seems to have to be a "ref" class to allow me to see any methods/properties:

        public ref class EmCeePlusPlus {
            static void Open(void) {
                Testor* t = new Testor();
                Testor::Open();
            };
        };

        extern public class Testor {
        public:
            Testor() { };
            static void Open(void) {
                int x = 3;
                int xx = cli_lock(x);
            };
        };

    Now, the only reason I created the class Testor, and moved the call to cli_lock into it, is that I was getting an unresolved external symbol error if I put the same call in the ref class. With this current code, however, I get an unresolved token error and an unresolved symbol error ONLY if I have the call to Testor::Open(). If that line is commented out, it compiles fine. As it is, I get the errors below. cli_lock() is native code that is called externally by other native DLLs with no problems. Any ideas where I should be looking?

        error LNK2028: unresolved token (0A000056) "extern "C" int __cdecl cli_lock(int)" (?cli_lock@@$$J0YAHH@Z) referenced in function "public: static void __cdecl Giga::Testor::Open(void)" (?Open@Testor@Giga@@$$FSAXXZ)
        error LNK2019: unresolved external symbol "extern "C" int __cdecl cli_lock(int)" (?cli_lock@@$$J0YAHH@Z) referenced in function "public: static void __cdecl Giga::Testor::Open(void)" (?Open@Testor@Giga@@$$FSAXXZ)
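
    For what it's worth, the mangled name in the errors shows the /clr compiland sees an extern "C" declaration of cli_lock but the linker never receives a native definition for it, so the usual places to look are the declaration and the linker inputs. A hedged sketch (the .lib name is hypothetical; use whatever import library actually exports cli_lock):

        // Declaration the managed code compiles against; it must match the
        // native DLL's export exactly (extern "C" linkage, __cdecl, int arg).
        extern "C" int __cdecl cli_lock(int handle);

        // Hand the import library to the linker, either in the project's
        // Linker > Input settings or inline like this:
        #pragma comment(lib, "cli_native.lib")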

    Read the article

  • Generating a ul from an array with PHP fails

    - by Toni Michel Caubet
    Given a $files array, I am trying to generate a list, like this:

        <ul class="">
        <? for($i=0;$i < count($files); $i++) {
            $text = str_replace("http://domain.com/files/uploads/", "", $files[$i]);
            $file = $files[$i];
            $notme = sesion()>0 && $obj['id'] != sesion();
        ?>
            <li>
                <a target="_blank" href="<?=$file?>"><?=$text?></a>
                <? if($notme){ ?>
                    <span class="add_to_files" data-file="<?=$file?>">Agregar al gestor</span>
                <? } ?>
            </li>
        <? } ?>
        </ul>

    This is the output. Please note how the span HTML is wrong:

        <ul class="">
            <li>
                <a href="http://domain.com/files/uploads/388400967232883.jpg" target="_blank">388400967232883.jpg</a>
                <span target="_blank" href="http://domain.com/files/uploads/388400967232883.jpg" url"="" data-file="&lt;a class=" class="add_to_files">http://domain.com/files/uploads/388400967232883.jpg"&gt;Agregar al gestor</span>
            </li>
        </ul>

    Any idea why?
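
    The mangled output is itself a clue: the span's data-file attribute appears to receive a whole <a class="url" ...> tag, which suggests the $files entries may already contain anchor markup rather than bare URLs. A hedged sketch of how to defend the attribute under that assumption: reduce each entry to plain text and escape it before echoing it into the attribute:

        <?php
        // Strip any markup from the entry, then escape it for attribute
        // context; if $files really held bare URLs, both calls are no-ops.
        $fileAttr = htmlspecialchars(strip_tags($files[$i]), ENT_QUOTES);
        ?>
        <span class="add_to_files" data-file="<?= $fileAttr ?>">Agregar al gestor</span>

    If the escaped output then shows anchor markup as literal text, that confirms the array needs to be populated with plain URLs upstream.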

    Read the article

  • "BAD ARCHIVE MIRROR" error using the PXE boot method

    - by omkar
    I'm trying to automatically install Ubuntu on a client PC using the PXE boot method; I'm following the steps given in this link: installation using PXE BOOT INSTALL. My objectives are:

    1. The server will have a kickstart config file which contains the parameters for the OS installation and the files which are required for the installation.
    2. The client will have to detect this configuration along with the setup files and complete the installation without any input from the user.

    On my server I have installed dhcp3-server, apache2 and tftp to help with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to "CHOOSE A MIRROR of UBUNTU ARCHIVE". I gave the server's IP address and the path on the server where the files are located, but then too it gives me the error "BAD ARCHIVE MIRROR".

    So, is it possible that instead of downloading all the files from the internet and storing them on my disk, I can use the files which come with the Ubuntu CD? And how should I store these files on the disk, in what format (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS, so how should the configuration file be given to the installation process?
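
    For the mirror step, one approach that may work is to copy the CD's contents unzipped, keeping its directory layout (the dists/ and pool/ folders) intact, into Apache's web root, and point the installer at that as a plain HTTP mirror. A hedged sketch of the preseed side, assuming the CD was copied to /var/www/ubuntu on a server at 192.168.1.1 (Ubuntu's installer reads these d-i keys even when a kickstart file is also used):

        d-i mirror/country string manual
        d-i mirror/http/hostname string 192.168.1.1
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string

    "BAD ARCHIVE MIRROR" generally means the installer fetched http://host/directory/dists/<suite>/Release and did not find it, so opening that URL in a browser is a quick way to validate the layout.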

    Read the article

  • MaxStartups and MaxSessions configuration parameters for SSH connections?

    - by Webby
    I am copying files from machineB and machineC onto machineA, where my below shell script runs. If a file is not on machineB then it should be on machineC for sure, so I try copying from machineB first, and if it is not there I try the same file from machineC. I am copying the files in parallel using the GNU Parallel library and it is working fine. Currently I am copying 10 files in parallel. Below is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
            el=$1
            PRIMSEC=$2
            scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. \
              || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait
        echo "All files copied."

    Problem statement: with the above script, at some point I get this exception:

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp connections starting at the same time, which leads me to believe /etc/ssh/sshd_config's MaxStartups and MaxSessions are set too low. But my question is: on which server are they too low, machineB and machineC or machineA? And on which machines do I need to increase the numbers?

    On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
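
    Since sshd_config configures the SSH daemon, the limits that matter are on the machines receiving the connections, i.e. machineB and machineC: with the commented defaults at 10, the two parallel jobs (both of which try machineB first) can easily exceed the cap on concurrent unauthenticated connections. A sketch of the change on machineB and machineC (values are illustrative; reload sshd afterwards):

        # /etc/ssh/sshd_config on machineB and machineC
        MaxStartups 50:30:100
        MaxSessions 50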

    Read the article

  • JAWStats statspath error on Windows

    - by crosenblum
    I have AWStats, which works fine, and JAWStats, which I am trying to get working. I have tried back and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of Program Files, in case it had problems with folder names containing spaces. Here is my config file (domain names changed to protect the innocent):

        // core config parameters
        $sConfigDefaultView = "thismonth.all";
        $bConfigChangeSites = true;
        $bConfigUpdateSites = true;
        $sUpdateSiteFilename = "xml_update.php";

        // individual site configuration
        // awstats092012.noname.jumpingcrab.com.txt
        $aConfig["site1"] = array(
            "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
            "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
            "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
            "siteurl"    => "http://yourexample.com",
            "theme"      => "default",
            "fadespeed"  => 250,
            "password"   => "",
            "includes"   => ""
        );

    Here is the error message:

        An error has occured: No AWStats Log Files Found
        JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
        Is this the correct folder? Is your config name, site1, correct?
        Please refer to the installation instructions for more information.

    Read the article

  • Using a Dropbox / symbolic link combo successfully

    - by wim
    In the past I have kept some files in Dropbox by copying them into my ~/Dropbox folder on Ubuntu; I don't want to move the original files into the Dropbox sync folder or muck around with my directory structure. Then I found I was using Dropbox more and more, and wasting a lot of space this way through duplication of data. I use a small SSD locally for the OS; any other data is kept on mounted shares from my NAS.

    I found I could successfully get files up to the cloud by using symbolic links like:

        ln -s /some/mounted/share/dir ~/Dropbox/dir

    Dropbox would then carry on and sync those files remotely while only using up the space of the symbolic link locally. This worked well for me for a few weeks, until I turned on my laptop one day and saw a "421 files have been removed from your dropbox" notification. They were still there in the original mounted share, but the symbolic links I'd made were completely gone for some reason.

    What did I do wrong? It is possible the share had become unmounted, but I didn't expect this would cause all my files to be deleted from the cloud. Could it? How can I "share" files on my Dropbox in this way without the danger of the originals being removed remotely?

    Read the article

  • Synchronization between folders on Mac OS X Lion

    - by Andre Carvalho
    I have an iMac at home and I use a MacBook Pro for work. I also have a Time Capsule at home containing my main folder with my main files; I use it as a NAS besides using it with the Time Machine backup tool. I have several personal files I need to access both at home and at work. My wife, who works at home, sometimes uses the same .XLS and .DOC files that I might have used during my day at work, away from home.

    My question is: is there a software tool I can use to sync my iMac and my MacBook Pro folders? Bearing in mind that:

    1. There is a chance that my wife and I have changed the same files during the day, so the files would have to be merged so that no information added by either of us is lost.
    2. The tool installed on the MacBook Pro would need to mount the Time Capsule volume so it can locate the main folder on it.
    3. It has to run automatically when the MacBook is at home (with a schedule option).

    I have tested programs like synctwofolders and ChronoSync, but neither fulfilled all my needs. The first couldn't mount the Time Capsule volume and didn't have the many schedule options. I really liked ChronoSync, but it doesn't merge the files: when it detects a conflict (for instance, my wife changed a .DOC file on the iMac and I changed the same file on the MacBook), it asks you to choose which version to keep instead of simply allowing you to merge them.

    I don't have much experience with Automator or scripts, but maybe you can give me a hand with that.

    Read the article

  • NIS user not being added to NIS group

    - by Brian
    I have set up a NIS server and several NIS clients. I have a user and a group on the NIS server like so:

        /etc/passwd:
        myself:x:5000:5000:,,,:/home/myself:/bin/bash

        /etc/group:
        fishy:x:3001:otheruser,etc,myself,moreppl

    I imported the users and groups on the NIS client by adding +:::::: to /etc/passwd and +::: to /etc/group. I can log in to the NIS client, but when I run groups, fishy is not listed. But getent group fishy shows that it was imported correctly and lists me as a member. And if I do sudo su - myself, then suddenly groups says I am in the group!

    I also had nscd installed, and the groups worked correctly for a while. It seemed like after being logged in for a while, I would silently be dropped out of the group. If I restarted nscd and logged in again, then the groups worked correctly... for a while. There are no UID or GID conflicts with local users or groups.

    Update: contents of /etc/nsswitch.conf:

        passwd:    compat
        group:     compat
        shadow:    compat
        hosts:     files nis dns
        networks:  files
        protocols: db files
        services:  db files
        ethers:    db files
        rpc:       db files
        netgroup:  nis
        aliases:   nis files

    Read the article

  • Opening port 80 in the router has no effect

    - by Ricardo Pieper
    A friend of mine has an ADSL modem and I need to forward some ports. I have already forwarded port 1521 (Oracle) and it's working fine. Now I need to forward port 80. I have already set up his IIS bindings to this port, and also forwarded the port like this video shows: https://www.youtube.com/watch?v=DLKD-fyexoo

    So I think I did everything correctly. The local IP address is also the same as the machine where the IIS server is running. (I'm sorry, but I can't post images since I don't have 10 points.)

    Somehow I can't forward this port: yougetsignal.com keeps saying that the port is closed. When I try to open the port, the control panel tells me that I will have to access the control panel on port 8080, because port 80 will be open. OK, that's fine. But I'm still able to access it on port 80, and when I try to access it on port 8080, it doesn't work. I'm trying it with the TP-Link 8816, and I also tried to open it on the Opticom DsLink 279 (using another machine) and got the exact same results.

    He has a dynamic IP address, but he is also using No-IP, so I can always access his Oracle database at a certain fixed address; the 1521 port is open. I also tried disabling the firewall in Windows, but that makes no sense to me, since the router doesn't really open port 80. Clearly I'm missing something. I have never done this before, so I don't know how to proceed. Restarting the router was the first thing I did; no results. I'm accessing his laptop through TeamViewer, so I'm testing the port from outside his local network.

    Edit: my ISP says that they allow opening ports, and the 1521 port is open. What could I do to open port 80?

    Read the article

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying: file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many that I discover weekly.

    Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still in Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So after careful consideration of the tremendous risks that the HTML files pose to me, I chmod them so that I and Apache can begin using them.

    Now granted, the chmod process isn't that hard, but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-by-day basis.

    So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
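
    Since the locked-down copies originate on the mounted Windows partition, one angle worth trying is mounting that partition with ownership options so everything read from it already belongs to your user. A minimal /etc/fstab sketch, assuming the Windows partition is /dev/sda1 and your uid/gid are 1000 (check with the id command):

        # NTFS carries no Unix ownership, so ntfs-3g lets you pick it at mount time
        /dev/sda1  /mnt/windows  ntfs-3g  uid=1000,gid=1000,umask=022  0  0

    With umask=022, files copied off the mount arrive owned by you, readable by everyone (including Apache) and writable by you, which removes most of the per-file chmod round trips.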

    Read the article

  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting

        Include /private/etc/apache2/extra/httpd-ssl.conf

    in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include:

        DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs"
        ServerName myserver.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem"
        SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Then I restart Apache (going to Start - All Programs - Apache Server 2.2 - Control - Restart) and go to localhost on port 443 in Firefox, where I get a directory listing ("Index of /", "Links/", ...). But on display of the web page I see:

        Unable to connect
        Firefox can't establish a connection to the server at localhost.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.
        * If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

    I read "Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?" and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error:

        The Apache2.2 service is running.
        (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem

    I checked the permissions for cert.pem: all the permissions (full control, read, read and modify, execute, write) are marked for Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
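
    The "(OS 5) Access is denied" line is the server process being refused, not the Admin login: the Apache2.2 service reads cert.pem under its own service account, which may have no ACL entry on the new files even though Admin does. A hedged sketch, assuming the service runs as Local Service (check the service's "Log On" tab and substitute the real account):

        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant "NT AUTHORITY\LOCAL SERVICE:R"
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\key.pem" /grant "NT AUTHORITY\LOCAL SERVICE:R"

    Comparing the ACLs of oldcert.pem (which works) and cert.pem with icacls should show the missing entry directly.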

    Read the article

  • How to simply remove everything from a directory on Linux

    - by Tometzky
    How do I simply remove everything from the current or a specified directory on Linux? Several approaches:

        rm -fr *
        rm -fr dirname/*

    Does not work — it will leave hidden files (the ones that start with a dot) and files starting with a dash in the current dir, and will not work with too many files.

        rm -fr -- *
        rm -fr -- dirname/*

    Does not work — it will leave hidden files and will not work with too many files.

        rm -fr -- * .*
        rm -fr -- dirname/* dirname/.*

    Don't try this — it will also remove the parent directory, because ".." also starts with a ".".

        rm -fr * .??*
        rm -fr dirname/* dirname/.??*

    Does not work — it will leave files like ".a", ".b", etc., and will not work with too many files.

        find -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr
        find dirname -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -fr

    As far as I know correct, but not simple.

        find -delete
        find dirname -delete

    AFAIK correct for the current directory, but used with a specified directory it will delete that directory also.

        find -mindepth 1 -delete
        find dirname -mindepth 1 -delete

    AFAIK correct, but is it the simplest way?

    Read the article

  • Do you know of reputable backup software that can capture ONLY file system structure + attributes, WITHOUT file content?

    - by bogdan
    Is there, on Windows, reputable backup software capable of capturing ONLY a file system's directory and file structure, along with each item's attributes, WITHOUT capturing the actual file content (all files should be zero-length in the backup)? I thoroughly searched the web for a solution and wasn't able to find one.

    A scenario where this would be very useful: I have a large drive with a huge number of files. If the drive dies, I don't care much about the content of these files (I can always download it again from the internet at any time), but I do care HUGELY about the names of the files that were on it, possibly also about their MD5 hashes and other classic file attributes (especially created date / modified date).

    The functionality I need is present to an extent in media/file cataloging software (i.e. WhereIsIt) and, to a lesser extent, in a Total Commander set of extensions (DiskDir, DiskDirExtended). The huge drawback of cataloging software is that it's not designed to store previous versions of each item (AFAIK) and, most importantly, it has very weak content backup capabilities.

    I managed to think of a hack, but I hope there's some backup software out there that already has this capability and I just failed to find it, thus this question. The hack: RoboCopy could be used with /CREATE (CREATE directory tree and zero-length files only) or /COPY (what to COPY for files) without the D=Data flag, to clone a directory structure into one where all files are zero-length but have the desired attributes. Then I would back up the cloned directory structure with a reputable backup software. I would really love to avoid a hack like this one, if possible. Thanks, Bogdan
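
    For reference, the hack spelled out as commands (paths are placeholders; /E recurses into subdirectories, and /COPY:AT copies attributes and timestamps but no data, so every destination file ends up zero-length):

        robocopy D:\bigdrive E:\skeleton /E /CREATE
        rem or, to the same effect via the /COPY flags:
        robocopy D:\bigdrive E:\skeleton /E /COPY:AT

    The skeleton tree can then be handed to any ordinary backup tool with versioning, which covers the "previous versions" requirement that the cataloging tools lack.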

    Read the article

  • Is it possible to change "working directory" of XeTeX?

    - by Herbert Sitz
    Using XeTeX, many working files get created in the process of producing the PDF, and they litter the directory where my main .tex file is. Is it possible to change the working directory of XeTeX so that it stores all these scratch files in some other directory, out of the way?

    There is a previous question on superuser.com that discusses a utility that cleans up the working files by deleting them after they're produced: http://superuser.com/questions/95712/how-to-avoid-littering-ones-tex-directories-with-intermediate-files. That solution doesn't work for me since I'm using XeTeX, but it also seems like it would be preferable to simply designate a "scratch" directory where all working files are saved. I haven't been able to find any info on how to do that, though. Is there a way?

    (My question is prompted partly by the fact that I often work with files in a directory that is shared using Dropbox, so it creates a lot of unnecessary traffic if files are getting created and destroyed willy-nilly. I don't know if it affects speed in any way, but the idea of having a separate working directory that is not shared/replicated by Dropbox would be a cleaner solution, even if I could use the method suggested in the earlier thread.)
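
    For what it's worth, the XeTeX binaries shipped with TeX Live and MiKTeX accept an output-directory option that redirects the .aux/.log/.toc clutter (and the output PDF) to another folder, which behaves like the scratch directory described above (the directory name is just an example):

        mkdir build
        xelatex -output-directory=build main.tex

    The PDF also lands in build/, so it either needs to be copied back or viewed from there; a build/ directory excluded from Dropbox sync would also remove the unnecessary traffic.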

    Read the article

  • Issue installing PostgreSQL 9.3.4 on Windows Server 2003 x64

    - by randydom
    Hello, I really did everything I know to install PostgreSQL 9.3.4 on my Windows 2003 Server x64, but I'm always stopped by this error (screenshot): http://oi57.tinypic.com/s4tb8i.jpg

    I really don't know what to do. If I click OK, then when I go to the Windows services list I don't find the PostgreSQL service, so I can't start the service. Can anyone please help me to install it correctly? PS: I've followed all the steps in wiki.postgresql.org/wiki/Troubleshooting_Installation. Many thanks.

    Here's the installer log, where I get "Failed to initialise the database cluster with initdb":

        Called IsVistaOrNewer()...
        'winmgmts' object initialized...
        Version:5.2 MajorVersion:5
        Ensuring we can write to the data directory (using cacls):
        Executing batch file 'rad22ADE.bat'...
        processed dir: C:\Program Files\PostgreSQL\9.2\data
        Executing batch file 'rad22ADE.bat'...
        The files belonging to this database system will be owned by user "Administrator".
        This user must also own the server process.
        The database cluster will be initialized with locale "English_United States.1252".
        The default text search configuration will be set to "english".
        fixing permissions on existing directory C:/Program Files/PostgreSQL/9.2/data ...
        initdb: could not change permissions of directory "C:/Program Files/PostgreSQL/9.2/data": Permission denied
        Called Die(Failed to initialise the database cluster with initdb)...
        Failed to initialise the database cluster with initdb
        Script stderr: Program ended with an error exit code
        Error running cscript //NoLogo "C:\Program Files\PostgreSQL\9.2/installer/server/initcluster.vbs" "NT AUTHORITY\NetworkService" "postgres" "****" "C:\Program Files\PostgreSQL\9.2" "C:\Program Files\PostgreSQL\9.2\data" 5432 "DEFAULT" 0 : Program ended with an error exit code
        Problem running post-install step. Installation may not complete correctly
        The database cluster initialisation failed.
        Creating Uninstaller
        Creating uninstaller 25%
        Creating uninstaller 50%
        Creating uninstaller 75%
        Creating uninstaller 100%
        Installation completed
        Log finished 05/02/2014 at 04:04:04
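
    The key line in the log is initdb, running as "NT AUTHORITY\NetworkService", failing with "could not change permissions of directory ... Permission denied". One thing worth trying before re-running the installer is granting that account full control of the data directory (path taken from the log; icacls is available from Server 2003 SP2, cacls with similar arguments otherwise):

        icacls "C:\Program Files\PostgreSQL\9.2\data" /grant "NT AUTHORITY\NetworkService:(OI)(CI)F" /T

    Deleting any half-initialized data directory between attempts also helps, since initdb refuses to touch a non-empty directory it cannot fix up.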

    Read the article

  • Updating and deleting Java (Red Hat / CentOS)

    - by JochemTheSchoolKid
    I am a total noob with Linux, so please explain clearly if you have a solution for me. I have a VPS and I want to update Java. I found a guide on the Java site which says:

        rpm -e <package_name>

    I searched for the packages:

        [root@srv1 ~]# rpm -qa | grep java
        java_cup-0.10k-5.el6.x86_64
        java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

    Then I tried the delete command:

        [root@srv1 ~]# rpm -e java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
        error: Failed dependencies:
               java-gcj-compat is needed by (installed) java_cup-1:0.10k-5.el6.x86_64
               java-gcj-compat >= 1.0.70 is needed by (installed) sinjdoc-0.5-9.1.el6.x86_64

    What should I do now? (Removing has worked, thanks to the answer below.)

    Problem two: now I am installing this package from Java:

        [root@srv1 java]# rpm -ivh jre-7u9-linux-i586.rpm
        Preparing...                ########################################### [100%]
           1:jre                    ########################################### [100%]
        Unpacking JAR files...
                rt.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/rt.pack
                jsse.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/jsse.pack
                charsets.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/charsets.pack
                localedata.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/ext/localedata.pack
                plugin.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/plugin.pack
                javaws.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/javaws.pack
                deploy.jar...
        Error: Could not open input file: /usr/java/jre1.7.0_09/lib/deploy.pack

    Can someone help me again with this?

    Read the article

  • Photoshop CS4: How do I make sure my color stays the same in my different .psd files? (could be an RGB issue)

    - by Kris
    I asked 2 Photoshop experts I know, but they haven't got a clue, because all my settings are exactly the same in both files (except possibly the RGB type; I'm not sure, please read on). I use RGB color, 72 DPI, 8 bits/channel. No adjustments (filters, like greyscale, etc.) are selected/used. The layers are both normal, and opacity and fill are 100% (yes, in both files).

    I took two screenshots, and you can see the difference:

        http://www.flickr.com/photos/30465871@N05/4623864297/
        http://www.flickr.com/photos/30465871@N05/4624469754/

    Both colors are ff795d, but that doesn't matter; any color I use gives me the same problem: they both look different. Now, I know the CMYK settings (see screenshots) are different, but when those settings are the same the color still changes. Why is this happening and how do I solve this problem?

    My guess is I'm working with a different type of RGB. It's sRGB IEC61966-2.1 in the file I created (File Info raw data), but I can't find that in the file that started with a screenshot. If that's the problem, how do I change/convert the RGB once the file is already open? Thank you.

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on network-attached storage on Windows 7, so that the files are available in Windows Search and Libraries? I am referring to cheap and widely available NAS devices like the Western Digital My Book series that use an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html

    EDIT: Windows Help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS holds more data than the client can store:

        If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library.

        If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library.

        When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files.

    Read the article
