Search Results

Search found 69002 results on 2761 pages for 'file rename'.

  • Installing gnome on Linode with Ubuntu 9.10 x64 - remote VNC/RDP

    - by Kieran Benton
    Hi, I'm a self confessed Linux newbie, having lived and worked mostly within the Windows world for most of my life. I'm making the effort to try moving my virtual host from a Windows box to a Linode instance to try and better learn Linux, and one of the uses I occasionally have with my current Windows VPS is to RDP into it and browse the internet. I'm aware that this is probably not best practice (from either performance or security), and most of the time I will be learning from the shell, but I do occasionally need to boot into a GUI. Because of this, I'd like the ability within my Ubuntu installation on Linode to start/stop Windows X and Gnome at will after SSHing in (startx? gdm?), so I've tried: apt-get install ubuntu-desktop Reboot startx But I've got an error that no amount of googling has helped me with so far, which I'm assuming is something to do with the fact the box is headless and X needs some more configuration that is beyond me at the moment: root@local:~# startx hostname: Unknown host xauth: creating new authority file /root/.Xauthority xauth: creating new authority file /root/.Xauthority xauth: (argv):1: bad display name "local.kieranbenton.com:0" in "list" command xauth: (stdin):1: bad display name "local.kieranbenton.com:0" in "add" command X.Org X Server 1.6.4 Release Date: 2009-9-27 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-23-server x86_64 Ubuntu Current Operating System: Linux local.kieranbenton.com 2.6.31.5-x86_64-linode9 #1 SMP Mon Oct 26 19:35:25 UTC 2009 x86_64 Kernel command line: root=/dev/xvda xencons=tty console=tty1 console=hvc0 nosep nodevfs ramdisk_size=32768 ro Build Date: 26 October 2009 05:19:56PM xorg-server 2:1.6.4-2ubuntu4 (buildd@) Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Wed Dec 2 15:50:23 2009 Primary device is not PCI (==) Using default built-in configuration (21 lines) (EE) open /dev/fb0: No such file or directory (EE) No devices detected. Fatal server error: no screens found Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Can anyone give me any pointers as to how to go from here and get VNC/RDP setup? (RDP would be preferred?). Thanks.
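
    One commonly suggested route on a headless VPS (a hedged sketch, not verified against this exact Linode image; the package names are Ubuntu 9.10-era guesses and may differ) is to skip the local X server entirely and run GNOME inside a VNC session, with xrdp in front of it if RDP is preferred:

      # install a VNC server plus xrdp (for RDP access)
      apt-get install vnc4server xrdp

      # the first run creates ~/.vnc/xstartup and asks for a session password;
      # no /dev/fb0 or PCI GPU is needed, so the "no screens found" error never applies
      vncserver :1 -geometry 1024x768 -depth 16

      # crude way to make the session start GNOME: append it to the startup script,
      # then restart the session so the change takes effect
      echo "gnome-session &" >> ~/.vnc/xstartup
      vncserver -kill :1
      vncserver :1 -geometry 1024x768 -depth 16

    xrdp listens on port 3389 and, in its default setup, simply hands an RDP client off to a VNC session, which covers the "RDP preferred" part without any X configuration on the host itself.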

    Read the article

  • uploading php files into my root folder and sending spam

    - by Mustafa Oenal
    I do not understand how, but someone keeps uploading a PHP file such as statisticsuQPo.php into the public_html directory of my CentOS 6 server. The file outputs "linux10+cfcd208495d565ef66e7dff9f98764da" and it is sending spam mail without end. I have removed the file maybe 10 times, but it comes back every day. How can I solve this problem? Is there anything wrong with my Apache configuration?
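
    Deleting the dropped file only treats the symptom; the usual advice is to find the upload vector first, since an outdated web application or an unauthenticated upload form will keep recreating it. A rough diagnostic sketch (paths assume a stock CentOS 6 / Apache layout and are otherwise assumptions):

      # PHP files changed in the last two days anywhere under the likely web roots
      find /var/www /home/*/public_html -name '*.php' -mtime -2 -ls 2>/dev/null

      # which requests created or drove the dropper?
      grep -i 'statistics.*\.php' /var/log/httpd/access_log* | grep POST | tail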

    Read the article

  • an attempt was made to logon, but the network logon service was not started

    - by RodH257
    We've recently had a catastrophic RAID failure on our servers, which were being backed up with ShadowProtect. After 3 days of copying I finally got our file server back in a VM. As we used a VirtualBoot for the file server in the meantime, I effectively had two copies of the server on the network at once. In order to copy back the files that changed, I tried to rename the file server and change its IP address (I should also mention the file server is a backup DC). When I renamed it, it came up with an error, so I rebooted. Now I can't log in; it says "an attempt was made to logon, but the network logon service was not started". I don't care if I have to recreate the VM and reinstall Windows, but I would like to be able to get the files off this VM. How can I get access to it?
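
    One hedged way in, since the box is a backup DC and Netlogon refuses to start: reboot the VM, press F8, choose Directory Services Restore Mode, and log on with the local DSRM administrator password set when the DC was promoted; that logon does not depend on the Netlogon service. From there the changed files can be pushed to the rebuilt server, roughly like this (share, drive and server names are placeholders):

      rem try restarting the service first; it may still fail on a renamed DC,
      rem but the DSRM logon works regardless
      net start netlogon

      rem copy only the files that changed, preserving security descriptors
      robocopy D:\Shares \\NEWSERVER\Shares /E /XO /COPY:DATSO /R:1 /W:1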

    Read the article

  • Shorten Emacs timeout of ~/.emacs read

    - by user35042
    My ~/.emacs start-up file is stored in my AFS home directory. Often when I log in to a Linux machine I will forget to renew my AFS credentials before attempting to edit a local (non-AFS) file with Emacs. When this happens Emacs will attempt to load ~/.emacs but cannot, because it is in AFS space where I do not have access. Eventually (after a minute or so) Emacs will give up trying to load ~/.emacs and continue. But waiting for Emacs to time out is annoying (typing Ctrl-Z does not seem to interrupt this timeout). I want to shorten the amount of time that Emacs waits before giving up. I have tried the suggestion at this site, which says to put the following code in the site-start.el file: (with-timeout (4) (load remote-.emacs)) However, when I do that I get the error Error in init file: Symbol's value as variable is void: remote-\.emacs whenever starting Emacs. How can I shorten this timeout?
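
    The quoted snippet fails because remote-.emacs is read as a (nonexistent) variable; load needs a string. A minimal sketch for site-start.el, with the AFS path as an example that would have to be adjusted:

      ;; give up on the AFS-hosted init file after 4 seconds (path is an example)
      (with-timeout (4 (message "AFS credentials expired; skipping ~/.emacs"))
        (load "/afs/example.org/user/j/jdoe/.emacs" 'noerror))

    Note that with-timeout can only interrupt the load at points where Emacs checks for quits, so a hard AFS hang inside a blocking system call may still take longer than the four seconds; interrupting by hand is subject to the same limitation.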

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an ssh tunnel from my office machine to my home machine, and get an error when I try to use it. What I'm doing is starting one shell like so: ssh -gL 12345:my.home.domain:22 my.home.domain This is giving me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so: ssh -p 12345 127.0.0.1 This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open up my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error: ssh_exchange_identification: Connection closed by remote host Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it: channel 3: open failed: connect failed: Connection timed out At which point, I'm at a loss. If any more info is needed, I'll be happy to post it. ============= further to that ============== After fiddling around further, I've found that I'm getting a different response from the server (my home machine that is) when I try to telnet in on the various ports. If I try: telnet my.home.domain 22 I get this back: Trying <my ip address>... Connected to <my domain>. Escape character is '^]'. SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2 Which is what I would expect. After setting up the tunnel though, and then telnetting to that, I see this response: Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. ============== and further still ================== As per kbulgrien's suggestion, here is the output from the client machine with the -v option: ssh -vp 24600 127.0.0.1 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600. debug1: Connection established. debug1: identity file /home/jacob/.ssh/id_rsa type -1 debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1 debug1: identity file /home/jacob/.ssh/id_dsa type -1 debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1 debug1: identity file /home/jacob/.ssh/id_ecdsa type -1 debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1 ssh_exchange_identification: Connection closed by remote host
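
    Two hedged things to check, given that this broke only after the home machine was reinstalled: the "Connection timed out" on channel 3 means the home sshd could not reach my.home.domain:22 itself (hairpin NAT or a changed /etc/hosts often breaks that after a reinstall), and Debian's sshd also honours TCP wrappers, which produce exactly this ssh_exchange_identification message when they deny a host. A sketch:

      # point the forwarded port at the loopback of the home machine
      # instead of its public name, so no hairpin NAT is needed
      ssh -gL 12345:localhost:22 my.home.domain

      # then, from the office machine, as before
      ssh -p 12345 127.0.0.1

      # if it still fails, check TCP wrappers on the home machine
      grep -v '^#' /etc/hosts.allow /etc/hosts.deny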

    Read the article

  • selfextracting archive with 7zip via cmd.exe

    - by creativz
    Hello, I downloaded 7za.exe and did this to compress a file.exe: 7za.rar a -mx9 archive file.exe Now I want to create a selfextracting archive (.exe), so I tried this: 7za.rar a -mx9 -sfx archive file.exe But I get the following error message: can't find specified sfx module Can anyone please tell me how to make a selfextracting .exe with 7zip via cmd.exe? Thanks!
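
    The standalone 7za.exe does not ship with an SFX module, which is what "can't find specified sfx module" is complaining about; the error goes away once a module such as 7z.sfx or 7zCon.sfx (from the full 7-Zip install or the "7-Zip extra" download) sits next to the executable and is named in the switch. A sketch, assuming the console module:

      rem 7zCon.sfx must be in the same directory as 7za.exe;
      rem the module name follows -sfx with no space
      7za a -mx9 -sfx7zCon.sfx archive.exe file.exe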

    Read the article

  • flask, lighttpd with fastcgi can't get it to work

    - by kurojishi
    i'm tring to deploy a simple flask script to a lighttpd server with fastcgi. this is the configuration file for lighttpd builded using the flask documentation http://flask.pocoo.org/docs/deploying/fastcgi/#configuring-lighttpd server.modules = ( "mod_access", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "www-data" server.groupname = "www-data" index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" ) url.access-deny = ( "~", ".inc" ) static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) var.home_dir = "/var/lib/lighttpd" var.socket_dir = home_dir + "sockets/" ## Use ipv6 if available #include_shell "/usr/share/lighttpd/use-ipv6.pl" dir-listing.encoding = "utf-8" server.dir-listing = "enable" compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" ) include_shell "/usr/share/lighttpd/create-mime.assign.pl" include_shell "/usr/share/lighttpd/include-conf-enabled.pl" fastcgi.server = ("weibo/callback.fcgi" => (( "socket" => "/tmp/weibocrawler-fcgi.sock", "bin-path" => "/var/www/weibo/callback.fcgi", "check-local" => "disable", "max-procs" => 1 )) ) url.rewrite-once = ( "^(/weibo($|/.*))$" => "$1", "^(/.*)$" => "weibo/callback.fcgi$1" and this is the script i'm tring to run: #!/home/nrl/kuro/weiboenv/bin/python from flup.server.fcgi import WSGIServer from callback import app if __name__ == '__main__': WSGIServer(application, bindAddress='/tmp/weibocrawler-fcgi.sock').run() but i have this error testing the configuration file i get this error: 2013-07-02 17:15:42: (configfile.c.912) source: lighttpd.conf.new line: 52 pos: 1 parser failed somehow near here: weibo/callback.fcgi$1 when i remove the urlrewrite i get these errors in the log even if the daemon start: 2013-07-02 16:25:53: (log.c.166) server started 2013-07-02 16:25:53: (mod_fastcgi.c.1104) the fastcgi-backend fcgi.py failed to start: 2013-07-02 16:25:53: (mod_fastcgi.c.1108) child exited with status 2 fcgi.py 2013-07-02 16:25:53: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags. 2013-07-02 16:25:53: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed. 2013-07-02 16:25:53: (server.c.938) Configuration of plugins failed. Going down.
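
    The parser error at line 52 looks like a plain syntax slip: the url.rewrite-once block is never closed. A sketch of the corrected block, keeping the original patterns and only adding the missing parenthesis:

      url.rewrite-once = (
          "^(/weibo($|/.*))$" => "$1",
          "^(/.*)$" => "weibo/callback.fcgi$1"
      )

    Separately, the FCGI script imports app but hands WSGIServer an undefined name application; WSGIServer(app, bindAddress='/tmp/weibocrawler-fcgi.sock').run() is presumably what was meant, and that mismatch alone would make the backend fail to start even once the config parses.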

    Read the article

  • ImageMagick failing to convert to JPG

    - by johnui
    Hi: We recently installed the latest version of ImageMagick onto our Linux server. I seem to be having issues performing the most basic of tasks. I am running this command line: /usr/bin/convert /location/to/source/design.ai /location/to/save/output.jpg Unfortunately it saves the output as an Illustrator file (if I rename it to output.ai it opens). Even if I do this: /usr/bin/convert /location/to/source/design.ai -rotate 90 /location/to/save/design.jpg It rotates the file and saves it again as an Illustrator document. This happens with all output file types (e.g. png, bmp, etc.). It appears ImageMagick cannot figure out what I want it converted to and just saves it as the same file type. Any ideas on fixing this? Regards: John
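
    Two hedged checks before anything else: make sure /usr/bin/convert is really the freshly installed ImageMagick (a new build often lands in /usr/local/bin while an older binary keeps winning on $PATH), and make sure the JPEG and PostScript/PDF coders were actually compiled in, since .ai files are decoded through Ghostscript:

      # which convert is on the PATH, and which build is /usr/bin/convert?
      which convert
      /usr/bin/convert -version

      # are the needed coders available? (.ai/.pdf need Ghostscript, .jpg needs libjpeg)
      /usr/bin/convert -list format | grep -Ei 'jpeg|pdf'
      gs --version

    If the JPEG entry is missing, ImageMagick was built without libjpeg and cannot encode the requested output; installing the libjpeg development package and Ghostscript and rebuilding is the usual remedy in that case.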

    Read the article

  • Windows7 hardlink over two different drives

    - by Sandro
    I am trying to create a hardlink on my C drive that points to a file on my D drive. I open up a terminal with Administrator privileges and try the following: C:\Users\sandro>mklink /H _vimrc D:\sandro-desktop\.vimrc The error that I get is: The system cannot move the file to a different disk drive. When I try a softlink I get the issue that for some reason changes to the link contents aren't reflected on the targeted file. Thank you!
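
    NTFS hard links can only point at files on the same volume, which is exactly what the error is saying; for a cross-drive link the options are a symbolic link (drop the /H) or simply copying the file. A sketch, run from the same elevated prompt:

      rem a symbolic file link works across volumes; Vim then edits D:\sandro-desktop\.vimrc in place
      mklink _vimrc D:\sandro-desktop\.vimrc

    If an earlier symbolic link appeared not to pick up changes, it is worth checking whether it was accidentally created with /D or /J (a directory link or junction), or whether the editor involved replaces the link with a fresh file on save.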

    Read the article

  • How to actually defragment a JFFS2 filesystem

    - by Julie in Austin
    I have searched all over the Internet, including on a number of StackExchange forums, for a workable method for defragmenting a JFFS2 filesystem and cannot find an answer. The system in question has a 256MB NAND flash part. It is being accessed as a MTD device which is divided into three partitions. The third partition is where the root file system is being stored as a JFFS2 file system. The issue is that writes to the root file system have non-deterministic performance due to the usual issues of the JFFS2 garbage collector deciding to run at the worst possible times. When that happens, the product is hung for some unknown length of time while the garbage collector (and pdflush) run. Changing the file system isn't an option. The solution needs to be something that can run during off-hours that after having been run results in more predictable write performance. Right now I am working on a program that will attempt to force the garbage collector to run, then delete the file with the hope that all of the freed nodes are suddenly more readily available and make writes perform better. Thoughts?

    Read the article

  • Unable to install Windows Installer 4.5 on Windows Server 2008

    - by Tim Trout
    One of the prerequisites for installing SQL Server 2008 Express R2 is Windows Installer 4.5. I have a couple of WS2008 machines I'm prepping, so I downloaded the appropriate version of the file (Windows6.0-KB942288-v2-x86.msu) to our file server and tried installing it on both machines. On both machines I get the cryptic 0x80070003 error: "the system cannot find the file specified", but it does not show which file it cannot find. I don't get this when I try to install the Windows XP version on one of my XP machines. Any clues as to what I might be missing or doing wrong? One Technet help forum suggested I try installing the "System Update Readiness Tool", but this installer also fails with the same error code.

    Read the article

  • Error when restoring database (Windows 7 test environment)

    - by Undh
    I have a windows 7 operating system as a test environment. I have SQL Server EE installed with two instances, named as test and production. I took a full backup from AdventureWorks database from test instance and I tried to restore it into the production instance: RESTORE DATABASE [testikanta] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008TESTI\MSSQL\Backup\AdventureWorks.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10 GO I got an error saying: Msg 3634, Level 16, State 1, Line 1 The operating system returned the error '32(failed to retrieve text for this error. Reason: 15105)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008TESTI\MSSQL\DATA\AdventureWorks_Data.mdf'. Msg 3156, Level 16, State 8, Line 1 File 'AdventureWorks_Data' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008TESTI\MSSQL\DATA\AdventureWorks_Data.mdf'. Use WITH MOVE to identify a valid location for the file. Msg 3634, Level 16, State 1, Line 1 The operating system returned the error '32(failed to retrieve text for this error. Reason: 15105)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008TESTI\MSSQL\DATA\AdventureWorks_Log.ldf'. Msg 3156, Level 16, State 8, Line 1 File 'AdventureWorks_Log' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008TESTI\MSSQL\DATA\AdventureWorks_Log.ldf'. Use WITH MOVE to identify a valid location for the file. Msg 3119, Level 16, State 1, Line 1 Problems were identified while planning for the RESTORE statement. Previous messages provide details. Msg 3013, Level 16, State 1, Line 1 RESTORE DATABASE is terminating abnormally. Where's the problem? I'm running these instances as on local machine adminstrator (SQL Server services are running with the same account).
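
    The error messages themselves point at the fix: the backup wants to drop its data and log files onto the test instance's DATA directory, where the original AdventureWorks files are in use, so the restore into the production instance needs WITH MOVE to give it its own copies. A sketch (the MSSQL10.PRODUCTION directory is an assumption and should be the production instance's actual DATA path; the logical file names come from the error output):

      RESTORE DATABASE [testikanta]
      FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008TESTI\MSSQL\Backup\AdventureWorks.bak'
      WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10,
           MOVE N'AdventureWorks_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.PRODUCTION\MSSQL\DATA\testikanta_Data.mdf',
           MOVE N'AdventureWorks_Log'  TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.PRODUCTION\MSSQL\DATA\testikanta_Log.ldf';
      GO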

    Read the article

  • notepad sql Unicode and Non Unicode

    - by RBrattas
    Hi, I have a flat file created in Microsoft Notepad, with a vertical bar as the column delimiter. I get the following message: cannot convert between unicode and non-unicode string data types. It seems it is my nvarchar(max) column that creates the problem; I changed it to varchar(max), but still the same problem. How do I insert my flat file into SQL Server 2005? Also, on the flat file source Advanced tab in the SQL Server 2005 Import and Export Wizard, the OutputColumnWidth is 50. Does that mean my flat file column can be at most 50 characters? I hope not, because my column is longer than 50... Thank you, Rune
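
    In the wizard this error usually means the flat-file source columns default to DT_STR (non-Unicode) while the destination columns are nvarchar; changing each column's DataType to "Unicode string [DT_WSTR]" on the source's Advanced tab, and raising OutputColumnWidth (which is indeed the maximum length) above the default 50, is the usual fix. As a hedged alternative that skips the wizard entirely, assuming the file is saved from Notepad as Unicode with '|' delimiters (table and path below are examples):

      -- load the pipe-delimited Unicode file directly
      BULK INSERT dbo.ImportTarget
      FROM 'C:\data\flatfile.txt'
      WITH (DATAFILETYPE = 'widechar',
            FIELDTERMINATOR = '|',
            ROWTERMINATOR = '\n',
            FIRSTROW = 1);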

    Read the article

  • uwsgi_params not in nginx

    - by Halit Alptekin
    First I set up nginx and uwsgi via apt-get, and I added the server block below to the nginx conf file (/etc/nginx/conf.d/default.conf); server { listen 80; server_name <replace with your hostname>; #Replace paths for real deployments... access_log /tmp/access.log; error_log /tmp/error.log; location / { include uwsgi_params; uwsgi_pass 127.0.0.1:8889; } } I got an error: Starting nginx: [emerg]: open() "/etc/nginx/uwsgi_params" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:11 configuration file /etc/nginx/nginx.conf test failed If I add the uwsgi_params file from uwsgi's source, I get a simple error. Thanks
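
    The Debian/Ubuntu nginx packages normally install that file as /etc/nginx/uwsgi_params; if it is missing, recreating it by hand is enough, since it is nothing more than a list of uwsgi_param directives. An abridged sketch of the standard contents:

      # /etc/nginx/uwsgi_params (abridged)
      uwsgi_param  QUERY_STRING       $query_string;
      uwsgi_param  REQUEST_METHOD     $request_method;
      uwsgi_param  CONTENT_TYPE       $content_type;
      uwsgi_param  CONTENT_LENGTH     $content_length;
      uwsgi_param  REQUEST_URI        $request_uri;
      uwsgi_param  PATH_INFO          $document_uri;
      uwsgi_param  DOCUMENT_ROOT      $document_root;
      uwsgi_param  SERVER_PROTOCOL    $server_protocol;
      uwsgi_param  REMOTE_ADDR        $remote_addr;
      uwsgi_param  REMOTE_PORT        $remote_port;
      uwsgi_param  SERVER_NAME        $server_name;
      uwsgi_param  SERVER_PORT        $server_port;

    After creating the file, nginx -t should pass again; "include uwsgi_params;" is resolved relative to /etc/nginx, so the location and file name have to match exactly.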

    Read the article

  • gnome-terminal -- get path by draging files into terminal

    - by user62046
    I use coLinux (see http://www.colinux.org/). I used the Ubuntu image (Ubuntu without a desktop environment), installed the packages gdm, x-window-system-core and gnome-core, and updated to 10.04. The default user (which is the only user) is root. When I try to drag a file into gnome-terminal, I don't get the path of the file; instead the icon of the file just snaps back to its original place. Does anyone know why this happens?

    Read the article

  • Dovecot install: what does this error mean?

    - by jamie
    I have postfix and dovecot installed on CentOS 6 (linode) along with MySQL. The table and user is already set up, postfix installed fine, but dovecot gives me this error in the mail log: Warning: Killed with signal 15 (by pid=9415 uid=0 code=kill) The next few lines say this: Apr 7 16:13:35 dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled) Apr 7 16:13:35 dovecot: config: Warning: NOTE: You can get a new clean config file with: doveconf -n > dovecot-new.conf Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:1: protocols=pop3s is no longer supported. to disable non-ssl pop3, use service pop3-login { inet_listener pop3 { p$ Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:5: ssl_cert_file has been replaced by ssl_cert = <file Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:6: ssl_key_file has been replaced by ssl_key = <file Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:8: namespace private {} has been replaced by namespace { type=private } Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: add auth_ prefix to all settings inside auth {} and remove the auth {} section completely Apr 7 16:13:35 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:25: auth_user has been replaced by service auth { user } I am following directions for the install on CentOS 5 with changes in the dovecot.conf file from different sources specific to CentOS 6. So the dovecot.conf file might not be correct, but there is no good source I have found yet for making dovecot install correctly. Can anyone tell me what the error above means? The terminal does not give any message as to start OK or FAIL. When I issue the service dovecot start command, it says: Starting Dovecot Imap: and nothing more.
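
    Signal 15 is just a normal stop/restart (SIGTERM), so that warning is harmless; the real messages are the "Obsolete setting" ones, which say the dovecot.conf in use is written for Dovecot 1.x while CentOS 6 ships 2.0. Running the command the log itself suggests, doveconf -n > dovecot-new.conf, converts most of it; a sketch of the 2.x equivalents for the flagged settings (certificate paths are examples):

      # Dovecot 2.x equivalents of the flagged 1.x settings
      protocols = pop3
      ssl = required
      ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
      ssl_key  = </etc/pki/dovecot/private/dovecot.pem

      # keep only pop3s (port 995) by disabling the plain-text POP3 listener
      service pop3-login {
        inet_listener pop3 {
          port = 0
        }
      }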

    Read the article

  • Windows Update for auto-complete filename in Explorer

    - by Stan
    OS: Windows XP SP3. It seems there is a Windows Update that improves the Explorer interface, adding a filename auto-complete feature to the Open File dialog, and making F2 rename place the cursor before the extension, filename(cursor here).txt, instead of the old way, filename.txt(cursor here). Does anyone know which update I should download? Thanks.

    Read the article

  • Video Editing Software Recommendation

    - by Lee
    I want to get some recommendations for video editing software. I need the software to do the following: encode to multiple formats (.avi, .wma, DVD format, etc.), and most of all encode a file to .flv format; burn the file to DVD; perform video editing on the file; and be easy to use, especially for the beginning video editor.

    Read the article

  • How do I make my internal dns forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and connects to root dns servers for all other domains (the rest of the internet). Here is my config options { listen-on port 53 { 127.0.0.1;any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query {192.168.1.0/24; 127.0.0.1; }; recursion yes; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view “internal” { // What the home network will see match-clients { 127.0.0.1;any; }; match-destinations { 127.0.0.1;any; }; recursion yes; zone "." IN { type hint; file "named.ca"; }; include "internal_zones.conf"; }; We need to tweak this to go to our ISPs dns, x.y.z.w instead of the root dns servers if the host cannot be resolved internally. Config: Fedora 10/Bind 9.5.2
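
    Recursion to the root servers is replaced with forwarding by adding a forwarders clause; in this config it can go either in the global options block or inside the "internal" view. A sketch, with x.y.z.w standing in for the ISP resolver exactly as in the question:

      // add inside the existing options { } block (or inside view "internal")
      forwarders { x.y.z.w; };   // the ISP's DNS server
      forward only;              // or "forward first" to fall back to the root hints

    With forward only, the zone "." hint file is never consulted; forward first keeps it as a fallback if the ISP resolver stops answering.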

    Read the article

  • Modifying y-axis label in pdf Diagram

    - by Andrea
    Hi all, I have mislabelled the y-axis in a PDF graph that I generated from a Matlab file. Unfortunately, I can't find the data that I used to generate the graph anymore. So I am wondering whether there is a free tool that allows me to rename the y-axis label in my PDF diagram. Are there any free tools available that could be helpful, or is this something "impossible" to do? Many thanks for your help, Andrea

    Read the article

  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load and console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear. .htaccess RewriteEngine On # Serve the favicon file from img folder RewriteCond %{REQUEST_URI} ^/favicon.ico$ RewriteRule ^(.*)$ /img/$1 [NC,L] # Redirect HTTP traffic to WWW subdomain RewriteCond %{HTTPS} off [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] # Redirect HTTPS traffic to WWW subdomain RewriteCond %{HTTPS} on [NC] RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L] # Auto Versioning rules RewriteCond %{REQUEST_FILENAME} !-s RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L] # Default Zend rewrite rules RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ index.php [NC,L] VHost <VirtualHost *:80> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD-access.log combined </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerAdmin admin@localhost ServerName localhost DocumentRoot /home/mihai/ARTD/www/public/website # Omit this in production environment SetEnv APPLICATION_ENV local <Directory /home/mihai/ARTD/www/public/website > Options Indexes FollowSymLinks MultiViews AllowOverride All #Order deny,allow #Allow from all Require all granted </Directory> <IfModule mod_php5.c> php_value memory_limit 128M php_value upload_max_filesize 20M php_value post_max_size 20M </IfModule> ErrorLog /var/log/apache2/ARTD-ssl-error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/ARTD.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. 
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. 
#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire #<FilesMatch "\.(cgi|shtml|phtml|php)$"> # SSLOptions +StdEnvVars #</FilesMatch> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. #BrowserMatch ".*MSIE.*" \ # nokeepalive ssl-unclean-shutdown \ # downgrade-1.0 force-response-1.0 </VirtualHost> </IfModule> logs Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection) 127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"

    Read the article

  • Error while installing ltsp server package in fedora 12

    - by paragjain16
    Hi, I am using Fedora 12. While I was installing the LTSP (Linux Terminal Server Project) server package, it told me that some more packages needed to be installed with it as well. While downloading the packages I got the following error - Local Conflict between packages Test Transaction Errors: file /usr/share/man/man5/dhcp-eval.5.gz from install of dhcp-12:4.1.1-5.fc12.i686 conflicts with file from package dhclient-12:4.1.0p1-12.fc12.i686 file /usr/share/man/man5/dhcp-options.5.gz from install of dhcp-12:4.1.1-5.fc12.i686 conflicts with file from package dhclient-12:4.1.0p1-12.fc12.i686 I also deleted all the dhcp man pages from the man5 directory, but it still gives the same error message. Please help me with it.
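
    The transaction fails because the dhcp package being pulled in (4.1.1-5) is newer than the installed dhclient (4.1.0p1-12) and both claim the same man pages; deleting the files by hand does not help, because rpm checks ownership against its database rather than the filesystem. The usual way out is to bring dhclient up to the matching version before retrying, roughly (the ltsp-server package name is the one used on Fedora at the time and is an assumption):

      # refresh metadata, update dhclient to match the dhcp package, then retry
      yum clean metadata
      yum update dhclient
      yum install ltsp-server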

    Read the article

  • mysql UDF : fopen = permission denied

    - by lindenb
    Hi all, this is a question I already asked on SO, but I wonder if it could be a sysadmin problem. I'm trying to create a MySQL UDF; the function calls fopen/fclose to read a flat file stored in /data. But using errno (yes, I know it is bad in a MT program...) I can see that the function cannot open my file: "Permission denied". I tried chmod -R 755 /data (as well as 777, chown -R mysql:mysql /data, etc.) but it didn't change anything. When I copied the flat file to /tmp, my UDF was able to fopen the file. I'm puzzled. Currently I've got: drwxrwxrwx 4 pierre root 4096 2010-05-26 16:51 /data drwxrwxrwx 3 pierre root 4096 2010-05-18 09:41 /data/dir1 drwxrwxrwx 3 pierre root 4096 2010-05-18 09:41 /data/dir1/dir2 drwxrwxrwx 4 pierre root 4096 2010-05-18 10:27 /data/dir1/dir2/dir3 -rw-r--r-- 1 pierre root 50685268 2005-12-10 00:01 /data/dir1/dir2/dir3/myfile.txt Any ideas?
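
    Working from /tmp but not from /data despite 777 is the classic signature of a mandatory-access-control layer rather than Unix permissions: mysqld is typically confined (SELinux on Red Hat-style systems, AppArmor on Debian/Ubuntu) and is allowed to read /tmp and its own datadir but not an arbitrary /data tree. A hedged diagnostic sketch:

      # Red Hat / CentOS: is SELinux enforcing, and is it logging denials for mysqld?
      getenforce
      grep -i denied /var/log/audit/audit.log | grep mysqld | tail

      # temporarily confirm the theory (remember to switch enforcement back on)
      setenforce 0

      # Debian / Ubuntu: inspect the AppArmor profile instead
      cat /etc/apparmor.d/usr.sbin.mysqld

    If that turns out to be the cause, the durable fix is to label the directory for mysqld (semanage fcontext plus restorecon) or to extend the AppArmor profile, rather than leaving enforcement off.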

    Read the article
