Search Results

Search found 9816 results on 393 pages for 'named conf'.

Page 310/393

  • Building PHP For MacOS

    - by Eray
    I was using XAMPP and decided to uninstall it and use Mac OS X's built-in Apache and PHP modules, but while uninstalling XAMPP I accidentally deleted /usr/bin/php and other PHP CLI files. I decided to install the newest version of PHP (5.5.12) instead of rebuilding the current version (5.4.24). I downloaded and unzipped it, then executed this command as mentioned in this guide: ./configure '--with-apxs2=/usr/sbin/apxs' '--enable-cli' '--with-config-file-path=/etc' '--with-zlib=/usr' '--enable-bcmath' '--with-bz2=/usr' '--enable-calendar' '--disable-cgi' '--with-curl=/usr' '--enable-dba' '--enable-ndbm=/usr' '--enable-exif' '--enable-fpm' '--enable-ftp' '--with-gd' '--enable-gd-native-ttf' '--enable-mbregex' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pear' '--with-pdo-mysql=mysqlnd' '--with-mysql-sock=/var/mysql/mysql.sock' '--with-tidy' '--enable-wddx' '--with-xmlrpc' '--enable-zip' followed by make and make install. When I check phpinfo(), it still reports version 5.4.24. This line from my httpd.conf, LoadModule php5_module libexec/apache2/libphp5.so, points at /usr/libexec/apache2/libphp5.so, which comes from the old version, and I couldn't find a libphp5.so for the new version; there is no libphp5.so file inside the modules dir. How can I use the new PHP build with Apache? UPDATE: Output of php -v: PHP 5.5.12 (cli) (built: May 27 2014 05:17:21) Copyright (c) 1997-2014 The PHP Group, Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
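
    A possible starting point, as a sketch (paths assumed from the question, not verified): locate where the new build actually put the module, then point Apache at it.

        # The PHP build tree normally leaves the module at libs/libphp5.so;
        # the apxs install step can fail silently if it cannot write to the
        # Apache modules directory.
        sudo find / -name 'libphp5.so' 2>/dev/null
        # If it only exists under the build tree, install it by hand:
        sudo cp libs/libphp5.so /usr/libexec/apache2/libphp5.so
        # httpd.conf should then load the new module:
        #   LoadModule php5_module libexec/apache2/libphp5.so
        sudo apachectl restart && php -v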

    Read the article

  • Moving web files to /home/user/ gives permission denied using apache

    - by Maaz
    I recently created some Linux users on my machine and their respective directories were created in the following manner: /home/my_user. So I decided to treat each user as one of my websites and moved all my website files over to this directory, like so: /home/my_user/public_html/. I edited the virtual host in my httpd.conf and changed the document root folder, so this is how that looks: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/home/my_user/public_html" ServerName mywebsite.com ServerAlias www.mywebsite.com ErrorLog "/var/log/httpd/mywebsite/error_log" CustomLog "/var/log/httpd/mywebsite/access_log" common </VirtualHost> Now this virtual host configuration was working perfectly fine with my older document root path, which was located at /var/www/html/mywebsite/public_html, but after changing it to what it is right now I am getting a permission denied error, even though I followed the instructions here: http://stackoverflow.com/questions/14427808/you-dont-have-permission-error-in-apache-in-centos Even after following the above instructions, when I run the following command: sudo -u apache ls /home/my_user/public_html the server responds with ls: cannot open directory /home/my_user/public_html: Permission denied Oddly, I no longer get a permission denied error when I try to access my site; instead I am now redirected to Apache's default page rather than my website. I am not exactly sure what's wrong any more; if anyone has an idea, it would be great if you guys could help out!
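
    A sketch of the usual fix, assuming the apache user only needs traversal rights on the home directory rather than full read access to it:

        chmod 711 /home/my_user               # execute (traverse) for others
        chmod 755 /home/my_user/public_html
        sudo -u apache ls /home/my_user/public_html   # should now succeed
        # On CentOS with SELinux enforcing, the context matters too:
        # chcon -R -t httpd_sys_content_t /home/my_user/public_html

    Falling through to Apache's default page usually means no vhost matched the request; with name-based vhosts on Apache 2.2, check that NameVirtualHost *:80 is declared and that ServerName resolves to this machine.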

    Read the article

  • Routing and authenticating all access through squid

    - by Knight Samar
    Hi, I want to route all Internet access in my network through a Squid proxy server and authenticate and log all users. I want this to be a client-independent setting so that no one needs to change anything on their browsers or machines. I have set my network gateway to the proxy server so that all traffic is sent to it, using options in the DHCP server. I tried using Squid as a transparent proxy, but then it won't authenticate in that mode. I tried using iptables to route all traffic to port 3128, but it won't pop up Squid's authentication dialog box. I then tried telling DHCP to hand WPAD to all clients by placing a WPAD file on a webserver for automatic proxy configuration on clients. Changes in dhcpd.conf: option wpad code 252 = text; option wpad "\n\000"; option wpad "http://192.168.1.5/wpad.dat\n"; The WPAD file: function FindProxyForURL(url,host) { return "PROXY squid-server-ip-address:3128 ; DIRECT "; } But the browsers (different versions of Firefox and IE) seem to ignore it. :( What should I do ?
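
    Two background facts may help, stated as assumptions to verify: an intercepted (transparent) client doesn't know a proxy exists, so it can never answer Squid's 407 challenge, which is why authentication and interception don't mix and WPAD is the usual route; and browsers are picky about how wpad.dat is served. A sketch:

        # dhcpd.conf: declare the option type once, then set the URL
        option wpad code 252 = text;
        option wpad "http://192.168.1.5/wpad.dat";

        # on the web server, serve the file with the expected MIME type
        # (Apache example): AddType application/x-ns-proxy-autoconfig .dat

    Many clients (IE in particular) only use DHCP-based WPAD when "automatically detect settings" is enabled, and otherwise fall back to DNS, looking for http://wpad.<your domain>/wpad.dat.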

    Read the article

  • ping: unknown host google.com

    - by Tar
    Relevant output: /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 servers_ip_address server.2006scape.com server /etc/resolv.conf search 2006scape.com #Generated by NetworkManager nameserver 8.8.8.8 nameserver 8.8.4.4 Some stuff from tcpdump 07:46:28.795843 IP server_ip.42841 > 8.8.4.4.domain: 60253+ PTR? 87.127.104.87.in-addr.arpa. (44) 07:46:28.795980 IP server_ip.54001 > 8.8.4.4.domain: 7390+ PTR? 60.187.80.98.in-addr.arpa. (43) 07:46:28.804029 IP server_ip.59667 > 8.8.4.4.domain: 58876+ PTR? 134.154.161.72.in-addr.arpa. (45) 07:46:28.884171 IP server_ip.46255 > 8.8.4.4.domain: 63027+ PTR? 195.156.251.84.in-addr.arpa. (45) 07:46:28.884217 IP server_ip.35426 > 8.8.4.4.domain: 10538+ PTR? 118.3.182.166.in-addr.arpa. (44) 07:46:28.884253 IP server_ip.53635 > 8.8.4.4.domain: 29928+ PTR? 230.94.81.83.in-addr.arpa. (43) 07:46:28.884286 IP server_ip.45787 > 8.8.4.4.domain: 41151+ PTR? 18.32.223.121.in-addr.arpa. (44) 07:46:28.946045 IP server_ip.47246 > 8.8.4.4.domain: 43103+ PTR? 81.70.251.84.in-addr.arpa. (43) 07:46:28.946066 IP server_ip.33208 > 8.8.4.4.domain: 61117+ PTR? 69.170.184.71.in-addr.arpa. (44) Anyone have any input as to what is causing this?
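
    Reading the capture, as a sketch of a diagnosis: PTR queries are leaving for 8.8.4.4 but no answers appear, which points at DNS replies on UDP 53 being dropped (a local firewall or upstream filtering) rather than at a resolv.conf problem. A few checks:

        dig @8.8.8.8 google.com      # query the nameserver directly
        dig @8.8.4.4 google.com
        iptables -L -n | grep 53     # any rules touching DNS traffic?
        tcpdump -n port 53           # watch for responses, not just queries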

    Read the article

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure Apache to allow access to /home/src/web with a chcon command granting the 'httpd_sys_content_t' type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications access to the same directory while still keeping SELinux enabled? Current perms on /home/src: drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src Audit log message: type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null) -- Edit -- Came across this post, which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems I can maintain the current SELinux 'type' on the /home/src/ dir and set the syslogd_disable_trans bool to false. I wonder if there is a better approach?
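
    A sketch of two alternatives that keep SELinux enforcing (both assume the denial is exactly what the AVC shows: syslogd_t failing to search a home_root_t directory):

        # Option 1: let audit2allow turn the recorded denials into a small
        # local policy module for syslogd_t:
        grep rsyslogd /var/log/audit/audit.log | audit2allow -M rsyslog_home
        semodule -i rsyslog_home.pp

        # Option 2: record the file context persistently instead of chcon,
        # so filesystem relabels don't undo it:
        semanage fcontext -a -t httpd_sys_content_t '/home/src(/.*)?'
        restorecon -Rv /home/src

    Note the denied object is name="/" on dev=dm-2, i.e. the mount point itself, so every directory on the traversal path down to the file needs a context the syslogd_t domain can search.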

    Read the article

  • Connection between ASP.NET and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my ASP.NET + C# application to my Oracle 10g Express Edition. Here's my scenario: I'm on Mac OS and I have two virtual machines, one with Win 7 (the VS 2010 app) and another, a Parallels virtual appliance, with Oracle 10g Express Edition 1.1. Which provider (OleDb, ODP.NET, etc.) should I use? How do I make the connection to the server in C#? Right now I have this: <appSettings> <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" /> </appSettings> And at the .cs file: SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"])); cmd.CommandType = CommandType.StoredProcedure; *insert_thing is a stored procedure Using this I got this error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) I've searched for some possible solutions and tried some, including disabling the firewall and allowing remote connections to Oracle Express Edition with this command line ("EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);"). The error persists. Can anyone guide me in the right direction? I'm a newbie with this type of thing. Thank you for your patience. regards
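
    The error message itself is the clue: SqlConnection and SqlCommand belong to System.Data.SqlClient and only speak to SQL Server, which is why a "Named Pipes Provider" error appears even though the target is Oracle. A sketch using the ODP.NET provider instead (SERVICE_NAME=XE and port 1521 are the 10g Express defaults; treat them as assumptions to verify):

        using System.Data;
        using Oracle.DataAccess.Client;  // ODP.NET

        string connStr =
            "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)" +
            "(HOST=10.211.55.11)(PORT=1521))" +
            "(CONNECT_DATA=(SERVICE_NAME=XE)));" +
            "User Id=l3gion;Password=l3gion;";

        using (var conn = new OracleConnection(connStr))
        using (var cmd = new OracleCommand("insert_thing", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            cmd.ExecuteNonQuery();  // runs the stored procedure
        }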

    Read the article

  • Hearing a clicking noise from soundcard all the time

    - by Mehrdad
    I have installed Fedora 17 on my laptop. A few days ago I updated my Fedora (but did not upgrade it). I shut down my computer, and since the next time I turned it on I have been hearing a clicking noise from the speakers all the time. Even when I plug my headphones in, I hear the noise through the headphones. I searched the internet and found the following shell commands: su -c 'echo "options snd_hda_intel power_save=0" > /etc/modprobe.d/snd_hda_intel.conf' su -c 'echo 0 > /sys/module/snd_hda_intel/parameters/power_save' I tried them but they didn't work. Here is the part of the lspci output related to my sound card: 00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03) I should add that my sound card is working and I can play audio files; I hear the sound and the noise simultaneously. But everything is OK in Windows XP, which is also installed on my laptop. Could it be related to the sound-card driver? If so, how can I revert it to the previous version?
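
    A couple of checks, as a sketch (options in modprobe.d only apply when the driver is reloaded or after a reboot):

        cat /sys/module/snd_hda_intel/parameters/power_save   # expect 0
        # If the noise started with an update, the previous kernel should
        # still be selectable in the GRUB menu; booting it is the quickest
        # driver-regression test. Recent package changes can be inspected:
        sudo yum history list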

    Read the article

  • How do I do a mail merge that includes images?

    - by Ian Ringrose
    I am trying to find out the practicalities of doing a mail merge when each “record” to be merged includes some images. I need to print letters and envelopes. Both the letters and the envelopes have: fixed text, fixed images, text that comes from the mail merge record, and images that come from the mail merge record. I don’t know if all images will be the same size for every record, so a bit of simple “on the fly” automatic formatting may be needed. I need to be able to repeat a single item if I get a problem (e.g. when folding the letter). What problems am I likely to have? Is Word 2007 up to this sort of mail merging, or should I be looking at a report-writing tool? How do I restart a print run after a printer jam, etc.? What format should I store the “records” and their images in? E.g. can standard software cope with images that are stored in separate files named after the “CustomerId” that is in the “record”? (I can write custom software if needed, but would rather use standard off-the-shelf software for the printing; I am planning on custom software for the data creation, so I can output in whatever format is needed.)
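
    One commonly cited technique for the per-record images, offered as a sketch rather than a tested recipe: keep a path column in the data source and nest a merge field inside an INCLUDEPICTURE field, so each letter pulls its own file:

        { INCLUDEPICTURE "{ MERGEFIELD ImagePath }" \d }

    The braces are Word field codes (inserted with Ctrl+F9, not typed), and ImagePath is a hypothetical column holding something like C:\images\<CustomerId>.jpg; paths in the data source usually need doubled backslashes, and selecting all followed by F9 refreshes the pictures after the merge.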

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphone serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require username and password. I would like to implement an XML-RPC method to provide registration to the system. Obviously, this method should not require username and password. The following is the Apache conf section responsible for basic auth: <Location /RPC2> AuthType Basic AuthName "Login Required" Require valid-user AuthBasicProvider wsgi WSGIAuthUserScript /path/to/auth.wsgi </Location> This is my auth.wsgi: import os import sys sys.stdout = sys.stderr sys.path.append('/path/to/project') os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings' from django.contrib.auth.models import User from django import db def check_password(environ, user, password): """ Authenticates apache/mod_wsgi against Django's auth database. """ db.reset_queries() kwargs = {'username': user, 'is_active': True} try: # checks that the username is valid try: user = User.objects.get(**kwargs) except User.DoesNotExist: return None # verifies that the password is valid for the user if user.check_password(password): return True else: return False finally: db.connection.close() There are two dirty ways to achieve my aim in the current situation: Have a dummy username/password to be used when trying to register to the system Have a separate Django/XML-RPC application on another URL (i.e. /register) that is not protected by basic auth Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture) Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?
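
    Since every XML-RPC call is a POST to the same /RPC2 URL with the method name inside the request body, Apache has nothing to route on; the auth decision has to move into code. A sketch of handling it in the Django layer instead of Apache (parse_xmlrpc, check_basic_auth and dispatch_xmlrpc are hypothetical helpers standing in for whatever dispatcher is in use):

        from django.http import HttpResponse

        ANONYMOUS_METHODS = {'register'}

        def rpc_view(request):
            method, params = parse_xmlrpc(request.body)   # hypothetical
            if method not in ANONYMOUS_METHODS:
                if not check_basic_auth(request):         # hypothetical
                    return HttpResponse(status=401)
            return dispatch_xmlrpc(method, params)        # hypothetical

    The trade-off is that the Apache <Location> protection and the WSGI auth script go away, with authentication enforced in one place in the application instead.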

    Read the article

  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company, and they want it to run the nginx webserver. Their requirements are: hotlink protection, MP4 and FLV pseudo-streaming, and secure streaming. Nginx can fulfill these requirements. I have been configuring their server for the past 2 days; as I am new to this field, I have only achieved hotlink prevention so far. Getting MP4 pseudo-streaming to work was a piece of cake, but I am really stuck on FLV pseudo-streaming. I converted my FLV videos with the flvmdi tools to insert many keyframes, but when I try to seek the video to one of the keyframes generated by flvmdi, e.g. test.flv?start=2681223, the video does not load, while audio pseudo-streaming works fine; so it seems there is no problem with my FLV configuration in nginx.conf. The guide I used to compile my nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, adding the additional module --with-http_flv_module. This forum is really active, so I hope I will resolve my problem as soon as you guys provide some guidance.
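
    For reference, a sketch of the usual FLV stanza (assuming the module was built in as described):

        location ~ \.flv$ {
            flv;
        }

    One thing worth checking: start= takes a byte offset that must match a keyframe position recorded in the file's onMetaData keyframe table. If audio seeks but video stays blank, the offsets being requested may not line up with the table the metadata injector wrote; re-injecting metadata with another tool (flvtool2 and yamdi are commonly used) is a typical fix, as is confirming the player requests offsets taken from that same table.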

    Read the article

  • Slow tracepath on local LAN

    - by Simone Falcini
    I am on ESXi and I have 2 instances: Ubuntu and CentOS. These are the network configurations. Ubuntu: eth0 Link encap:Ethernet HWaddr 00:50:56:00:1f:68 inet addr:212.83.153.71 Bcast:212.83.153.71 Mask:255.255.255.255 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:76059 errors:0 dropped:26 overruns:0 frame:0 TX packets:7224 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6482760 (6.4 MB) TX bytes:2080684 (2.0 MB) eth1 Link encap:Ethernet HWaddr 00:0c:29:46:5a:f2 inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:252 errors:0 dropped:0 overruns:0 frame:0 TX packets:608 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:42460 (42.4 KB) TX bytes:82474 (82.4 KB) /etc/iptables.conf *nat :PREROUTING ACCEPT [142:12571] :INPUT ACCEPT [5:1076] :OUTPUT ACCEPT [8:496] :POSTROUTING ACCEPT [8:496] -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE COMMIT *filter :INPUT ACCEPT [2:72] :FORWARD ACCEPT [4:336] :OUTPUT ACCEPT [6:328] -A INPUT -i eth1 -p tcp -j ACCEPT -A INPUT -i eth1 -p udp -j ACCEPT -A INPUT -i eth0 -p tcp --dport ssh -j ACCEPT COMMIT CentOS: eth0 Link encap:Ethernet HWaddr 00:0C:29:74:1C:55 inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe74:1c55/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:499 errors:0 dropped:0 overruns:0 frame:0 TX packets:475 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:68326 (66.7 KiB) TX bytes:82641 (80.7 KiB) The main problem is that if I execute this command from the CentOS instance: ssh 192.168.1.2, it takes more than 20s to connect. It seems like it's routing the connection to the wrong network. What could it be? Thanks!
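
    A 20-second stall before an SSH login is classically the server timing out on a reverse DNS lookup of the client's address, not a routing problem. A sketch of two things to try (hostnames are placeholders):

        # on the machine being connected to, skip DNS lookups in sshd:
        echo 'UseDNS no' >> /etc/ssh/sshd_config
        service sshd restart

        # or give both hosts local entries so PTR lookups never leave the LAN:
        echo '192.168.1.1 ubuntu-gw'  >> /etc/hosts
        echo '192.168.1.2 centos-vm' >> /etc/hosts

    Note that 192.168.1.2 is the CentOS box's own address; if the slow connection really is CentOS to itself, the same reverse-lookup logic applies, but it is worth double-checking which host was meant.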

    Read the article

  • How to make quicksilver remember custom trigger

    - by corroded
    I am trying to make a custom trigger for my shell/AppleScript file so I can launch my dev environment at the push of a button. So basically: I have a shell script (with some AppleScript included) in ~ named start_server.sh which does 3 things: start up the solr server, start up memcached, start up script/server. I have a saved Quicksilver command (.qs) that opens start_server.sh (so start_server.sh, then the action is "Run in Terminal"). I created a custom trigger that calls this saved command. I did that, tested it, and it works. I then tried to double-check it, so I quit Quicksilver, and when I checked the triggers it just said "Open (null)" as the action. I set the trigger again, and when I restarted QS the same thing happened. I don't know why; my old custom trigger to open Terminal has worked since forever, so why doesn't this one work? Here's a screenie of the triggers after I restart QS: http://grab.by/4XWW If you have any other suggestion on how to make a "push button" start for my server, then please do so :) Thanks! As an added note, I have already tried the steps in this thread, to no avail: http://groups.google.com/group/blacktree-quicksilver/browse_thread/thread/7b65ecf6625f8989
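
    A Quicksilver-independent fallback, as a sketch: OS X's Terminal runs any executable .command file on double-click, so the same script can sit on the Desktop or in the Dock as a one-click launcher:

        cp ~/start_server.sh ~/Desktop/start_server.command
        chmod +x ~/Desktop/start_server.command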

    Read the article

  • Weird Apache behaviour with files again

    - by afifio
    Hi and thanks for stopping by. I have read "Weird Apache problem with file" and it's not the problem. Setup: a single XAMPP installation on Windows, a single Windows user, two hard disks, one of which is a portable USB drive. All was fine until I moved XAMPP to the new portable HD. Symptom: old PHP files work fine, new ones don't. http://127.0.0.1/Ajax/index.php works (yay), while http://127.0.0.1/test2/t.php, http://127.0.0.1/Ajax/test2/t.php and http://127.0.0.1/Ajax/t.php all display the source code. Extra info: IIS and the MS web development stack (.NET 4, ASP, etc.) are being installed and the machine hasn't been rebooted yet. .htaccess also seems not to work; the Apache conf file was modified to AllowOverride All and it still doesn't care. One of the directories is supposed to treat .htm as PHP yet serves plain text; I created another directory and added a phpinfo file, still plain text; yet browsing to phpMyAdmin, voila, works fine. Suspects: Does Apache honour XP security and permissions? If so, note this is a single-user computer. Does Apache dislike my new hard disk/new location? Why doesn't it execute PHP in the new directory but happily executes it in the old folder? Thanks for the riddle answers
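
    A sketch of where to look, assuming a stock XAMPP httpd.conf: PHP executing in some directories but being served as source in others usually means the handler or the directory grant is scoped to specific paths rather than applied globally. Worth confirming the conf contains something like:

        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>
        # and that the new drive's document root has its own grant, e.g.
        # <Directory "E:/xampp/htdocs">    (hypothetical drive letter)
        #     AllowOverride All
        #     Allow from all               (Require all granted on Apache 2.4)
        # </Directory>

    After editing, restart Apache fully; and since IIS was installed without a reboot, checking which process actually owns port 80 (netstat -abno) rules out requests being answered by the wrong server.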

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (the last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions. How do I ensure that my packages are up to date? It sounds silly, but I have tried the obvious methods already. I disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right? When I start typing the arguments of a command, for example sudo zypper install, after typing sudo zypper ins and hitting TAB, nothing happens! This always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be? When I try to install something and start writing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is this happening? There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
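
    A sketch of the sequence that usually brings a Tumbleweed system into a consistent state (the repo alias is whatever zypper lr shows for Tumbleweed):

        sudo zypper ref
        sudo zypper dup --from Tumbleweed   # switch everything to that repo's versions

    For the TAB issue: completion for commands and package names is supplied by the bash-completion package on openSUSE, which is not always installed by default; sudo zypper install bash-completion followed by opening a new shell typically restores it.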

    Read the article

  • CentOS 6.5 x64 + Ajenti + nginx + php-fpm + mysql setup unable to load the PHP page

    - by Francis
    I'm using Ajenti to set up a PHP web server. I can load the phpinfo() page, but when I try to load the PHP app copied from another server, I get a blank page, with this entry appearing in the access log. I have no clue what is wrong or how to troubleshoot further; please advise. x.x.x.x - - [28/May/2014:10:08:37 -0400] "GET /index.php HTTP/1.1" 200 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0" 2.6.32-431.1.2.0.1.el6.x86_64 cat /etc/redhat-release CentOS release 6.5 (Final) The vhost config: AUTOMATICALLY GENERATED - DO NOT EDIT! server { listen *:80; server_name test.com www.test.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; root /www; index index.html index.htm index.php; location ~ \.php$ { alias /www; fastcgi_index index.php; include fcgi.conf; fastcgi_pass unix:/var/run/php-fcgi-php-fcgi-0.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } }
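
    Two things worth checking, as a sketch: an HTTP 200 returning only 31 bytes usually means PHP hit a fatal error with display_errors off (the real message will be in the PHP or php-fpm error log), and alias inside a regex location is a known nginx trap because $fastcgi_script_name does not combine with alias the way it does with root. Since root /www is already set at server level, the alias line can simply go:

        location ~ \.php$ {
            # root /www is inherited from the server block; no alias needed
            fastcgi_index index.php;
            include fcgi.conf;
            fastcgi_pass unix:/var/run/php-fcgi-php-fcgi-0.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

    If the page is still blank, turning display_errors on temporarily in php.ini (or tailing the php-fpm log) should name the actual failure, often an extension or include path the old server had and this one doesn't.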

    Read the article

  • GitHub - commit local changes in local branch to remote branch

    - by user62046
    I use Git Shell in Windows 7, working in a branch named Save-Rotation. I then used git push origin Save-Rotation to push the changes to the remote. The result is posted at the end; it looks good. But when I went to my repository on the GitHub site, which is https://github.com/chiapas/sumatrapdf/tree/Save-Rotation I can't see any change in the repository tree or commit tree. How can I know whether the push to the remote was successful, and why is the repository page not updated? Here is the result in the command line: C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation]> git push origin Save-Rotation Counting objects: 167, done. Delta compression using up to 8 threads. Compressing objects: 100% (18/18), done. Writing objects: 100% (119/119), 27.43 KiB, done. Total 119 (delta 101), reused 119 (delta 101) To https://github.com/chiapas/sumatrapdf * [new branch] Save-Rotation -> Save-Rotation C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]> git push origin Save-Rotation Everything up-to-date C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]>
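
    Reading the output: the line * [new branch] Save-Rotation -> Save-Rotation means the branch was created on GitHub, and the second push's Everything up-to-date means there was nothing new left to send, so the push did succeed. A couple of ways to double-check, as a sketch:

        git ls-remote origin Save-Rotation   # shows the remote branch and its SHA
        git fetch origin
        git log origin/Save-Rotation -1      # newest commit GitHub has

    On the website, make sure the branch selector is on Save-Rotation; the default tree and commits views follow the repository's default branch (usually master), so changes pushed to another branch won't show there.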

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled version) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers. So I went and downloaded and compiled the latest version of OpenSSL (1.0.1c) and have it installed in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is to compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything with any other programs that require the old libraries. I thought I could do this by adding an entry in to /etc/ld.so.conf that pointed to the new libraries, but I think this will conflict with the existing ones. i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is because of issues with PHP cURLing to external servers and having issues with the latest OpenSSL libs thus requiring edits to our PHP code. Would love some guidance on how best to accomplish this.
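
    A sketch of one way to wire this up (paths from the question; the flags are the stock httpd 2.2 build options): build Apache against the alternate OpenSSL and bake the library path into the binary with an rpath, so /etc/ld.so.conf stays untouched and PHP, cURL and openssh keep resolving the system 0.9.8e:

        cd httpd-2.2.x
        LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib" \
        ./configure --enable-ssl --with-ssl=/opt/openssl-1.0.1c [your other options]
        make && make install
        # confirm which libssl/libcrypto the new binary actually resolves:
        ldd /path/to/new/httpd | grep -E 'libssl|libcrypto'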

    Read the article

  • How to recover my invisible HD again?

    - by pattulus
    I have done this several times now, but this time something bad happened. What I did: I installed Windows 7 on a 32GB partition on the slot 2 HD in my MacPro. Windows 7 made a 105MB partition… I knew this beforehand, but what I didn’t know was that this partition would end up on my slot 4 HD. My home folder, my private videos and some other stuff are on this 1TB drive. What I found out so far: I’m currently logged in as another admin, since my OS partition as well as the two other HDs aren't harmed. Disk Utility: … only shows the 105MB NTFS partition on this 1TB volume. It isn’t showing my old 1TB partition/ex-HD named "storehouse". Only the partition tab tells me that there is now 1TB of empty, free, unpartitioned space. Data Rescue II: … shows the volume as it used to be, with its old name "storehouse". A quick scan and a thorough scan both finished in 1 second, which leads me to the conclusion that nothing was actually deleted (» hope!). Data Rescue doesn’t even mention the damn "system reserved" partition. Drive Genius: … also shows the old partition and doesn’t mention the new one. But looking at the info, it tells me under "content": FDisk_partition_scheme (instead of Apple_partition_scheme). Well, d'oh…. Tech Tools: … doesn’t show the volume; otherwise I might have been tempted to press rebuild/repair. What to do next?? I think the best approach is to buy another 1TB HD and let DiskWarrior clone my old one onto it… just to be on the safe side. But what is the best thing to do after this… ???
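
    For what it's worth, the FDisk_partition_scheme content type says Windows replaced the drive's partition map with a PC-style MBR; the data blocks themselves are likely untouched, which matches Data Rescue still seeing "storehouse". A sketch of a non-destructive check (the device node is hypothetical; identify it first):

        diskutil list                # find the 1TB drive, e.g. /dev/disk3
        sudo testdisk /dev/disk3     # Analyse -> search for the lost partition

    TestDisk can rewrite the original partition entry in place, but cloning the drive first, as planned, is the safer order of operations.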

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and also to obtain better SEO results. We have now applied some minification measures, combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted a light + standard HTTP server configuration so the first can serve images and static content and the other one PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configs (bind mounts) as Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, allowing us to serve more and quicker, have more MinSpareServers running, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus letting the browser open more connections. The question is, assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache, having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out? Does the original code already include images linked from the CDN with absolute paths?
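
    For comparison, a sketch of the redirect-free variant, which is what the last question guesses at: large sites generally emit CDN URLs directly in the HTML (rewritten at page-generation time or by an output filter), so the browser's first request already goes to the light server, and the cdn vhost just mounts the same document root:

        <VirtualHost *:80>
            ServerName cdn.main.com
            DocumentRoot /var/www/main    # same bind-mounted content
        </VirtualHost>

    A 301 per image adds a full round-trip and an extra Apache hit for every asset, which tends to cancel the parallel-connection gain; redirects to images are generally not treated as a ranking penalty as such, but they do waste crawl budget, so rewriting the URLs in the HTML is the usual approach.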

    Read the article

  • program to track recent saved files or downloads?

    - by DiegoDD
    Back in Windows XP times, I used a program named Ava Find that could find any file on the system, and it had a really nice feature called "Scout Bot" that automatically finds new files, that is, recently created ones. More info here: AvaFind For example, if I saved or downloaded a file (in any program) but couldn't remember its name or where I put it, I just looked at the Scout Bot list: it showed me the most recent files, and I could open them from there or jump to their paths. Since I changed to Windows 7 (passing through Vista), the program no longer works. It simply keeps indexing forever and never finds anything. I don't know if it is incompatible with Windows 7 itself, or whether it's because my Windows 7 is 64-bit. I already tried to contact the AvaFind creators, but they never answered, and the program seems to be no longer supported (although the site still works). Now, is there a similar program that finds RECENT files? E.g. one listing the most recently saved files system-wide (or even limited to some folders or discs)? And that, of course, is confirmed to work with Windows 7 64-bit. I know there are many programs that can find any file, like Everything, but what I want is to find recent files and their paths without knowing their name or extension. Everything can also sort results by date, which is almost what I need, but if I get too many results it takes a while; plus, I STILL need to know the name or at least the extension of the file. What I want is simply a "most recently saved files" tool, i.e. a replacement for Scout Bot in Ava Find. Do you know any alternative?

    Read the article

  • Openvpn - stuck on Connecting

    - by user224277
    I've got a problem with my OpenVPN server: every time I try to connect to the VPN, I get a window with login and password boxes, so I type my login and password (the login being the Common Name (user1), the password being the challenge password from the client certificate). Logs: Jun 7 17:03:05 test ovpn-openvpn[5618]: Authenticate/Decrypt packet error: packet HMAC authentication failed Jun 7 17:03:05 test ovpn-openvpn[5618]: TLS Error: incoming packet authentication failed from [AF_INET]80.**.**.***:54179 Client.ovpn : client #dev tap dev tun #proto tcp proto udp remote [Server IP] 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert user1.crt key user1.key <tls-auth> -----BEGIN OpenVPN Static key V1----- d1e0... -----END OpenVPN Static key V1----- </tls-auth> ns-cert-type server cipher AES-256-CBC comp-lzo yes verb 0 mute 20 My openvpn.conf : port 1194 #proto tcp proto udp #dev tap dev tun #dev-node MyTap ca /etc/openvpn/keys/ca.crt cert /etc/openvpn/keys/VPN.crt key /etc/openvpn/keys/VPN.key dh /etc/openvpn/keys/dh2048.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt #push "route 192.168.5.0 255.255.255.0" #push "route 192.168.10.0 255.255.255.0" keepalive 10 120 tls-auth /etc/openvpn/keys/ta.key 0 #cipher BF-CBC # Blowfish #cipher AES-128-CBC # AES #cipher DES-EDE3-CBC # Triple-DES comp-lzo #max-clients 100 #user nobody #group nogroup persist-key persist-tun status openvpn-status.log #log openvpn.log #log-append openvpn.log verb 3 sysctl : net.ipv4.ip_forward=1
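
    One mismatch stands out, offered as a sketch of a fix: the server loads the TLS-auth key with direction 0 (tls-auth ... ta.key 0), but the client's inline <tls-auth> block never declares the complementary direction, and "packet HMAC authentication failed" is the classic symptom of that. With an inline key, the direction goes in a separate directive in client.ovpn:

        key-direction 1
        # equivalent to "tls-auth ta.key 1" when the key is a separate file

    Also, the certificate's challenge password is not used at connect time; a username/password prompt would normally correspond to an auth-user-pass setup, which this configuration doesn't use, so the prompt itself may be a red herring from the GUI.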

    Read the article

  • Postfix : outgoing mail in TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail over TLS (STARTTLS, in fact), but only to a specific destination. I tried "smtp_tls_policy_maps". This is the only line in my main.cf regarding TLS configuration, but it doesn't seem to work. Here is my main.cf file: queue_directory = /opt/csw/var/spool/postfix command_directory = /opt/csw/sbin daemon_directory = /opt/csw/libexec/postfix html_directory = /opt/csw/share/doc/postfix/html manpage_directory = /opt/csw/share/man sample_directory = /opt/csw/share/doc/postfix/samples readme_directory = /opt/csw/share/doc/postfix/README_FILES mail_spool_directory = /var/spool/mail sendmail_path = /opt/csw/sbin/sendmail newaliases_path = /opt/csw/bin/newaliases mailq_path = /opt/csw/bin/mailq mail_owner = postfix setgid_group = postdrop mydomain = ullink.net myorigin = $myhostname mydestination = $myhostname, localhost.$mydomain, localhost masquerade_domains = vercetty92.net alias_maps = dbm:/etc/opt/csw/postfix/aliases alias_database = dbm:/etc/opt/csw/postfix/aliases transport_maps = dbm:/etc/opt/csw/postfix/transport smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy inet_interfaces = all unknown_local_recipient_reject_code = 550 relayhost = smtpd_banner = $myhostname ESMTP $mail_name debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5 And here is my "tls_policy" file: gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high I also tried gmail.com encrypt My wish is to use TLS only for the gmail.com domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS when possible for all destinations with this line: smtp_tls_security_level = may it works, because I can see this line in the source of my mail: (version=TLSv1/SSLv3 cipher=OTHER); But I don't want to use TLS for the other domains, only for Gmail. Am I missing something in my conf? (I also tried with "hash:/etc/opt/csw/postfix/tls_policy", and it's the same.) Thanks a lot in advance
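
    A sketch of the usual checklist for per-destination TLS: indexed maps only take effect once postmap has built the index file (and Postfix has been reloaded), and the lookup can be tested directly:

        postmap dbm:/etc/opt/csw/postfix/tls_policy
        postfix reload
        postmap -q gmail.com dbm:/etc/opt/csw/postfix/tls_policy
        # expected output: encrypt protocols=SSLv3:TLSv1 ciphers=high

    If the -q lookup returns nothing, the map file or its format is the problem; if it returns the policy and mail still leaves in the clear, note that tls_policy is matched against the next-hop destination, so a relayhost or transport-map entry would change the key being looked up (here relayhost is empty, so the recipient domain should be the key).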

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error: Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate I checked Apache's error log, but it didn't say anything (it does now; see edit). My repositories are stored under: /var/www/svn/repos/ My website is stored under: /var/www/vhosts/x/... Here's the conf file for the subdomain: <Location /> DAV svn SVNParentPath /var/www/svn/repos/ AuthType Basic AuthName "Authorization Realm" AuthUserFile /var/www/svn/auth/svn.htpasswd Require valid-user </Location> Authentication works fine. Does anyone know what might be causing this? -- Edit So I restarted Apache (again), tried again, and now it gives me an error message, but it doesn't really help. Does anyone have an idea what it means? [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0] [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001] -- Edit 2 If I do svn info it doesn't give anything useful: [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/ Username: username Password for 'username': svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing also works fine). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, Subversion version 1.6.9 (r901367). -- Edit 3 I tried moving the repositories, but it didn't make any difference. selinux is disabled, so that isn't it either. -- Edit 4 Really? Nobody :(?
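
    For reference, a sketch of the usual cause: the "Repository moved permanently" message appears when the SVN client receives a plain HTTP 301 (typically mod_dir adding a trailing slash, or the request being answered by an ordinary DocumentRoot vhost, something Plesk's generated configs can easily cause) instead of a mod_dav_svn response. Serving the repositories under a sub-path avoids fighting other handlers for "/":

        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn/repos
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>
        # checkout URL becomes: http://svn.host.com/svn/reposname

    The second error ("The URI does not contain the name of a repository") is consistent with this: it is raised when the SVNParentPath root itself is requested rather than a repository beneath it.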

    Read the article

  • Implications of Multiple JobTracker nodes in a Hadoop cluster?

    - by Jim Dennis
    I get the impression that one can, potentially, have multiple JobTracker nodes configured to share the same set of MR (TaskTracker) nodes. I know that, conventionally, all the nodes in a Hadoop cluster should have the same set of configuration files (conventionally under /etc/hadoop/conf/, at least for the Cloudera Distribution of Hadoop (CDH)). Can we define multiple JobTrackers in mapred-site.xml? Something like: <configuration> <property> <name>mapred.job.tracker</name> <value>jt01.mydomain.not:8021</value> </property> <property> <name>mapred.job.tracker</name> <value>jt02.mydomain.not:8021</value> </property> ... </configuration> Or is there some other allowed syntax for this? What are the implications of doing this? Does each JobTracker get information about the load on each TaskTracker node? In other words, can the two JobTrackers coordinate their scheduling across the TT nodes based only on the gossiped information from the TTs, or would they need to talk to one another? Is this documented anywhere?
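
    A note on the proposed snippet, as a sketch of why it cannot work as hoped: Hadoop's Configuration object is a key-to-value map, so repeating <name>mapred.job.tracker</name> does not register two JobTrackers; the last definition simply wins. In classic MapReduce (MR1), each TaskTracker heartbeats to exactly the one JobTracker named by that property, and JobTrackers neither gossip nor coordinate with one another:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt01.mydomain.not:8021</value>  <!-- the single JobTracker for this cluster -->
          </property>
        </configuration>

    Sharing one pool of TaskTrackers between two JobTrackers therefore means partitioning the nodes into disjoint sets, each with its own mapred-site.xml pointing at its own JobTracker.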

    Read the article
