Search Results

Search found 36645 results on 1466 pages for 'local content'.


  • Accessing shared folders with OpenVPN

    - by Ergec
    This is my first attempt at configuring a VPN, so I have very little knowledge of this. The network with the CentOS server uses local IPs 192.168.123.*; the network with the Windows machine uses local IPs 192.168.1.*. I installed and configured my OpenVPN server on CentOS 5 and the client on a Windows machine, generated all keys, certificates, etc., transferred them to the client, and I'm able to connect to the server. Below is a screenshot of the client log. On the server side I can also see incoming packets with tcpdump -n port 1723. So I assume I did most of it correctly, but when I try to open shared folders using \\192.168.123.33 or \\network-name, I can't access them.
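
    Is this a routing problem on the server side? From what I've read, the server config may need to push a route to the server-side LAN, and the box may need forwarding enabled. This is a sketch of what I'd try (directive names from the OpenVPN docs; the subnets are the ones above):

        # /etc/openvpn/server.conf -- give VPN clients a route to the server-side LAN
        push "route 192.168.123.0 255.255.255.0"

        # on the CentOS box: let the kernel forward packets
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # and make sure iptables permits traffic coming in from the tun interface
        iptables -A FORWARD -i tun0 -j ACCEPT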

    Read the article

  • Grep /var/log for hacker/script-kiddie activity and e-mail it?

    - by Jason
    CentOS 6, Apache/2.2.15 (Unix). I'm thinking about how to automatically grep all the logs in /var/log/httpd once a day for hacker, phishing, etc. activity and e-mail the results to myself, so I can evaluate what I might need to do. But what patterns can I look for? For example, we don't run WordPress, yet we see a lot of attempts to access WordPress-related content, obviously probing for an exploit. Same with phpMyAdmin. I could grep repeatedly, matching common patterns we see:

        # grep -r -i wp-content /var/log/httpd/
        # grep -r -i php-my-admin /var/log/httpd/

    How do I e-mail myself the results of each grep command, or better yet, all grep results in a single e-mail?
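
    The best I've come up with so far is a small cron script that collects everything into one message (untested sketch; the pattern list and the recipient address are placeholders, and it assumes mailx is installed):

        #!/bin/bash
        # /etc/cron.daily/scan-httpd-logs -- mail one daily digest of suspicious requests
        PATTERNS='wp-content|wp-login|phpmyadmin|php-my-admin|xmlrpc'
        grep -r -i -E "$PATTERNS" /var/log/httpd/ \
            | mail -s "httpd log scan: $(hostname)" admin@example.com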

    Read the article

  • PHP hosting: some info required [closed]

    - by mtk
    I have recently been given control of a newly bought hosting space and the domain account. There is a technical team at the hosting site to help with problems, but that is a long process: log a ticket, wait a long time, and I don't get the correct answer on the first try. I was wondering if anyone has a helpful guide on how one should go about hosting a site. Anything that must be known with respect to cPanel? Any other useful material, or pointers? Just to give a few of the difficulties: the same PHP code that works well on my local machine gives the error "File not found" on the remote server, even though the file is indeed present, as I have ftp'ed all the files correctly; session_start() errors are output to the HTML page with the warning "Headers already sent"; and many more things that work well locally but not on the actual hosting server. So, if anyone has helpful material in this regard, as to what changes are required or what a programmer must be aware of from a hosting perspective, please let me know. Note: I am hosting a PHP site with a MySQL DB in a shared environment.

    Read the article

  • Exchange stops working after changing System Time

    - by L.M
    I am currently in a situation where the system time of my Windows machine differs by 6 hours from the actual local time. I tried changing the system time 6 hours back to match the actual local time, but when I do, Exchange stops working: it won't start anymore. When I change the time back, Exchange works again. Here is the error shown when I try to open the management console after changing the system time: "The following error occurred while attempting to connect to the specified server 'servername'. The attempt to connect to http://servername/PowerShell using 'Kerberos' authentication failed: Connecting to remote server failed with the following error message: Access is denied. For more information, see the about_Remote_Troubleshooting Help topic." Any solutions to this problem?
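
    My guess is the clock skew breaks Kerberos against the domain controller (it only tolerates a few minutes of drift), so the DC's time may be what's actually wrong. These are the checks I'd run (w32tm syntax from memory; the DC name is a placeholder):

        rem show the offset between this box and the domain controller
        w32tm /stripchart /computer:dc01.example.local /samples:5

        rem point the machine at the domain time hierarchy and resync
        w32tm /config /syncfromflags:domhier /update
        w32tm /resync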

    Read the article

  • How to forbid Postfix from sending to external domains [closed]

    - by elhoim
    I have a local Postfix server, and I want it to relay mail only to the local domain (localdomain.be):

        myhostname = localdomain.be
        mydomain = localdomain.be
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = $myhostname
        mydestination = $myhostname
        relay_domains = $mydomain
        default_transport = smtp
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/24
        mailbox_size_limit = 64000000
        message_size_limit = 1000000
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all
        smtp_host_lookup = native

    This configuration relays mail locally and to external destination domains just fine, but I would like it to be impossible to send to other domains (e.g. gmail.com). relay_domains is supposed to ensure that, but it does not seem to filter anything, since I can still send to my Gmail address.
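
    If I understand the docs, relay_domains only restricts what strangers may relay through the server; clients inside mynetworks can still send anywhere. A transport map that bounces everything non-local might be the blunt fix I'm after (untested sketch):

        # main.cf
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport -- deliver the local domain, bounce everything else
        localdomain.be  local:
        *               error:external delivery disabled

        # then: postmap /etc/postfix/transport && postfix reload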

    Read the article

  • How does Hadoop decide what its nodes' hostnames are?

    - by Dan R
    Currently the URLs generated by the JobTracker and NameNode return either hostnames like bubbles.local or just bubbles. These don't resolve unless the client machine lists them in its /etc/hosts file. When I run the hostname command on these machines, it returns a hostname complete with the domain (e.g. bubbles.example.com). Running a small Java test on these machines:

        InetAddress addr = InetAddress.getLocalHost();
        byte[] ipAddr = addr.getAddress();
        String hostname = addr.getHostName();
        System.out.println(hostname);

    produces the same output as the hostname command. Where else could Hadoop be grabbing a hostname to use in its JobTracker/NameNode UI? This is occurring in clusters with Hadoop 1.0.3 and 1.0.4-SNAPSHOT from early August. The machines are running CentOS release 5.8 (Final). The generated URLs I'm referring to look like http://example:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/ or http://example.local:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
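
    If Hadoop does its own reverse-DNS lookup on the node's primary interface instead of calling hostname, then maybe pinning the reported name per node would work. This is what I'm inclined to try, guessing at the 1.x property name from the docs (the value would obviously be per-host):

        <!-- hdfs-site.xml / mapred-site.xml on each slave -->
        <property>
          <name>slave.host.name</name>
          <value>bubbles.example.com</value>
        </property>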

    Read the article

  • need a different backup solution

    - by DigitalJedi
    I just built a new media/backup server using Ubuntu 12.04 64-bit. I installed a hard drive to be used only for music, pictures, and videos, and formatted it FAT32 so my one and only Windows PC could map those folders as network shares. My laptop, also running Ubuntu 12.04, is what I use the most, so new media lands on the laptop first. I've already got the music, videos, and pictures folders from my server mounting as shares on my laptop at boot, thanks to some fstab edits and sshfs. Now I want either an app or a script that backs up any new files I add to my local media folders to the mounted folders on my server. I've been Googling all day and found a few apps like rsync, but they seem to have issues with ext4-to-vfat backups. I thought maybe a script would be best, but I'm new to scripting on Linux and don't want to mess anything up. Basically I am looking for something that will back up only newly added files to the server; I figure I could schedule it once a week. There are some stipulations. For example, my local music folder has over 700 folders, one per artist/band, with subfolders inside those for albums. I want something smart enough to copy only newly added content, so I'm guessing the modified date would be a good condition if I were scripting. I'm rambling. Any suggestions would be GREATLY appreciated. I'm not finding anything to suit my needs, and I'm almost at the point of just learning bash scripting so I can write something, but then it would be a couple of weeks before I had a possible solution, and I'd like something in place sooner.
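
    From what I've read, rsync's trouble with vfat is mostly permissions and FAT's 2-second timestamp resolution, and there are flags for both. This is the invocation I'd try first (untested; the paths are mine):

        # sync new/changed media to the server share without fighting vfat
        rsync -rtv --modify-window=1 --no-perms --no-owner --no-group \
            ~/Music/ /mnt/server/music/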

    Read the article

  • Default profile for large

    - by user63434
    Hi, I am setting up a master image to clone to all client machines of the same type (Windows 7). I logged in as administrator, installed all the programs, changed the desktop settings, etc., but my local administrator profile is 244 MB in size, and it will become the default profile of the local machine after sysprep. We have a Windows Server 2003 box, and I want to use a mandatory profile for all users who log in, which means I need to copy this profile to the server so that any user logging in to the domain uses it. Loading a 244 MB profile is going to be very slow, since it is removed from the client when they log off, so the next time they log in it will take a long time again. Is there anything I can do? Can I copy just the bare minimum files from the default profile to the server? I am not sure what parts I need. I read that I must copy My Documents and My Documents/Pictures so that folder redirection will work. What else do I need to copy to the server? I also have Firefox with Xmarks sync, MS Word, etc. Thanks
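
    What I'm considering is copying the profile with the caches excluded, since that is presumably most of the 244 MB. Roughly this, from an elevated prompt (untested; the server and share names are mine, and I'm not sure the exclusion list is complete):

        rem copy the profile to the server, skipping junctions and local caches
        robocopy C:\Users\Administrator \\server2003\profiles\mandatory.v2 ^
            /E /XJ /XD "AppData\Local" "AppData\LocalLow" Temp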

    Read the article

  • Nginx config - serving index.html not working

    - by Bill
    I can't figure out how to redirect / to index.html. I've gone through the threads on Server Fault and I think I've tried every suggestion, including:

    - rewrite statements within location /
    - index index.html at the server level, within location /, and within the static-content location
    - moving the node.js proxy statements to location ~ /i instead of within location /

    Obviously something is wrong somewhere else in my configuration. Here is my nginx.conf:

        worker_processes 1;
        pid /home/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            error_log /home/logs/error.log;
            access_log /home/logs/access.log combined;
            include sites-enabled/*;
        }

    and my server config located in sites-enabled:

        server {
            root /home/www/public;
            listen 80;
            server_name localhost;

            # proxy request to node
            location / {
                index index.html index.htm;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://127.0.0.1:3010;
                proxy_redirect off;
                break;
            }

            # static content
            location ~ \.(?:ico|jpe?g|jpeg|gif|css|png|js|swf|xml|woff|eot|svg|ttf|html)$ {
                access_log off;
                add_header Pragma public;
                add_header Cache-Control public;
                expires 30d;
            }

            gzip on;
            gzip_vary on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_min_length 1000;
            gzip_disable "msie6";
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        }

    Everything else is working just fine. Requests get proxied to node correctly and static content is served correctly. I just need to be able to forward requests made to / to /index.html.
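
    The one thing I haven't tried yet is an exact-match location that short-circuits the proxy just for the bare / request. Would something like this work? (untested sketch):

        # serve /index.html for the bare / request; everything else still hits the proxy
        location = / {
            try_files /index.html =404;
        }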

    Read the article

  • Apache routing vhosts to /var/www

    - by FHannes
    One user at my site has reported that he reaches the content at /var/www when browsing to any of the vhosts on my server. As far as I'm aware, my Apache server does not contain a document root that references this folder. On top of that, this user seems to be the only one experiencing the issue. According to his ISP, the issue isn't caused by them; yet on his mobile connection he can access the site. When browsing to my server's IP, he also receives the correct content from the default vhost. What could be the possible causes of this issue, and how can I get it to stop? I've explored pretty much every option I could think of.
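
    One check that might help rule out name-based vhost matching: request the site from another machine with an explicit Host header, bypassing DNS entirely (hostname and IP below are placeholders). If this returns the right content while his browser doesn't, the problem is on his path (proxy or DNS), not in the Apache config:

        # ask the server's IP for a specific vhost, ignoring DNS
        curl -s -H "Host: www.example.com" http://203.0.113.10/ | head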

    Read the article

  • Video streaming over multi display units

    - by ramdaz
    We have to share video across around 4-8 terminals at a public facility, where we need to display live video from within the facility, display messages (advertisements), and also play (non-live) videos, all controlled centrally from another location. We can handle the central-location control over the Internet, over ssh. What we want to do is connect cameras to a computer and use that computer to drive multiple display units. We need to do live titling if possible. Once the live local telecast, which usually takes about an hour or two a day, is finished, we would like to play other videos locally off the PC server. Preferably everything should run off Linux, since budgets are very constrained. Addendum: it's not over a WAN, it's over a local area. I'd prefer not to use a LAN; we would rather use coaxial cable if possible. The reason is that with a LAN I need some kind of networking device, at least a thin client.

    Read the article

  • Installing multiple versions of a shared library

    - by nsfyn55
    I am running Ubuntu 10.04 and I want to use tmux 1.6. tmux depends on libevent 2. My solution was to compile libevent2 and drop it into /usr/local/lib, then compile tmux against this lib and drop it into /usr/local/bin. This works great until... I restart. This is just an assumption on my part, but it seems that other binaries are now linking against the libevent2 library, presumably because it's on the library path. Because there are 60+ packages with libevent1 dependencies, this causes my install to basically lose its mind. Is there an idiomatic way to run an application whose core library dependency is a different version? Should I just statically link the lib?
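
    The route I'm leaning toward is baking the library path into the tmux binary with an rpath, so nothing else on the system ever sees libevent2 (a sketch, assuming the usual autotools flags work for tmux 1.6):

        # build tmux against the private libevent2 without touching ld.so's view
        ./configure CFLAGS="-I/usr/local/include" \
                    LDFLAGS="-L/usr/local/lib -Wl,-rpath,/usr/local/lib"
        make && sudo make install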

    Read the article

  • Apache Bench length failures

    - by Laurens
    I am running ApacheBench (ab) against a Ruby on Rails XML-RPC web service running on Passenger via mod_passenger. All is fine when I run 1000 requests without concurrency: ab indicates that all requests complete successfully with no failures. When I run ab again with a concurrency level of 2, however, requests start to fail due to content length; I see failure rates of 70-80% when using concurrency. This should not happen: the requests I send to the web service should always produce the same response, and I have used cURL to verify that this is in fact the case. My Rails log shows no errors either, so I am curious to see what content ab actually received and interpreted as a failure. Is there any way to print these failures?
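
    From the ab documentation, a "length" failure only means the body size differed from the first response received, which dynamic pages can trigger even when nothing is wrong. Bumping verbosity should show the headers that came back, and newer ab builds also have -l to accept variable-length bodies (sketch; the URL is a placeholder):

        # -v 4 prints the request/response headers for inspection
        ab -n 1000 -c 2 -v 4 http://host/endpoint 2>&1 | tee ab-debug.log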

    Read the article

  • Error 1069: the service did not start due to a logon failure

    - by Si.
    Our CruiseControl.NET service on a Windows 2003 Server (VMware virtual machine) was recently changed from a service account to a user account to allow a new part of our build process to work. The new user has "Log on as a service" rights (verified under Local Security Settings - Local Policies - User Rights Assignment), and the user's password is set to never expire. The problem I'm facing is that every time the service is restarted, I get the 1069 error described in this question's subject. I have to go into the properties of the service (Log On tab) and re-enter the password, even though it hasn't changed and the user already has the appropriate rights. Once I enter the password and apply the changes, a prompt appears telling me that the user has been granted log-on-as-a-service rights, and the service then starts with no problems. Not a show stopper, but a pain nonetheless. Why isn't the password persisting with the service?
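
    A workaround I'm considering is re-applying the credentials from a script after each restart instead of through the GUI (the service name is ours; note that sc requires the space after each equals sign):

        rem re-apply the service account credentials non-interactively
        sc config CCService obj= "DOMAIN\builduser" password= "secret"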

    Read the article

  • Excel file has disappeared from SharePoint document library

    - by user40389
    Two days ago an Excel file went missing from a document library. This document library had only this one file, and now it's gone. When I go to All Site Content - Document Libraries, it shows that there is still one file in the library, so something seems screwy. Is there anything I can do to get this item to reappear? This is MOSS 2007.
    Recycle bin: must be the default; we never changed anything, and I really don't know how to find its settings.
    Document library versioning settings: content approval: no; document version history: create major versions; require check out: yes.

    Read the article

  • UCS Documentation: Home1 or Home2? You Decide.

    - by joesciallo
    How you go about finding information can be a very personal affair. We each have our own style of locating content, we each have our particular bent. But when it comes to getting started finding technical information, sometimes the simplest ways are best, especially if you are relatively new to a product or technology, and you just need to get going quickly with that Installation Guide or those Release Notes. With that in mind, I recently created an alternative home page for the Unified Communications Suite documentation. You can now have your Home2, in addition to the original, more wiki-fied Home1. I would recommend Home2 for those who are used to, and are more comfortable with, the spreadsheet-like view of manuals: here you'll find those familiar titles and the option to either view on the wiki or download the equivalent PDF file. Once you get familiar with what guides are available for the UCS component products, then move on to the Home1 layout, and start using some of the more advanced techniques for finding content in a wiki. Either way, you should be able to locate the "thing" you are looking for.

    Read the article

  • Cannot connect to domain despite successful pings

    - by egtann
    Pings to my domain name work, but I can't connect via http. I've been trying various methods for a week now but haven't come up with anything that worked. Any idea what's causing this?

        # /etc/apache2/httpd.conf
        ServerName machinename.local
        <VirtualHost *:80>
            ServerName chipperapp.com
            DocumentRoot "/Users/myusername/appname/public"
            <Directory "/Users/myusername/appname/public">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

        # /etc/hosts
        127.0.0.1 chipperapp.com

    I can access the app from my local machine, but not from any other. I've set up dynamic DNS. Thanks!
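
    One thing I realize is that the /etc/hosts entry only helps my own machine; everyone else depends entirely on the dynamic DNS record and the router's port forwarding. Checks I plan to run from an outside box:

        # does the name resolve to my public IP from outside?
        dig +short chipperapp.com
        # does anything answer on port 80?
        curl -sv --max-time 5 http://chipperapp.com/ -o /dev/null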

    Read the article

  • Mirrored servers in data centers nationwide -- how?

    - by Sysadmin Evstar
    I flunked my IT interview by getting this question wrong. I thought that in each metropolitan area, an http://google.com request goes to the ISP's DNS server, which somehow returns the IP address of one of several geographically nearby HTTP servers, and then something internally rolls over to the next available local Google server. But I could not explain where the table of available local Google servers is actually cached, or the details of the IP address rollover, or how they could manually take a server out of the rotation from anywhere. So, what should I be reading now so I can ace this question next time? Also, what daemons run on these machines 24/7 to keep all those mirrored database disks synchronized?
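
    Part of the answer seems to be observable from the outside: the authoritative DNS hands different answers to resolvers in different places, which you can watch by querying public resolvers directly (the two IPs below are Google's and OpenDNS's public resolvers):

        # compare the answers two different public resolvers receive
        dig +short google.com @8.8.8.8
        dig +short google.com @208.67.222.222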

    Read the article

  • Adding a custom script on ESXi 5.0

    - by Quzar
    I have an ESXi server that I would like to run a custom script on every boot, containing esxcli and other commands. I have tried adding the script to init.d and creating an rc.local.d folder with a script, but the /etc folder gets rebuilt on startup. I've also tried modifying state.tgz and local.tgz in the /bootbank folder to force these files to appear, but that does not seem to work either. Is there any way I can run custom commands on boot? Note: I've tried the advice in "ESXi boot process / state storage" to no avail; it seems the mechanism changed between 4.1 and 5.0.
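
    One thing I haven't tried yet, based on a VMware KB article: on 5.0 the supported hook is apparently /etc/rc.local itself (the rc.local.d directory came later, in 5.1), and the edit has to be folded into the persisted state by the auto-backup script or it is lost at reboot. A sketch (the esxcli line is just an example command):

        # append the command to /etc/rc.local (ESXi 5.0), then persist the change
        echo 'esxcli network firewall set --enabled false' >> /etc/rc.local
        /sbin/auto-backup.sh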

    Read the article

  • Building MySQL with Boost on Windows

    - by user13177919
    As you've probably heard already, MySQL needs Boost to build. However, in the good ol' MySQL tradition, the above link only gives you the instructions for building on Linux, and completely ignores the fact that there are other OSes, too, that people develop on. To fill that gap, I've compiled a small step-by-step guide for doing it on Windows. Note that I always, as a principle, build out-of-source. The typical setup I have is:

        bzr clone lp:~mysql/mysql-server/5.7 mysql-trunk
        cd mysql-trunk
        mkdir bld
        cd bld
        cmake -DWITH_DEBUG=1 -DMYSQL_PROJECT_NAME=mysql-trunk ..
        devenv /build debug mysql-trunk.sln

    This has been tested to work for a 32-bit compile using VS2013 on 64-bit Windows 7. Note that you'll need other things too (bison, possibly openssl, etc.) that I will assume you already have set up. Steps:

    1. Download Boost 1.55.0. It's the *only* version that is known to work currently.
    2. Extract boost_1_55_0/ from the zip to C:\boost\boost_1_55_0.
    3. Go to Control Panel/System/Environment variables and set WITH_BOOST=C:\boost\boost_1_55_0 in User variables. Make sure you restart any open command-line windows after this!
    4. If you're upgrading from a non-Boost build, remove your bld/ directory and create a new one.
    5. Run cmake as you typically would. You should get:

        -- Local boost dir C:/boost/boost_1_55_0
        -- Local boost zip LOCAL_BOOST_ZIP-NOTFOUND
        -- BOOST_VERSION_NUMBER is #define BOOST_VERSION 105500
        -- BOOST_INCLUDE_DIR C:/boost/boost_1_55_0

    6. Build as normal (devenv /build debug ...). It should work.

    Read the article

  • What does the red x icon mean next to a user in folder permissions (Windows 7)

    - by Scott Szretter
    While trying to debug various strange issues on a machine, I found something odd. When I go to C:\Users\administrator, open Properties, Security tab, it lists the users: the local admin account, SYSTEM, and "administrator", which is the domain administrator account. It all looks fine in terms of permissions (full control, etc.) compared to other machines. The one difference is a small red circle with an X to the left of the user icon/name. Additionally, various folders under there report access denied: for example, My Documents! Even logged in as the local machine administrator account (which is not named "administrator"), I am unable to change the permissions; it says access denied. Any ideas what this means and how to fix it? I even tried re-joining the machine to the domain.
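
    If it comes down to forcing ownership back, this is the pair of commands I'd expect to need from an elevated prompt (recursive, so use with care):

        rem take ownership of the profile tree, then reset the ACLs to inherited defaults
        takeown /F C:\Users\administrator /R /D Y
        icacls C:\Users\administrator /reset /T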

    Read the article

  • Install gettext-0.17-1.el5.i386.rpm on CentOS 5.8 i386 gives "/usr/bin/python is needed" error

    - by viji
    I removed yum by mistake, so now I'm installing all the dependencies needed by yum manually. One of them is gettext-0.17-1.el5.i386.rpm, and when I try to install it, it gives the following error:

        error: Failed dependencies:
                /usr/bin/python is needed by gettext-0.17-1.el5.i386

    which is weird, since I've already installed Python 2.6 on the system:

        # python -V
        Python 2.6.8
        # which python
        /usr/local/bin/python

    So I copied /usr/local/bin/python to /usr/bin/python, and even after that I get the same error. Any help is appreciated.
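
    As far as I can tell, rpm resolves file dependencies against its own database of installed packages, not against the filesystem, which would explain why copying the binary changes nothing. The escape hatch I'm considering, at the cost of rpm not recording the dependency:

        # skip dependency checks for this one package
        rpm -ivh --nodeps gettext-0.17-1.el5.i386.rpm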

    Read the article

  • How can I log the response header and body in Apache?

    - by acme
    I need to determine whether the server (Apache 2) returns the full contents of a page, along with its correct header, or not. I have a PHP script that executes successfully, but the browser receives only half of the HTML content; it's simply cut off. The client infrastructure is pretty complicated, using Novell BorderManager proxies and such. To be sure the server is doing its job, I want to log both the header and the body of the response. How can I achieve this? I looked into Apache's mod_log_config module (which is already installed and ready to be used), but honestly I didn't manage to configure it to output the header and body anywhere. Edit: I managed to log a header with

        LogFormat "%h %l %u %t \"%r\" %s %b \"%{HEADER_NAME}o\"" common2
        CustomLog /var/log/apache2/response.log common2

    but unfortunately the mod_log_config formats don't support the response body. Can anyone help?
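
    The module I keep seeing recommended for this is mod_dumpio, which writes everything Apache reads and writes into the error log. Roughly what I'd try on 2.2 (directive names taken from the docs; far too verbose to leave on in production):

        # httpd.conf -- dump all response data to the error log (Apache 2.2)
        LoadModule dumpio_module modules/mod_dumpio.so
        DumpIOOutput On
        DumpIOLogLevel debug
        LogLevel debug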

    Read the article

  • How to post JSON object to a URL without cURL in PHP? [closed]

    - by empyreanphoenix
    I have written send-SMS code which uses an available SMS API. I need to POST a string in JSON format (the string in $json) to a URL (specified in the $url var), but I am getting the following error:

        String index out of range: -1

    I understand that the error arises when requesting an index that isn't there, but I don't see how that applies in this case. Please help. Note: name1, name2 and name3 are the sender name, phone number and content respectively. Thanks.

        // opening of the snippet was cut off; wrapper assumed from the usage below
        function do_post_request($url, $data, $key, $optional_headers = null) {
            $params = array(
                'http' => array(
                    'method'  => 'POST',
                    'content' => $data,
                    'header'  => "Authentication: $key\r\n" .
                                 "Timeout: 50000\r\n" .
                                 "Content-Length: " . strlen($data) . "\r\n"
                )
            );
            if ($optional_headers != null) {
                $params['http']['header'] = $optional_headers;
            }
            $ctx = stream_context_create($params);
            try {
                $fp = fopen($url, 'rb', false, $ctx);
                $response = stream_get_contents($fp);
                echo $response;
            } catch (Exception $e) {
                echo 'Exception: ' . $e->getMessage();
                echo "caught";
            }
            echo "done";
            return $response;
        }

    Read the article

  • Make Chrome always open PDFs itself

    - by jdm
    Hi, I'm looking for a way to make Google Chrome always open PDFs with its internal viewer when I click a link, as opposed to downloading them to the default location. It works with most URLs, but some servers set a special header to force the file to be downloaded ("Content-Disposition: attachment;", e.g. http://www.uni-goettingen.de/en/46260.html). What I want is the opposite of this question: "Stop PDFs from displaying inside Google Chrome", or what is asked for here, but applied to Chrome: "How to ignore 'Content-Disposition: attachment' in Firefox". Btw., I'm running Chrome 8.0.552.0 dev on Ubuntu 10.4.

    Read the article
