Search Results

Search found 56300 results on 2252 pages for 'local working'.

Page 640/2252

  • Wakeup on LAN script works from Mac, but not Windows 7

    - by illyich
    I have a Linux server on my local network that is set up for Wake-on-LAN. I copied this script verbatim, just replacing the MAC address in the example usage. When I run the script on a Mac, the server wakes up. When I run it from Windows 7 (32-bit Ultimate) it does nothing (note that the script DOES run; I added a debug raw_input() to confirm).
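
    For context, a Wake-on-LAN "magic packet" is just six 0xFF bytes followed by the target MAC address repeated 16 times, sent as a UDP broadcast (typically to port 7 or 9). Below is a minimal Python 3 sketch of that kind of sender, not the poster's actual script; the MAC address is a placeholder. On Windows, an outbound firewall rule or the way 255.255.255.255 broadcasts are routed on the chosen interface are often worth checking first.

        #!/usr/bin/env python3
        # Minimal Wake-on-LAN sender sketch; the MAC address below is a placeholder.
        import socket

        def wake(mac="01:23:45:67:89:ab", broadcast="255.255.255.255", port=9):
            # Magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times.
            hw = bytes.fromhex(mac.replace(":", ""))
            packet = b"\xff" * 6 + hw * 16
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))
            sock.close()

        if __name__ == "__main__":
            wake()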

    Read the article

  • MySQL Error: 1067

    - by Vicky
    When I try to start the MySQL Server service it gives an error: "Could not start the MySQL server on the local computer. Error 1067: The process terminated unexpectedly." I need to fix the problem without having to uninstall the MySQL server. Is there a way to do it?
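
    For what it's worth, error 1067 is just the Service Control Manager's generic "process exited" wrapper; the actual reason is normally written to the <hostname>.err log inside the MySQL data directory. A small, hypothetical Python 3 sketch for pulling the tail of that log (the data-directory path is a placeholder for this installation):

        #!/usr/bin/env python3
        # Print the tail of each MySQL error log found in the data directory.
        import glob
        import os

        DATA_DIR = r"C:\Program Files\MySQL\MySQL Server 5.x\data"  # placeholder path

        # MySQL normally writes <hostname>.err inside the data directory.
        for path in glob.glob(os.path.join(DATA_DIR, "*.err")):
            with open(path, errors="replace") as log:
                lines = log.readlines()
            print("==>", path)
            print("".join(lines[-40:]))  # the last lines usually state why startup failed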

    Read the article

  • Google Chrome on Mac OS X, full screen option is greyed out, why?

    - by Chris Kimpton
    When I first installed Chrome the menu option for "full screen" was enabled and worked, but for the last few months it has been greyed out/disabled. Is it just me, or perhaps there is a bug which is why it's disabled? I tried deleting the installed version and downloading a new copy, to no avail - but perhaps there is a setting in my local preferences that could be disabling it. Thanks in advance, Chris

    Read the article

  • How to tell credentials used for a Network Mapping?

    - by shanecourtrille
    I have a network mapping that doesn't appear to work. When I connect to the mapping I get access denied when I try to create a folder. When I created the mapping I told it to log in as another account, and I have verified that account has the proper rights on the server side of things. How can I verify that my local machine is connecting with the right credentials?

    Read the article

  • PHP at the root directory using Nginx on Linode and Ubuntu 12.04

    - by Steve Kinney
    I originally set up my Linode for the Sinatra applications I was developing, served with Phusion Passenger, and it works great for that. However, as time goes on, I find myself needing just a wee bit of PHP to do a server-side thing here or there. My basic setup was based on this Linode recipe (I copied and pasted the parts that I needed; I did not install Redis and Node). If I go to http://scholarsnyc.com/index.php everything works great. If I just go to the base URL, however, I get a 403 Forbidden error (I have a vanilla HTML page there for now). I've played with file permissions, and the same file works if I call it directly. I've done my homework and nothing I try seems to work. I'm sure there is an obvious error. I'm also sure there are some rookie mistakes in my Nginx configuration (some of them are artifacts of trying different fixes from my research):

        user www-data www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        upstream php {
            server 127.0.0.1:9001;
        }

        http {
            passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12;
            passenger_ruby /usr/local/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            index index.php index.html index.htm;
            sendfile on;
            keepalive_timeout 65;

            server {
                server_name localhost scholarsnyc.com www.scholarsnyc.com;
                root /srv/www/scholarsnyc.com/public;
                location / {
                    index index.php;
                }
                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                server_name data.scholarsnyc.com;
                root /srv/www/data.scholarsnyc.com/public;
                passenger_enabled on;
            }

            server {
                server_name tech.scholarsnyc.com;
                root /srv/www/tech.scholarsnyc.com/public;
                location / {
                    root /srv/www/tech.scholarsnyc.com/public;
                    index index.php index.html index.htm;
                }
            }
        }

    Any other optimizations are also appreciated. I literally don't know what to do at this point.

    Read the article

  • Complete Adblock in Google Chrome

    - by James
    I use a Mac and I want to completely block ads in my Google Chrome browser. What I mean by completely is that I don't want the ads to be just hidden. I understand that Google Chrome adblock add-ons currently hide ads but can't prevent them from downloading. Is there a workaround for this problem? Also, I use Firefox as my primary browser and I am on a proxy server on a local network.

    Read the article

  • setting apache environment variable

    - by Kiran
    Hi, my hosting environment runs Apache/2.2.14 (Unix) and I am modifying ./usr/local/apache/conf/httpd.conf to set an environment variable, then restarting the server:

        SetEnv XML-RPC-IPs 193.45.32.21

    I set it as the first entry in the file and restarted the server, but even after restarting, when I try to print it the value still comes back blank. Am I missing anything?

        echo "My IP address ".$_SERVER['XML-RPC-IPs'];

    Thanks for your help. Regards, Kiran

    Read the article

  • Sent items listed by sender's name (my name!) not recipient in Outlook for Mac 2011

    - by user568458
    Outlook 2011 on a Mac (OS X 10.7 Lion), on a (mostly PC) office network using a Microsoft Exchange server. In Outlook 2011, in the network "Sent Items" folder, all emails are listed showing the sender's name (my name), not the recipients' names. That means every single email is headed with my own name, like this:

        My Name 08/09/2012 Some subject line [flag]
        My Name 08/09/2012 Re: Another subject line [flag]
        My Name 07/09/2012 Re: different subject line [flag]
        My Name 07/09/2012 different subject line [flag]

    ...and so on. There's no clue at all as to who these emails were to until I open each and every one. I guess it's reassuring to be told that every email I sent was sent by me, and if I were ever to forget my own name while browsing my sent emails folder it'd be super useful, but it's not very helpful for navigating sent emails. Is this normal for Office for Mac 2011? How can I fix this, so that the list shows the recipients' names instead of endlessly reminding me of my own name? Things I've found while researching this: it can also happen in Outlook for Windows, where it's easily fixed by resetting the field; that method doesn't work on Mac because the fields can't be selected separately. It seems this can also happen in Apple Mail, and a lot of people seem stumped by it. I can't find anything on this specific to Outlook 2011 or Outlook for Mac in general. A simple guide on how to fix this would be the best answer, but I'd also welcome any knowledgeable thoughts from Microsoft Exchange Server people on whether this sounds like an Outlook settings issue I can fix on my machine, or some issue relating to how the Mac gets data from the server and network. The fact that Apple Mail users have encountered the same issue with no apparent fix makes me think the problem might be in the network rather than the mail client, but that's way beyond my limited knowledge of these things. I don't know whether the local ("On my computer") Sent Items folder has the same problem, as it's configured so that no emails except drafts are ever stored in these local folders. Drafts saved locally are listed by recipient as expected.

    Read the article

  • How to copy a 200GB file faster?

    - by RainDoctor
    I have a 200GB .tgz file on server A (RHEL 5.2). I want to transfer that file to server B (RHEL 5.3). Server B is on ESXi 4 Update 1; I gave that Server B VM 10GB, with 4 vCPUs. Server A and Server B are connected directly with an ethernet cable using local IP addresses (no switch involved). scp gives me about 3Mbps. Is there a way to get 400Mbps?
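
    At those speeds the bottleneck is usually scp's encryption and windowing overhead rather than the cable. A common trick on a trusted, directly-cabled link is to stream the file over a plain TCP socket (the tar-over-netcat approach). Here is a minimal, hypothetical Python sketch of that idea; the port, addresses, and file paths are placeholders, and there is no encryption or integrity check, so it only makes sense between machines you control.

        #!/usr/bin/env python3
        # Minimal unencrypted TCP file transfer sketch.
        # Run the receiver on server B first, then the sender on server A.
        import socket
        import sys

        CHUNK = 1 << 20  # 1 MiB read/write buffer

        def receive(port, out_path):
            # Accept a single connection and stream everything it sends to disk.
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen(1)
            conn, _addr = srv.accept()
            with open(out_path, "wb") as out:
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    out.write(data)
            conn.close()
            srv.close()

        def send(host, port, in_path):
            # Connect to the receiver and push the file in large chunks.
            sock = socket.create_connection((host, port))
            with open(in_path, "rb") as src:
                while True:
                    data = src.read(CHUNK)
                    if not data:
                        break
                    sock.sendall(data)
            sock.close()

        if __name__ == "__main__":
            if sys.argv[1] == "recv":   # e.g. python xfer.py recv 9000 /data/file.tgz
                receive(int(sys.argv[2]), sys.argv[3])
            else:                       # e.g. python xfer.py send 192.168.0.2 9000 file.tgz
                send(sys.argv[2], int(sys.argv[3]), sys.argv[4])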

    Read the article

  • HTTP caching headers: how should must-revalidate work?

    - by Bobby Jack
    Using trac, I'm getting a response with the following header:

        Cache-control: must-revalidate

    Moreover, no 'Expires' header is being sent. Our local proxy, however, is caching these responses, so when an edit is made, pages need to be 'hard refreshed' to update. Is the proxy misbehaving? Other headers that might be relevant:

        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Keep-Alive: timeout=15, max=100
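
    Worth noting: must-revalidate only forbids a cache from serving a response after it has become stale; with no Expires or max-age present, a proxy is allowed to assign a heuristic freshness lifetime, which would explain the behaviour described. A quick way to see what each hop actually returns is to fetch the page once directly and once through the proxy and compare the caching headers. A small Python 3 sketch of that check (the URL and proxy address are placeholders):

        #!/usr/bin/env python3
        # Compare cache-related response headers fetched directly vs. via the proxy.
        import urllib.request

        URL = "http://trac.example.com/wiki/SomePage"  # placeholder trac page
        PROXY = "http://proxy.example.com:3128"        # placeholder local proxy

        def fetch_headers(url, proxy=None):
            handlers = []
            if proxy:
                handlers.append(urllib.request.ProxyHandler({"http": proxy}))
            opener = urllib.request.build_opener(*handlers)
            with opener.open(url) as resp:
                # Keep only the headers that govern caching behaviour.
                return {k: v for k, v in resp.headers.items()
                        if k.lower() in ("cache-control", "expires", "etag",
                                         "last-modified", "age", "via")}

        print("direct:   ", fetch_headers(URL))
        print("via proxy:", fetch_headers(URL, PROXY))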

    Read the article

  • Allow email from a particular sender through spam filter

    - by Greg
    We are running Exchange 2010 and are using the built-in anti-spam feature. We have set up Content Filtering, IP Block List Providers, Sender ID, and Sender Reputation, and it filters out most of the junk, but it also quarantines all emails from one of our customers. They are being quarantined by the Content Filter agent (report below). How can I add an exception for this email address to the Content Filter? I can see how to set up an exception for a delivery address ("Don't filter messages sent TO the following recipients") but I want to add [email protected] to our safe list. I don't want to add the whole domain, as it is a very popular ISP in Australia and we often get junk from them. Filter Report:

        > Diagnostic information for administrators:
        >
        > Generating server: something.com
        >
        > [email protected]
        > #550 5.2.1 Content Filter agent quarantined this message ##
        >
        > Original message headers:
        >
        > Received: from icp-osb-irony-out4.external.iinet.net.au (203.59.1.220)
        >   by server.local.something.com.au (192.5.0.105) with Microsoft SMTP Server id
        >   14.1.218.12; Mon, 5 Nov 2012 02:40:40 +1100
        > X-IronPort-Anti-Spam-Filtered: true
        > X-IronPort-Anti-Spam-Result: AscOALeLllB8qwLw/2dsb2JhbABEKYUFhiigRQOWCwQEgQiBCIIZFAEBTiwCCAIBBwEIFDkBBBoqARoCAQIDAYd4uEuRXGEDiCWFT44UijeDAw
        > X-IronPort-AV: E=Sophos;i="4.80,710,1344182400"; d="scan'208,217";a="55137861"
        > Received: from unknown (HELO asdf83c05c53a3) ([124.171.2.240]) by icp-osb-irony-out4.iinet.net.au with ESMTP; 04 Nov 2012 23:40:26 +0800
        > Message-ID: <E8C866D0299E4BCB8B156723893EB735@asdf83c05c53a3>
        > From: Customer <[email protected]>
        > To: 'Person' <[email protected]>
        > Subject: A long sentance
        > Date: Mon, 5 Nov 2011 06:07:57 +1100
        > MIME-Version: 1.0
        > Content-Type: multipart/alternative; boundary="----=_NextPart_000_0005_01C5F962.3CD09120"
        > X-Priority: 3
        > X-MSMail-Priority: Normal
        > X-Mailer: Microsoft Outlook Express 6.00.2900.5931
        > X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157
        > Return-Path: [email protected]
        > Received-SPF: None (server.local.something.com.au: [email protected] does not designate permitted sender hosts)

    Read the article

  • Exchange 2003 default permissions for ANONYMOUS LOGON and Everyone

    - by Make it useful Keep it simple
    ANONYMOUS LOGON and Everyone have the following top-level permissions on our Exchange 2003 server:

        Read
        Execute
        Read permissions
        List contents
        Read properties
        List objects
        Create public folder
        Create named properties in the information store

    Are these the "default" settings? In particular, are the "Read" and "Execute" permissions a problem? We have a simple small-business setup: Outlook clients connect to the server on the local network, and OWA is used from outside the network for browser and smartphone access. Thanks

    Read the article

  • Windows 7 - want full named admin account

    - by soupagain
    I want a full, named admin account on Windows 7. An account in the default local Administrators group still behaves as a limited account. You can enable the hidden full Administrator account, but I can't see how to rename it. How can I get a full admin account on Windows 7 in my name? (Yes, I know there are reasons not to do this, but that's not my question ;)

    Read the article

  • small mail daemon for windows

    - by abolotnov
    I'm looking for a small mail daemon for Windows that could do a few simple things for me:

        - hang on POP3 (preferably) or IMAP and check e-mails from me
        - run commands I send via e-mail and return their execution status/output
        - save files I attach to a local folder

    I suspect there must be something that can do this (I guess Outlook can, but I don't have it on this box) - I'm perhaps using the wrong keywords.
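
    Nothing off the shelf springs to mind, but the three requirements fit in a few dozen lines of script. Below is a rough, hypothetical Python 3 sketch of that kind of poller; the POP3 host, credentials, and save folder are placeholders, it prints command output instead of mailing it back, and there is no check that a message really came from you, so it would need sender verification before being left running.

        #!/usr/bin/env python3
        # Sketch: poll a POP3 mailbox, run commands from message bodies,
        # and save any attachments to a local folder.
        import email
        import os
        import poplib
        import subprocess

        HOST = "pop.example.com"    # placeholder POP3 server
        USER = "bot@example.com"    # placeholder account
        PASSWORD = "secret"         # placeholder password
        SAVE_DIR = r"C:\incoming"   # where attachments get written

        def handle_message(raw_bytes):
            msg = email.message_from_bytes(raw_bytes)
            for part in msg.walk():
                filename = part.get_filename()
                if filename:
                    # Attachment: write it out under SAVE_DIR.
                    with open(os.path.join(SAVE_DIR, filename), "wb") as fh:
                        fh.write(part.get_payload(decode=True))
                elif part.get_content_type() == "text/plain":
                    # Treat the plain-text body as a command to run.
                    cmd = part.get_payload(decode=True).decode(errors="replace").strip()
                    if cmd:
                        result = subprocess.run(cmd, shell=True,
                                                capture_output=True, text=True)
                        print("ran %r -> exit %d\n%s"
                              % (cmd, result.returncode, result.stdout + result.stderr))

        def poll_once():
            box = poplib.POP3_SSL(HOST)
            box.user(USER)
            box.pass_(PASSWORD)
            count = len(box.list()[1])
            for i in range(1, count + 1):
                raw = b"\n".join(box.retr(i)[1])
                handle_message(raw)
                box.dele(i)  # remove processed messages from the mailbox
            box.quit()

        if __name__ == "__main__":
            poll_once()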

    Read the article

  • How to change suexec root directory from "/var/www" to "/home"?

    - by Oudin
    Hi, I've installed suexec on Ubuntu 12.04 with:

        apt-get install apache2 apache2-suexec libapache2-mod-fcgid php5-cgi

    However, when I run the following command: sudo /usr/lib/apache2/suexec -V I get the following info:

        -D AP_DOC_ROOT="/var/www"
        -D AP_GID_MIN=100
        -D AP_HTTPD_USER="www-data"
        -D AP_LOG_EXEC="/var/log/apache2/suexec.log"
        -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
        -D AP_UID_MIN=100
        -D AP_USERDIR_SUFFIX="public_html"

    I'm using "/home/user/public_html" to serve users' content on the web, not "/var/www". How can I change the root directory to "/home"?

    Read the article

  • Could not evaluate: certificate verify failed while using ssl proxy

    - by Onitlikesonic
    One of our machines was recently put behind an SSL proxy, and since then I can't connect to puppet: "Could not evaluate: certificate verify failed." I have checked that the dates match and regenerated the certificates, but to no avail. Debugging the verification with "openssl s_client -showcerts -connect puppetmaster:puppetmasterport" shows "Verify return code: 0 (ok)". Initially the proxy's SSL certificate was not recognized ("Verify return code: 20 (unable to get local issuer certificate)"), which was then fixed with the answer in the question: Adding root certificate to CentOS 5

    Read the article

  • Automounting AFP Share For OpenDirectory Users (Mac Lion Server)

    - by davzie
    We're moving all our Macs from local logins to OpenDirectory logins in an effort to keep everything secure and also to solve issues we have with a Tiger server buggering permissions on an AFP shared drive. To cut it short, I want users to be able to log in to their OpenDirectory account and then have the AFP share point mount using the credentials they just logged in with. Firstly, is this possible (I assumed it was), and secondly, how might I go about doing it? Cheers, Dave

    Read the article

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network-stability issue for a handful of end-users on an internal network for the past few days. These issues surfaced last week; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity. I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

        - change cables between the wall jack and device.
        - change patch cables between the patch panel and switch port(s).
        - try different switch ports within the 2960 stack.
        - change end-user devices with known-good equipment (new phones, different PCs).
        - clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
        - pore over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
        - change power strips on the end-user side.
        - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)*
        - test cable runs with a Tripp-Lite cable tester. (clean)
        - run diagnostics on the switch stack members. (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports? BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal

    Read the article

  • Trying to install Rmagick on Debian

    - by Janak
    One of the things you need to do to get RMagick installed is:

        apt-get install libmagick9-dev

    When I try that I get the following errors:

        Reading package lists... Done
        Building dependency tree... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        Since you only requested a single operation it is extremely likely that the
        package is simply not installable and a bug report against that package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed
                          Depends: libbz2-dev but it is not going to be installed
                          Depends: libtiff4-dev but it is not going to be installed
                          Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed
                          Depends: libz-dev
                          Depends: libpng12-dev but it is not going to be installed
                          Depends: libfreetype6-dev but it is not going to be installed
                          Depends: libexif-dev but it is not going to be installed
                          Depends: libdjvulibre-dev but it is not going to be installed
                          Depends: librsvg2-dev but it is not going to be installed
                          Depends: libgraphviz-dev but it is not going to be installed
        E: Broken packages

    I don't know what to do to fix this.

    EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine, but then I needed to install the fleximage gem:

        gem1.8 install fleximage

    and I got the following error message:

        Building native extensions. This could take a while...
        ERROR: Error installing fleximage:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for cc... yes
        checking for Magick-config... no
        Can't install RMagick 2.12.2. Can't find Magick-config in
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details.
        You may need configuration options.
        Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=/usr/bin/ruby1.8
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out

    Read the article

  • Download Sun Studio via CLI

    - by ramesh.mimit
    Can anybody please guide me on how to download Sun Studio from the CLI? I tried wget and lynx, but they did not work. I only have SSH access to my server, and downloading it to my local machine and then uploading it to the server would be a bad option for me, as the upload would take hours. The Sun Studio download requires registration + authentication. I have both, but I'm not sure how to include those options when downloading via the CLI.
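
    The usual trick for registration-gated downloads is to log in once in a browser and then reuse the authenticated session cookie on the server (wget's --load-cookies option exists for exactly this). Purely as an illustration, here is a hypothetical Python 3 sketch that streams a large file using a pasted session cookie; the URL and cookie name are placeholders, not the real download endpoint.

        #!/usr/bin/env python3
        # Download a login-protected file by reusing a browser session cookie.
        import urllib.request

        URL = "https://example.com/downloads/sunstudio.tar.bz2"  # placeholder URL
        COOKIE = "SESSION=paste-your-authenticated-session-cookie-here"

        req = urllib.request.Request(URL, headers={"Cookie": COOKIE})
        with urllib.request.urlopen(req) as resp, open("sunstudio.tar.bz2", "wb") as out:
            # Stream in 1 MiB chunks so a multi-GB download does not sit in memory.
            while True:
                chunk = resp.read(1 << 20)
                if not chunk:
                    break
                out.write(chunk)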

    Read the article

  • Change the mysql.sock that the system PHP uses

    - by Mark P.
    In the past I had installed MySQL as part of XAMPP in /Applications/XAMPP/, and the PHP that the system used (e.g. from the command line) used /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock to connect to MySQL. Now I have installed MySQL 5 with MacPorts and I want my system PHP to be able to use that. I think I must set PHP to use /opt/local/var/run/mysql5/mysqld.sock, but where do I set this? Is there a php.ini for the CLI PHP?

    Read the article

  • What user is the script run as after Carbon Copy Cloner ends scheduled task?

    - by Paolo
    On Mac OS X I keep data on a local server mirrored to a remote server using a scheduled Carbon Copy Cloner backup task. After the backup is done, a bash script is run, as specified in the scheduled task options of CCC. Is the script run as root? Or, put more generally: since my script writes to a log file, what command should I put in my script so the log shows whether the script is running as root or as something else?
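
    As an aside, the check itself is a one-liner in most languages; below is a hypothetical Python version of what the post-backup script could log (the log path is a placeholder), recording the effective user the task actually runs as.

        #!/usr/bin/env python3
        # Sketch: append the effective user of this process to a log file.
        import os
        import pwd

        euid = os.geteuid()                # 0 means root
        name = pwd.getpwuid(euid).pw_name  # user name for that uid

        with open("/tmp/ccc-postflight.log", "a") as log:
            log.write("post-backup script running as uid=%d (%s)\n" % (euid, name))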

    Read the article
