Search Results

Search found 26555 results on 1063 pages for 'active directory explorer'.


  • 403 Forbidden

    - by demas
    Here is my Nginx config:

        user pass users;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        http {
            passenger_root /usr/lib64/ruby/gems/1.8/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name some.another.ru;
                root /www/public/redmine;
                passenger_enabled on;
                rails_env development;
            }
        }

    And here is the Nginx error log:

        2011/06/02 12:53:57 [error] 45986#0: *1 directory index of "/www/public/redmine/" is forbidden, client: **.*.**.***, server: some.another.ru, request: "GET / HTTP/1.1", host: "some.another.ru"
        2011/06/02 12:53:59 [error] 45986#0: *1 open() "/www/public/redmine/favicon.ico" failed (2: No such file or directory), client: **.*.**.***, server: some.another.ru, request: "GET /favicon.ico HTTP/1.1", host: "some.another.ru"

    What is the reason for this error, and how can I fix it?
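    With Passenger, the usual cause of a 403 like this is that root does not point at the Rails application's public directory: Passenger looks for config/environment.rb one level above root, and when it finds no app there, nginx falls back to plain static serving, which forbids the directory index. A minimal sketch of the server block, assuming Redmine is actually installed at /www/redmine (that path is an assumption):

        server {
            listen      80;
            server_name some.another.ru;
            # root must be the app's public/ folder; Passenger detects the
            # application one directory above it
            root        /www/redmine/public;
            passenger_enabled on;
            rails_env   development;
        }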

    Read the article

  • How do I get nginx to issue 301 redirects to an HTTPS location when SSL is handled by a load balancer?

    - by growse
    I've noticed that there's functionality enabled in nginx by default whereby a URL request without a trailing slash, for a directory that exists in the filesystem, automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug, or a config setting I've missed somewhere?
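    One workaround, assuming a reasonably recent nginx (the absolute_redirect directive was added in 1.11.8): make the slash-appending redirect relative, so the client keeps whichever scheme it connected with and the X-Forwarded-Proto question never arises. A sketch:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;   # hypothetical root

            # Send "Location: /css/" instead of "Location: http://example.com/css/";
            # the browser resolves it against the https URL it actually used.
            absolute_redirect off;
        }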

    Read the article

  • autofs Mac OS X AFP not mounting as correct user?

    - by Stephen Furlani
    Hello, I am way out of my depth here: I am trying to get all of the nodes in a cluster to mount a drive on my head node. I've got /etc/auto_master and /etc/auto_afp configured according to Apple's "Autofs: Automatically Mounting Network File Shares in Mac OS X" white paper.

    /etc/auto_master:

        +auto_master  # Use directory service
        /net              -hosts    -nobrowse,hidefromfinder,nosuid
        /home             auto_home -nobrowse,hidefromfinder
        /Network/Servers  -fstab
        /-                -static
        /-                auto_afp

    /etc/auto_afp:

        /Volumes/userA  -fstype=afp  afp://userA:[email protected]:/
        /Volumes/userB  -fstype=afp  afp://userB:[email protected]:/

    I am logged into a compute node as userA. automount appears to mount both /Volumes/userA and /Volumes/userB to head-node.local:/Users/userA/Documents/, even though I have usernames, passwords, and user directories specified in the AFP URLs. If I go and log in with Finder, it mounts userB appropriately. File sharing and CD/DVD sharing are enabled on all computers involved. Am I doing the right thing, and if so, what did I do wrong? -Stephen

    Read the article

  • Content farms, scraper sites, aggregators: real-world examples? [closed]

    - by Marco Demaio
    Content farms, scrapers, aggregators: real-world examples? Could you please clarify for me: efreedom.com is a scraper site, not a content farm, because it simply copies and pastes content from Stack Overflow? ehow.com and squidoo.com are content farms? They don't copy and paste content; they just generate fresh new user-generated content, but too much and too quickly. expert-exchange.com is NOT a content farm or a scraper site, right?! It's simply that many people (and me too) hate it (they also wrote to Matt Cutts) because it shows up high in Google while providing a useless question with no answer.

    There are also many sites that act as 'content aggregators in the form of specialized directories' (let's call them CASDs); I don't know how else to define them. Do they have a specific definition? Anyway, are these CASDs content farms, scraper sites, or something else? Basically, a CASD searches for all sites of the same type, e.g. "restaurant websites": it copies and pastes the content found on "Restaurant A"'s site and creates on its aggregator site a new page called "Restaurant A", then does the same for all websites of the same type, thus building a sort of directory of restaurants. Later on, the CASD sends an email to the owner of "Restaurant A" (usually the address is on the website) with a username and password to let him modify/update his own page on the CASD site. Later still, the CASD might ask the owner of "Restaurant A" for money because it brings him traffic; otherwise it removes his page from the aggregator.

    Someone could call these simply directories, but I think a directory is different, because a directory is something you add your site to by filling in a form, not something that takes content from your existing site without specific acceptance from the site's owner. I also really wonder how Google will sort out all these messy sites packed with content that show up more and more, and everywhere, in search results.

    Read the article

  • Apache - Include conf Files Relative to ServerConfigFile (-f arg)

    - by Synetech inc.
    Hi, I want to use the -f command-line option for the Apache server so that I can store the conf files in a separate place (a data directory) from the server binaries. The problem is that I use the Include directive to separate and organize the configurations, but when I use a directive like Include "addons/SVN.conf", it fails because Apache looks for addons/SVN.conf relative to the ServerRoot directory instead of the ServerConfigFile directory. I can work around this by using absolute paths (e.g. Include "e:\foo\bar\baz\Apache\conf\addons\svn.conf"), but I don't like that, since it means I would have to change each and every Include directive if I move the conf folder, as opposed to simply changing the -f option. Does anyone know of a way to get the Include directive to work relative to the conf file that Apache is passed? I tried Include "./addons/SVN.conf", but that too was relative to the ServerRoot. This forced relative-to-ServerRoot Include behavior kind of defeats the whole purpose of specifying an alternate config file to the one in ServerRoot/conf. Thanks.
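    For what it's worth, Apache resolves relative Include paths against ServerRoot by design, and ServerRoot itself can be moved with the -d command-line option. So one workaround is to point ServerRoot at the data directory alongside -f; a sketch with hypothetical paths:

        :: relative Includes now resolve under E:\data\Apache instead of the
        :: install directory
        httpd.exe -f "E:\data\Apache\conf\httpd.conf" -d "E:\data\Apache"

    With that, Include "conf/addons/SVN.conf" follows the conf folder wherever it goes, and only the two command-line options change on a move.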

    Read the article

  • Raspberry Pi: move home & var folders to USB HDD?

    - by doni49
    I have a Raspberry Pi running Raspbian. I've installed Apache2, PHP, and MySQL. Apache and MySQL are both configured to use subdirectories of /var. I'd like to use a directory on my USB HDD instead of my SD card, and I'd also like to move the /home directory to the USB HDD. I'd like to avoid re-partitioning my HDD if possible. I thought maybe I could use a symlink to tell Raspbian that /var is really at /media/USBHDD1/var and /home is really at /media/USBHDD1/home. I tried it last night but couldn't get it to work.
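    A bind mount is usually a more reliable way to do this than a symlink, since it survives services that resolve paths early in boot. A minimal sketch, assuming the HDD carries a Linux filesystem (e.g. ext4) and is mounted at /media/USBHDD1:

        # stop the services that hold files under /var, then copy with
        # permissions and ownership preserved
        sudo service apache2 stop
        sudo service mysql stop
        sudo rsync -aX /var/  /media/USBHDD1/var/
        sudo rsync -aX /home/ /media/USBHDD1/home/

        # /etc/fstab -- bind the new locations over the old paths at boot
        /media/USBHDD1/var   /var   none  bind  0  0
        /media/USBHDD1/home  /home  none  bind  0  0

    After a reboot (or mount -a), /var and /home transparently live on the HDD, while the original SD-card copies sit unused underneath the mounts.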

    Read the article

  • 403 in Response to OPTIONS when updating working copy having full access

    - by user23419
    There is an SVN repository (a single repository) at http://example.net/svn. The repository contains several projects (directories):

        http://example.net/svn/Project1
        http://example.net/svn/Project2

    The user has full access to the Project1 directory and no access to either the root or Project2. Everything works fine for a while: the user checks out http://example.net/svn/Project1, and commits and updates it successfully. But sometimes trying to update leads to the following error:

        Command: Update
        Error: Server sent unexpected return value (403 Forbidden) in response to OPTIONS
        Error: request for 'http://example.net/svn'
        Finished!

    Why does TortoiseSVN request something in the root? I have noticed that this happens after somebody else commits a copy or move operation. Checking out http://example.net/svn/Project1 again helps until the next time... The main question: how do I set up access rights for the user to avoid these errors? Note: for security reasons, granting the user any read or write access on the root directory is not an option.
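    Some background on the failing request: after a server-side copy or move, the working copy holds URLs whose common ancestor is the repository root, and the client sends its OPTIONS probe there. With mod_authz_svn, path-based rules along these lines are the usual starting point (a sketch, not a recommendation: the root read it grants is exactly what the security constraint forbids):

        [/]
        * =
        user1 = r      # read on the root makes the OPTIONS probe succeed

        [/Project1]
        user1 = rw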

    Read the article

  • Apache: Virtual Host and .htaccess for URL Rewriting not working

    - by parth
    I have configured a virtual host on my local machine and everything is working fine. Now I want to use SEO-friendly URLs; to achieve this I have used an .htaccess file. My virtual host configuration is:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    and my .htaccess file has:

        AllowOverride All
        RewriteEngine On
        RewriteBase /ypp/
        RewriteRule ^/browse$ /browse.php
        RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
        RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2

    The .htaccess settings above are not working. After that I modified my virtual host settings and it is working. The new virtual host settings are:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteRule ^/browse$ /browse.php
            RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
            RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
            <Directory "C:/xampp/htdocs/ypp">
                AllowOverride All
            </Directory>
        </VirtualHost>

    Please guide me on where I am wrong in the .htaccess file for URL rewriting. I do not want to put the settings in the virtual host, because for every change I would have to restart Apache.
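    Two things in the .htaccess version stand out. First, AllowOverride is not valid inside .htaccess at all; it belongs in a <Directory> block in the server config (as in the working virtual host), and without it the .htaccess file is ignored. Second, in per-directory context mod_rewrite strips the leading slash before matching, so patterns like ^/browse$ can never match. A sketch of the .htaccess with those two issues fixed:

        # C:/xampp/htdocs/ypp/.htaccess -- per-directory context, so no
        # leading slash in the patterns and no AllowOverride here
        RewriteEngine On
        RewriteRule ^browse$ browse.php [L]
        RewriteRule ^browse/([a-z]+)$ browse.php?cat=$1 [L]
        RewriteRule ^browse/([a-z]+)/([a-z]+)$ browse.php?cat=$1&subcat=$2 [L]

    Keep the <Directory "C:/xampp/htdocs/ypp"> block with AllowOverride All in the virtual host; after that, rewrite changes only touch the .htaccess file and need no Apache restart.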

    Read the article

  • Everything You Need to Know About Monitoring Oracle GoldenGate

    - by Irem Radzik
    By Joe deBuzna

    Having over 16 years of database replication experience, with 6 of those split between complex Oracle GoldenGate installations across three continents and researching monitoring requirements for both GoldenGate core replication and the GoldenGate monitoring GUIs, I've seen GoldenGate used and monitored in every way conceivable. And definite patterns have emerged.

    Next week at OpenWorld, on Tuesday, Oct 2nd at 5pm, please come by Moscone West-3005 for the "Everything you need to know about Monitoring Oracle GoldenGate" session to hear me discuss how GoldenGate customers are monitoring their implementations today, common methods and tricks, what's new in the GUIs, and what's on the roadmap ahead. As you may have seen in previous blog posts and in our launch webcast, we now have a Plug-in for Oracle Enterprise Manager in addition to the new Oracle GoldenGate Monitor product. For those of you who won't be at OpenWorld, please check out our Management Pack for Oracle GoldenGate data sheet and the Oracle GoldenGate 11gR2 New Features white paper to learn more about the new Oracle GoldenGate 11gR2 release.

    In this latest release we also have enhanced conflict detection and resolution. It is a cornerstone of any active-active database replication solution, and in the latest release we took ours to the next level with built-in optimized resolution routines (no more dependency on sqlexec!). At OpenWorld we have session CON8557 - Best Practices for Conflict Detection & Resolution, 3:30-4:30 on Wed, Oct 3rd at Moscone West-3005. Oracle Development Manager Bharath Aleti and I will highlight the most commonly used options and best practices gained from our interaction with numerous customers and consultants.

    Hope you can join us next week.

    Read the article

  • phpize: m4 error just in one extension

    - by Francois
    On Linux, I installed PHP 5.3.8 from source. Using phpize to install an extension works fine, except on one specific extension (mysqlnd):

        # cd /opt/php/5.3.8/ext/pdo && /opt/php/5.3.8/bin/phpize
        ... this runs ok

        # cd /opt/php/5.3.8/ext/mysqlnd && /opt/php/5.3.8/bin/phpize
        Cannot find config.m4.
        Make sure that you run '/opt/php/5.3.8/bin/phpize' in the top level source directory of the module

    As you can see, the error cannot be that I am not in the top-level source directory, since I am. Calling phpize from the ext folder did not work either. For info, I have m4 installed. Any idea? Thanks :)
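    The likely explanation: ext/mysqlnd genuinely ships no config.m4, because mysqlnd is not buildable as a standalone shared extension with phpize; it is compiled into PHP itself and selected as the driver for the MySQL extensions at configure time. A sketch of rebuilding PHP 5.3 with mysqlnd enabled (source path is hypothetical; prefix matches the paths above):

        cd /path/to/php-5.3.8-source
        ./configure --prefix=/opt/php/5.3.8 \
            --with-mysqli=mysqlnd \
            --with-pdo-mysql=mysqlnd
        make && make install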

    Read the article

  • apache2: Require valid-user for everything except "special_page"

    - by matt wilkie
    With Apache2, how may I require a valid user for every page except these special pages, which can be seen by anybody? Thanks in advance for your thoughts. Update in response to comments; here is a working apache2 config:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            allow from all
        </Directory>

        # require authentication for everything not specifically excepted
        <Location / >
            AuthType Basic
            AuthName "whatever"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            AllowOverride all
        </Location>

        # allow standard apache icons to be used without auth (e.g. MultiViews)
        <Location /icons>
            allow from all
            Satisfy Any
        </Location>

        # anyone can see pages in this tree
        <Location /special_public_pages>
            allow from all
            Satisfy Any
        </Location>
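    Note that this is Apache 2.2 syntax (Order/allow/Satisfy). On Apache 2.4 the same layout would use the newer authorization directives; roughly, as a sketch:

        <Location />
            AuthType Basic
            AuthName "whatever"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
        </Location>

        <Location /special_public_pages>
            Require all granted
        </Location>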

    Read the article

  • NTFS disk mounted as fuseblk in Ubuntu 12.10 is very slow, with a lot of errors when rsyncing. Is that not a rare thing?

    - by Pablo Marin-Garcia
    I am having problems with an NTFS disk mounted as fuseblk in my Ubuntu 12.10, attached over external USB 3. When I did a 1.1 TB backup with rsync, the speed was 1-2 MB/s (with an ext4 disk the speed was 70 MB/s before and after trying the NTFS disk). Also, after one hour, errors started to appear:

        rsync: write failed on "xxx": No such file or directory
        recv_files: "yyy" is a directory    # but this file is a FILE, not a dir??!!

    As this is the first time I have mounted NTFS in Linux for heavy usage (the data will be used in Windows afterwards), I would like to know if this kind of thing is common, or whether something simply became unstable in my system and a simple restart would probably have solved it. This leads me to these questions: Can I trust FUSE to manage NTFS disks, or are the NTFS tools in Linux not yet totally stable for writing? Are people still suffering from low performance with FUSE/NTFS vs. ext4 (in the past I have read people complaining about this)?
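    On the performance half of the question: a large part of the historical FUSE/ntfs-3g slowness comes from small FUSE write chunks, and ntfs-3g's big_writes mount option is the usual first thing to try. A sketch, with a hypothetical device name and mount point:

        # remount with big_writes to cut FUSE write overhead
        sudo umount /media/usbdisk
        sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdc1 /media/usbdisk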

    Read the article

  • Setting up a git server on CentOS 6.4 [on hold]

    - by hguser
    We have a server running CentOS 6.4. Now we want to make this server the backup and version-control server. We have ten users on our team, so I created ten accounts; each user can back up files to his own home directory using FTP. However, I do not know how to set up the version control; we would prefer to use git. We want to implement this: everyone can create git repositories in his home directory, with read/write access, using his own account. Is this possible?
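    Yes: plain git over SSH already works this way, with no extra server software. Each account can hold bare repositories in its home directory, and the same Unix permissions that protect the home directory protect the repositories. A minimal sketch (names are hypothetical):

        # on the server, as the account owner
        mkdir -p ~/repos/project.git
        cd ~/repos/project.git && git init --bare

        # from a team member's machine
        git clone user1@server:repos/project.git

    Sharing a repository between accounts is then a matter of group permissions (git init --bare --shared plus a common Unix group), rather than any daemon configuration.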

    Read the article

  • Apache denying requests with VirtualHosts

    - by Ross
    This is the error I get in my log:

        Permission denied: /home/ross/.htaccess
        pcfg_openfile: unable to check htaccess file, ensure it is readable

    My VirtualHost is pretty simple:

        <VirtualHost 127.0.0.1>
            ServerName jotter.localhost
            DocumentRoot /home/ross/www/jotter/public
            DirectoryIndex index.php index.html
            <Directory /home/ross/www/jotter/public>
                AllowOverride all
                Order allow,deny
                allow from all
            </Directory>
            CustomLog /home/ross/www/jotter/logs/access.log combined
            ErrorLog /home/ross/www/jotter/logs/error.log
            LogLevel warn
        </VirtualHost>

    Any ideas why this is happening? I can't see why Apache is looking for a .htaccess there and don't know why this should stop the request. Thanks.
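    Background on why Apache looks there: when overrides are enabled along the path, Apache checks every directory from the filesystem root down to the requested file for an .htaccess (including /home/ross), and a directory it cannot even stat becomes a hard error rather than a skipped check. The usual fix is to grant traverse (execute) permission, not read, on the home directory; a sketch:

        # lets the Apache user descend through /home/ross without listing it
        chmod o+x /home/ross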

    Read the article

  • Has the hardware in my modem gone bad?

    - by Tyler Scott
    I contacted CenturyLink about my modem recently and received useless and unrelated information. The problem seems to be that the modem will no longer save settings, the web interface is unusable except in Internet Explorer for some reason, and the modem keeps resetting. CenturyLink claimed it had to do with signal strength, but I checked and it is currently between good and outstanding according to this. All of the lights remain green even when it starts acting up and I lose internet, right up until it crashes and reboots. Does anyone have any idea what is going on or what I can do to fix it? (Asking CenturyLink again is obviously not going to help.)

    Update 1: Accessing the syslog from the web interface causes a crash. After it reboots, the log looks as follows:

        01/01/1970 12:01:29 AM Ethernet Ethernet client connected ,ip(192.168.0.2), mac(1c:6f:65:4c:6d:3b)
        01/01/1970 12:01:38 AM Wireless 802.11 client connected ,ip(192.168.0.18), mac(d0:df:c7:c2:73:ca)
        01/01/1970 12:01:41 AM System Event Line 0: VDSL2 link up, Bearer 0, us=20128, ds=40127
        01/01/1970 12:01:43 AM dhcp6s[2028] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:50 AM dhcp6s[2469] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:52 AM radvd[2306] poll error: Interrupted system call
        01/01/1970 12:01:56 AM PPP Link PPP server detected.
        01/01/1970 12:01:56 AM PPP Link PPP session established.
        01/01/1970 12:01:56 AM PPP Link PPP LCP UP.
        01/01/1970 12:01:56 AM System Event Received valid IP address from server. Connection UP.
        06/05/2014 08:16:01 AM radvd[2511] poll error: Interrupted system call
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        (the "Dead loop on virtual device tun6rd" line repeats nine times at 08:16:03 and once more at 08:16:04)
        06/05/2014 08:16:04 AM dhcp6s[3236] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        06/05/2014 08:16:08 AM Wireless 802.11 client connected ,ip(192.168.0.7), mac(44:6d:57:c4:d7:08)

    I also get it to crash on various other pages. I am guessing the web server is unstable.

    Read the article

  • OS X Mavericks Won't Connect To Ubuntu Server (Netatalk, Avahi)

    - by Andy Ibanez
    I'm really sorry for posting this; I know it may have been asked a thousand times. I have googled like crazy and I'm on the verge of desperation here. Basically, I followed this guide to create a small personal file server: http://motionsoundfx.com/2012/05/ubuntu-vnc-afp-macosx/

    When I installed it, I was able to connect to it just fine: I connected with my Ubuntu username and password and I was able to see the home directory. But later, I had to restart the file server so I could prepare a couple of other hard drives to put in. When the server restarted, I tried to connect to it, but I got an error message on my Mac:

        "The version of the server you're trying to connect to is not supported. Please contact your system administrator to solve this problem."

    Again, I have googled like crazy for this, and everybody says it is a problem with OS X Lion and up (I assume it affects Mavericks too). I have tried all the fixes mentioned for Lion and Mountain Lion and I haven't had any luck. That's the reason I'm posting this here: I suspect the problem is with my Ubuntu server, since this happened only after I restarted it. Before the restart, I just put in my credentials and saw my home directory; something must have been messed up by the restart. I have found some other solutions, including using "SHX2" in the conf file, but they haven't worked for me.

    I ask for your help to solve this issue. Also, please understand that I'm completely illiterate when it comes to Linux; this is a nice chance for me to learn the OS, so please give me detailed steps to do things if you deem it necessary. Thank you! I'm using Ubuntu Server 13.10 (the latest one as of today).
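    A hedged pointer, assuming the guide installed netatalk 2.x: that exact "version of the server ... is not supported" message is what Lion and later show when the server offers none of the newer DHX/DHX2 authentication modules, which can happen if the uams list in afpd.conf was reset or never persisted across the restart. The commonly cited fix is to make sure the default server line advertises them, then restart netatalk:

        # /etc/netatalk/afpd.conf (netatalk 2.x; path may differ per install)
        - -transall -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword

        sudo service netatalk restart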

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail. Here are a few simple queries that will give you insight into the system that you never had before.

    ACTIVE REPORTS: Have you ever seen your SQL Server performance take a nose dive due to a long-running report? If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server. Running this query will show you which reports are executing at a given time, and WHO is executing them.

        USE ReportServerNative

        SELECT runningjobs.computername,
               runningjobs.requestname,
               runningjobs.startdate,
               users.username,
               DATEDIFF(s, runningjobs.startdate, GETDATE()) / 60 AS 'Active Minutes'
        FROM   runningjobs
               INNER JOIN users ON runningjobs.userid = users.userid
        ORDER  BY runningjobs.startdate

    SSRS CATALOG: We have all asked "What was the last thing that changed?", or better yet, "Who in the world did that?!". Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.

        USE ReportServerNative

        SELECT DISTINCT catalog.PATH,
               catalog.name,
               users.username AS [Created By],
               catalog.creationdate,
               users_1.username AS [Modified By],
               catalog.modifieddate
        FROM   catalog
               INNER JOIN users ON catalog.createdbyid = users.userid
               INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid
               INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid
        WHERE  ( catalog.name <> '' )

    SSRS EXECUTION LOG: Sometimes we need to know what was happening on the SSRS report server at a given time in the past. This query will help you do just that. You will need to set the timestart and timeend in the WHERE clause to suit your needs.

        USE ReportServerNative

        SELECT catalog.name AS report,
               e.username AS [User],
               e.timestart,
               e.timeend,
               DATEDIFF(mi, e.timestart, e.timeend) AS 'Time In Minutes',
               catalog.modifieddate AS [Report Last Modified],
               users.username
        FROM   catalog (nolock)
               INNER JOIN executionlogstorage e (nolock) ON catalog.itemid = e.reportid
               INNER JOIN users (nolock) ON catalog.modifiedbyid = users.userid
        WHERE  e.timestart >= DATEADD(s, -1, '03/31/2012')
               AND e.timeend <= DATEADD(DAY, 1, '04/02/2012')

    LONG RUNNING REPORTS: This query will show the longest-running reports over a given time period. Note that the "> 5" in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set. Adjust the threshold and start/end times to your liking. With this information in hand, you can better optimize your system by tweaking the longest-running reports first.

        USE ReportServerNative

        SELECT e.instancename,
               catalog.PATH,
               catalog.name,
               e.username,
               e.timestart,
               e.timeend,
               DATEDIFF(mi, e.timestart, e.timeend) AS 'Minutes',
               e.timedataretrieval,
               e.timeprocessing,
               e.timerendering,
               e.[RowCount],
               users_1.username AS createdby,
               CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date',
               users.username AS modifiedby,
               CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date'
        FROM   executionlogstorage e
               INNER JOIN catalog ON e.reportid = catalog.itemid
               INNER JOIN users ON catalog.modifiedbyid = users.userid
               INNER JOIN users AS users_1 ON catalog.createdbyid = users_1.userid
        WHERE  ( e.timestart > '03/31/2012' )
               AND ( e.timestart <= '04/02/2012' )
               AND DATEDIFF(mi, e.timestart, e.timeend) > 5
               AND catalog.name <> ''
        ORDER  BY 'Minutes' DESC

    I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings. I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own. For instance, you may want a query to determine which reports are using which shared data sources. Work smarter, not harder!

    Read the article

  • How to change the location of Maildir/ in a Postfix mail system

    - by adesh
    I'm using Postfix with IMAP and POP servers running on Ubuntu Linux. I want to change the location of the Maildir folder from the user's home directory to a shared folder, so that all mail is in one place. I also have a problem sending local mail to users who do not have a home directory (e.g. www-data, a system user created by default); I'm getting the following error:

        postfix/local[32123]: DCE1D221BBD: to=<www-data@********>, orig_to=<www-data>, relay=local, delay=33, delays=33/0/0/0.12, dsn=5.2.0, status=bounced (maildir delivery failed: create maildir file /var/www/Maildir/tmp/1382169296.P32123.********: Permission denied)

    I'd like to have a folder structure similar to this: /all-mail/<user-name>/<mail-goes-here>
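    One way to get that layout, sketched with hypothetical values: take these mailboxes out of the local (home-directory) delivery path entirely and use Postfix's virtual delivery agent, whose job is delivering under a fixed base directory with a fixed owner. In /etc/postfix/main.cf:

        virtual_mailbox_domains = example.com
        virtual_mailbox_base    = /all-mail
        virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
        virtual_uid_maps        = static:5000
        virtual_gid_maps        = static:5000

    with /etc/postfix/vmailbox mapping each address to a per-user maildir (the trailing slash selects maildir format), e.g. "adesh@example.com  adesh/Maildir/", followed by postmap /etc/postfix/vmailbox and a Postfix reload. This also sidesteps the www-data bounce, since delivery no longer depends on the recipient having a home directory.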

    Read the article

  • Ant trouble with environment variables on Ubuntu

    - by Inaimathi
    Having some trouble with ant reading environment variables on Ubuntu 9.10. Specifically, the build tasks my company uses have a token like ${env.CATALINA_HOME} in the main build.xml. I set CATALINA_HOME to the correct value in /etc/environment, ~/.pam_environment and (just to be safe) my .bashrc. I can see the correct value when I run printenv from bash, or when I eval (getenv "CATALINA_HOME") in emacs. Ant refuses to build to the correct directory, though; instead I get a folder named ${env.CATALINA_HOME} in the same directory as my build.xml. Any idea what's happening there, and/or how to fix it?
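    One thing worth checking, since a literal ${env.CATALINA_HOME} folder is exactly the symptom of an unresolved property: Ant only exposes environment variables as env.* properties if the build file asks for them. A sketch of the required line and a target that uses it (target name is hypothetical):

        <project name="example" default="show-home">
            <!-- without this, ${env.*} properties never resolve -->
            <property environment="env"/>

            <target name="show-home">
                <echo message="CATALINA_HOME is ${env.CATALINA_HOME}"/>
            </target>
        </project>

    If the property line is already present, the other usual culprit is that ant was launched from a shell that never sourced the file where CATALINA_HOME is set (e.g. from within emacs started by the desktop session rather than a login shell).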

    Read the article

  • Error with FTP since binding via httpcfg

    - by Linda
    I was in a similar position to this question and bound two IP addresses using httpcfg. Since doing this, FTP does not seem to be working on IIS 6 on Windows Server 2003. Any ideas what could be wrong? The command I ran was:

        httpcfg set iplisten -i xxx.xxx.x.x

    I get the following when I try to connect via FileZilla:

        Error: Connection timed out
        Error: Failed to retrieve directory listing

    The log file is returning the following:

        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2009-08-17 13:54:05
        #Fields: date time c-ip cs-username cs-method cs-uri-stem sc-status sc-win32-status
        2009-08-17 13:54:05 91.85.70.17 Client [1]USER Client 331 0
        2009-08-17 13:54:05 91.85.70.17 Client [1]PASS - 230 0

    In the FTP site settings I have the site pointing to the IP address used with httpcfg, and the port set to 21.

    Update: I can see a directory listing if I connect via the built-in command-line FTP client in Windows Vista. If I try to connect via Windows Explorer, I start in the incorrect folder and no files are listed, just directories.
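    A reading of the evidence, offered as an educated guess rather than a certain diagnosis: the log shows USER/PASS completing (331/230), so the control connection on port 21 is fine; it is the data connection that never comes up. FileZilla defaults to passive mode while the Vista command-line client uses active mode, which matches "command line works, FileZilla times out on the listing": a firewall blocking the passive port range, not httpcfg (which configures HTTP.sys, not the FTP service). One commonly documented remedy on IIS 6 is to pin the passive range and open it on the firewall (the range below is an example):

        cscript.exe adsutil.vbs set /MSFTPSVC/PassivePortRange "5500-5600"

    (adsutil.vbs lives in Inetpub\AdminScripts; restart the FTP service afterwards.) Alternatively, switch FileZilla to active mode in its connection settings.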

    Read the article

  • creating TAGS for Ruby and emacs

    - by hortitude
    I ran the following from my top-level Ruby on Rails directory:

        find . -name "*.rb" | etags -

    Then, within emacs, I visited that TAGS file. This works reasonably well for finding most of the files and some of the methods, but it has trouble finding some of the extra methods/classes that I use in my helpers directory. E.g. I have a file in my helpers dir called my_foo_helper.rb. If I search my tags for that file, it finds it; however, if I try to find a tag for one of the methods within that module, it doesn't find it at all. If I use Aptana or something like that, it seems to be able to locate those methods. Any thoughts? Thanks!
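    Classic etags has fairly shallow Ruby support, which would explain why methods inside helper modules go missing. One thing to try, assuming Exuberant Ctags is installed (it has a much more complete Ruby parser and can write Emacs-format tables):

        # -e emits an Emacs TAGS file; -L - reads the file list from stdin
        find . -name '*.rb' | ctags -e -L -

    The resulting TAGS file is visited in Emacs exactly as before.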

    Read the article

  • VMware Tools installation fails on Ubuntu 11.04

    - by Ajay
    I am trying to install VMwareTools-8.4.6-385536.tar.gz (VMware Tools) on the following operating system:

        Ubuntu 11.04 - Linux ubuntu 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux

    I am using VMware Player version 3.1.4, build 385536. After starting the installation I am getting the following errors:

        What is the directory that contains the init scripts? [/etc/init.d]
        Error opening No such file or directory

        Distribution provided drivers for Xorg X server are used.
        Skipping X configuration because X drivers are not included.
        Creating a new initrd boot image for the kernel.
        update-initramfs: Generating /boot/initrd.img-2.6.38-8-generic
        Starting VMware Tools services in the virtual machine:
            Switching to guest configuration: done
            Blocking file system: done
            Guest operating system daemon: failed
            Virtual Printing daemon: done
        Unable to start services for VMware Tools

    Can somebody help with this?
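    A hedged alternative, in case patching the bundled tarball proves painful: Ubuntu ships a distro-maintained package, open-vm-tools, that is built against the running kernel and provides the same guest services for VMware guests:

        sudo apt-get update
        sudo apt-get install open-vm-tools

    Installing VMware's own tarball on top of open-vm-tools (or vice versa) is not recommended, so pick one or the other.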

    Read the article

  • Register for a free webcast presented by ISC2: Identity Auditing Techniques for Reducing Operational Risk and Internal Delays

    - by Darin Pendergraft
    Join us tomorrow, June 26 @ 10:00 am PST, for Part 1 of a 3-part security series co-presented by ISC2.

    Part 1 will focus on identity auditing techniques and will be delivered by Neil Gandhi, Principal Product Manager at Oracle, and Brandon Dunlap, Managing Director at Brightfly. Register for Part 1: Identity Auditing Techniques for Reducing Operational Risk and Internal Delays

    Part 2 will focus on how mobile device access is changing the performance and workloads of IDM directory systems and will be delivered by Etienne Remillon, Senior Principal Product Manager at Oracle, and Brandon Dunlap, Managing Director at Brightfly. Register for Part 2: Optimizing Directory Architecture for Mobile Devices and Applications

    Finally, Part 3 will focus on what you need to do to support native mobile communications and security protocols and will be presented by Sid Mishra, Senior Principal Product Manager at Oracle, and Brandon Dunlap, Managing Director at Brightfly. Register for Part 3: Using New Design Patterns to Improve Mobile Access Control

    Read the article

  • Magento on Zend Server (Win7) installation error

    - by czerasz
    I am trying to install Magento for the first time. I've created a database with the name "project". At the end of my C:\Zend\Apache2\conf\httpd.conf I added:

        <Directory "C:\Zend\Apche2\htdocs\project">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    In ZendServer / Server Setup / Extensions, the following are on: PDO_MySQL, simplexml, mcrypt, hash, GD, DOM, iconv, curl, SOAP.

    In C:\Zend\ZendServer\etc\php.ini I set:

        safe_mode = Off      ; <-- was set to off
        ...
        memory_limit = 512M  ; Maximum amount of memory a script may consume (128MB)

    After the "Configuration" step of the Magento installation (with "Use Web Server (Apache) Rewrites" enabled) I get:

        Internal Server Error

    My database is full of tables (so that should be OK). My Zend Server shows:

        27-Oct 06:55  6  Severe Slow Request Execution (Absolute)  http://localhost/project/index.php/install/wizard/installDb/  Critical  Open
        27-Oct 06:55  4  Fatal PHP Error  C:\Zend\Apache2\htdocs\project\lib\Varien\Db\Adapter\Pdo\Mysql.php  Critical  Open
        27-Oct 06:55  5  Slow Function Execution  curl_exec  Warning  Open
        27-Oct 06:55  5  Slow Request Execution (Absolute)  http://localhost/project/index.php/install/wizard/configPost/

    What can be wrong?
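    Two things worth checking before digging deeper, offered as educated guesses. First, the <Directory> path above reads "Apche2" while every other path says "Apache2"; if that typo is really in httpd.conf, the block never matches, AllowOverride stays at its default, and Magento's .htaccess rewrites can produce exactly this kind of 500. Second, "Use Web Server (Apache) Rewrites" needs mod_rewrite loaded. A sketch of the corrected block:

        # httpd.conf -- the path must match the real DocumentRoot exactly
        <Directory "C:\Zend\Apache2\htdocs\project">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # and ensure mod_rewrite is enabled
        LoadModule rewrite_module modules/mod_rewrite.so

    The "Fatal PHP Error" entry in Mysql.php is also worth opening in the Zend Server event details; it will usually name the exact exception.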

    Read the article
