Search Results

Search found 97876 results on 3916 pages for 'user folder'.


  • "powershell has stopped working" on PS exit after creating IIS User

    - by Nick Jones
    On a Windows Server 2012 box, I'm using PowerShell to create a new IIS user (for automated deployments using MSDeploy). The command itself appears to work fine -- the user is created -- but as soon as I exit my PowerShell session (typing exit or just closing the command window), a dialog is displayed stating "powershell has stopped working", with the following details:

        Problem signature:
        Problem Event Name: PowerShell
        NameOfExe: powershell.exe
        FileVersionOfSystemManagementAutomation: 6.2.9200.16628
        InnermostExceptionType: Runtime.InteropServices.InvalidComObject
        OutermostExceptionType: Runtime.InteropServices.InvalidComObject
        DeepestPowerShellFrame: unknown
        DeepestFrame: System.StubHelpers.StubHelpers.GetCOMIPFromRCW
        ThreadName: unknown
        OS Version: 6.2.9200.2.0.0.400.8
        Locale ID: 1033

    The PS commands in question are:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Management")
        [Microsoft.Web.Management.Server.ManagementAuthentication]::CreateUser("Foo", "Bar")

    Why is this happening and how can I avoid it? EDIT: I've also confirmed this to be a problem with PowerShell 4.0, so I've added that tag. I've also submitted it on Connect. UPDATE: It appears that Windows Server 2012 R2 does not have this same bug.
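
    A workaround sketch (my assumption, not from the post): since the crash happens during the parent session's COM teardown, running the user-creation commands in a child PowerShell process confines any shutdown crash to that throwaway process.

        powershell.exe -NoProfile -Command "[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.Web.Management') | Out-Null; [Microsoft.Web.Management.Server.ManagementAuthentication]::CreateUser('Foo', 'Bar')"

    The parent session never touches the Microsoft.Web.Management COM objects, so its own exit stays clean.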

  • Hosting a subversion working copy in a remote WebDAV folder

    - by Daniel Baulig
    This might be a bit awkward, but I'll try to explain what I am trying to achieve and what problems I encountered. First of all: what's this about? I am currently trying to set up a distributed working environment for developing a web page. My plan was to set up an SVN repository for version control, a live server where the actual live page is hosted, and a development server where I can work on the page. To ease things I intended not to keep a local copy of the project on my disk, but to work directly on the files that the development server hosts. For that I set up a WebDAV directory, under devserver.com/workspace, that actually mapped to files served under devserver.com/. So I could connect to devserver.com/workspace, change something, and view the results live at devserver.com/. So far this worked perfectly.

    The next step was to create an SVN repository to take care of version control. I intended to be able to check in to the repository from my development server and, at any time, deploy any revision from the SVN to the live server with a small shell script that checks out a copy of that revision into the live server directories. The second part, checking out into the live server, also worked perfectly. The first part, though, is where problems arose: My workstation is a Windows 7 machine. I connected to the WebDAV share using Windows' built-in WebDAV support, which worked quite well. I can create, move, delete, and edit files on my WebDAV share from my Windows machine perfectly. The next step was to check out a working copy from the SVN (actually hosted at devserver.com/subversion/) into the WebDAV share. On the first try I used the Eclipse plugin Subversive. The actual checkout worked fine and I can update and commit stuff to the repository; however, I cannot add any files to the ignore list -- it always gives me an error. So I tried the same thing with a completely fresh repository using TortoiseSVN, and again it failed with the same errors. Here is what it says when trying to add files to svn:ignore:

        Some of selected resources were not added to ignore.
        svn: Cannot rename file '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props.66fd8936-2701-0010-bb76-472f0b56a5d1.tmp' to '\\devserver.com@SSL\DavWWWRoot\workspace\.svn\tmp\dir-props'

    This is what apache2 tells me when I try to add a file to svn:ignore:

        [Sun Mar 07 03:54:19 2010] [error] [client xxx.xxx.xxx.xxx] Negotiation: discovered file(s) matching request: /var/www/devserver.com/.svn/tmp/dir-props (None could be negotiated).
        [Sun Mar 07 03:54:31 2010] [error] [client xxx.xxx.xxx.xxx] (20)Not a directory: The URL contains extraneous path components. The resource could not be identified. [400, #0]

    Actually both messages are repeated several times: the first occurs first and is repeated about 5 times, and the second comes thereafter and is repeated probably more than 20 times. If I create, delete, rename, or modify a regular file, none of those messages appear in my error.log. While writing this question I was able to add files to svn:ignore using TortoiseSVN. However, after that, Eclipse would not let me commit anymore -- the error that used to pop up when adding files to svn:ignore now also shows up while committing. While searching the web I found some people getting this same message because they had files differing only in upper-/lower-case naming. I checked my repository and did not find such files.

    I also read about people having trouble with WebDAV and file locking, because WebDAV's file-locking capabilities seem to be very limited. At some stage I got errors telling me my repository was locked and thus the operations could not be completed. That error has not appeared since I set up a completely fresh repository and working copy. I would really appreciate any help anyone can provide in fixing this problem! If there are any more questions feel free to ask. I know this is a somewhat unusual setup. Best regards, Daniel
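
    As a side note on the deploy step that already works: the shell script described above might look something like this minimal sketch (the repository URL path and live directory are assumptions, since the post doesn't give them):

        #!/bin/sh
        # Deploy a given revision (default HEAD) from the repository
        # into the live server's document root.
        REV=${1:-HEAD}
        svn checkout -r "$REV" http://devserver.com/subversion/trunk /var/www/liveserver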

  • Preventing access to files if a user types the full URL in the address bar

    - by bogha
    I have a website, and some folders on it contain images and files like .pdf, .doc and .docx. A user can easily just type the address in the URL bar to fetch a file or display a photo, e.g. http://site/folder1/img/pic1.jpg -- then boom, he can see the image or just download the file. My question is: how do I prevent this kind of action and guarantee secure access to the files? Any suggestions? UPDATE TO CLARIFY MY IDEA: I don't want any user who is browsing the website to get access to these files just by writing the URL of the file. The files are CVs, uploaded by users to a specific folder on the server, which we host outside the company. The files are only viewed by the HR people through a special system. That's the scenario we want. I don't want a web geek who just wants to see what files have been uploaded to this folder to easily download them to his/her computer, view them, or publish them on the internet. I hope you got my idea.
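
    One common pattern, sketched under assumptions (the folder layout, session flag, and storage path below are all placeholders, not the site's actual setup): block direct requests to the folder, then stream files through a script that checks the HR session first.

        # folder1/.htaccess -- assumes Apache 2.2 with AllowOverride enabled
        Order deny,allow
        Deny from all

        <?php
        // download.php -- hypothetical gatekeeper sketch
        session_start();
        if (empty($_SESSION['is_hr'])) { header('HTTP/1.1 403 Forbidden'); exit; }
        $name = isset($_GET['file']) ? basename($_GET['file']) : ''; // strip path components
        $path = '/srv/private/cv/' . $name;  // directory outside the web root
        if ($name === '' || !is_file($path)) { header('HTTP/1.1 404 Not Found'); exit; }
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . $name . '"');
        readfile($path);

    The stronger variant is the second one: once the files live outside the document root, no URL reaches them except through the gatekeeper.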

  • Trying to use Digest Authentication for Folder Protection

    - by Jon Hazlett
    StackOverflow users suggested I try my question here. I'm using Server 2008 EE and IIS 7. I've got a site that I've migrated over from XP Pro using IIS 5. On the old system, I was using IIS Password to use simple .htaccess files to control a couple of folders that I didn't want to be publicly viewable. Now that I'm running a full-blown DC with a more powerful version of IIS, I decided it'd be a good idea to start using something slightly more sophisticated. After doing my research and trying to keep things as cheap as possible with a touch of extra security, I decided that Digest Authentication would be the best way to go. My issue is this: with Anonymous access disabled and Digest enabled, I am never prompted for credentials. When on the server, viewing domain[dot]com/example simply shows my 401.htm page without prompting me for credentials. When on a different network/computer, viewing domain[dot]com/example again shows my 401.htm without prompting for credentials. At the site level I only have Anonymous enabled. Every subfolder, unless I want it protected, has just Anonymous enabled. Only the folders I want protected have Anonymous disabled and Digest enabled. I have tried editing the bindings to see if that would spark any kind of change: www.domain.com, domain.com, and localhost have all been tried. There was never a change in behavior for any permutation (aside from the page not being found when I un-bound localhost from the site). I might have screwed up when I deleted the default site from IIS. I didn't think I'd actually need it for anything, but some of what I have read online is telling me otherwise now. As for Digest settings, I have it pointed to local.domain.com, which is the name assigned to my AD domain. I'm guessing that's right, but honestly have no clue about what a realm actually is. Would it matter that I have an A record for local.domain.com pointing to my IP address? I had problems initially with an absolute link for 401.htm pages, but have since resolved that: instead of D:\HTTP\401.htm I've used /401.htm and all is well. I used to get error 500s because it couldn't find the custom 401.htm file, but now it loads just fine. As for some data, I was getting entries like this from the access logs:

        2009-07-10 17:34:12 10.0.0.10 GET /example/ - 80 - [workip] Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+InfoPath.2) 401 2 5 132

    But after correcting my 401.htm links I now get logs like this:

        2009-07-10 18:56:25 10.0.0.10 GET /example - 80 - [workip] Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.0.11)+Gecko/2009060215+Firefox/3.0.11 200 0 0 146

    I don't know if that means anything or not. I still don't get any credential challenges, regardless of where I try to sign in from (my workstation, my server, my cellphone even). The only thing that's seemed to work is viewing localhost, and I don't know what could be preventing authentication from finding its way out of the server. Thanks for any help! Jon
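
    A diagnostic sketch (the site and folder names are placeholders): the 200 status in the second log line suggests the request is being served anonymously, so it's worth confirming what IIS actually has stored for the protected folder rather than what the GUI shows:

        %windir%\system32\inetsrv\appcmd list config "domain.com/example" /section:system.webServer/security/authentication/anonymousAuthentication
        %windir%\system32\inetsrv\appcmd list config "domain.com/example" /section:system.webServer/security/authentication/digestAuthentication

    If anonymousAuthentication still shows enabled="true" at that path, the GUI change never landed where you thought it did.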

  • Restrict VPN user to Remote Desktop only with Sonicwall

    - by Matt
    Basically I want him to only be able to log onto the VPN in order to use Remote Desktop to use HIS machine. Not surf the internet or do anything like that, but just use the programs on his machine that he doesn't have at home. We use a Sonicwall NSA 220 with their regular VPN client. I can create a user for him, but when I create an access rule it applies to all VPN users. How can I make something like that only apply to ONE user?

  • Backup Permissions for an Active Directory Profile Directory

    - by Earls
    I have Folder Redirection turned on, so the profiles are in a Windows shared folder on a file and print server: \folders\Profiles. I want to back up the entire Profiles directory, but as Domain Admin I don't seem to have the privileges to "select all and copy" the entire directory structure. The user profile subfolders (AppData, Documents, Desktop, Pictures, etc.) throw access-denied errors. I tried to grant Domain Admins full privileges on the Profiles directory, expecting the subfolders to inherit the permissions, but I get access-denied errors just trying to set the permissions. How can I assign a user to the Profiles directory so that I can copy the entire directory tree to back it up?
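
    A sketch of one way around the ACLs rather than through them (the share path below is a placeholder): robocopy's backup mode uses the Backup Files privilege that Domain Admins already hold, so it can read the redirected folders without touching their permissions. Run it from an elevated prompt:

        robocopy "\\server\Profiles" "D:\Backup\Profiles" /MIR /B /R:1 /W:1 /XJ
        rem /B  = backup mode (SeBackupPrivilege, bypasses ACL denials)
        rem /XJ = skip junction points, which otherwise loop inside AppData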

  • RSync over SSH - permission denied even though the user is in the root group

    - by Bastien974
    I need to copy files between servers over the web, and I'm using rsync over SSH to do so. The problem is, I need to be able to transfer files no matter where they live. I created a user rsync and ran:

        usermod -G root -a rsync

    to give him the right to read/write anywhere on both servers. During the transfer, I see this error:

        rsync: mkstemp "/root/.myFile.RDr2HY" failed: Permission denied (13)

    I don't understand what's happening. Edit: I just found out that the destination folder didn't have write access for the root group. How would I give 100% access to this rsync user? If I change its UID to 0, rsync stops working.
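
    For what it's worth, membership in the root group rarely helps here: directories like /root are typically mode 0700, so the group permission bits never apply. One common pattern (a sketch; the sudoers line is an assumption about your setup) is to keep the rsync account unprivileged and let sudo elevate only the remote rsync binary:

        # On the destination server, in /etc/sudoers.d/rsync:
        #   rsync ALL=(root) NOPASSWD: /usr/bin/rsync
        rsync -av -e ssh --rsync-path="sudo rsync" /src/dir/ rsync@destserver:/root/dir/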

  • Copying files locally on a network folder

    - by Altay_H
    I noticed that copying a large file from one location on a network drive to another location on the same network drive takes much longer than copying it locally. Instead of copying the file locally, the network computer sends the file to my remote computer, which sends it back to the same network computer. This means the files are being transferred over the network completely unnecessarily. Is there a way to fix this issue? It's becoming a real hassle to manage the video files on my network drive. Note: This is the case with both Windows and Linux (using Samba) network folders.
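
    A workaround sketch (the host and paths are placeholders): perform the copy on the machine that owns the disk, so the bytes never cross the wire. On a Linux/Samba server that can be as simple as:

        ssh nas 'cp -a /srv/share/videos/movie.mkv /srv/share/archive/'

    On Windows shares, running the copy in a session on the file server itself (e.g. via Remote Desktop) has the same effect.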

  • sendmail on Ubuntu won't send from www-data user

    - by bumperbox
    If I call the mail() function in PHP from the webserver (running as www-data) I get an error sending email. If I call the same script from the command line logged in as root, it works. If I switch user to www-data and run it from the command line I get this error message (repeated for each attempt):

        WARNING: RunAsUser for MSP ignored, check group ids (egid=33, want=107)
        can not chdir(/var/spool/mqueue-client/): Permission denied
        Program mode requires special privileges, e.g., root or TrustedUser.
        FAILED

    I am guessing I need to do something in the sendmail configuration. I have googled for some solutions but have ended up more confused. Can someone let me know what configuration I need to change so I can send from the www-data user?
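
    A repair sketch (the modes shown are assumptions about Debian's usual layout): that warning typically means the mail submission program has lost its setgid smmsp bit, so callers other than root can't write to the client queue. Check and restore it, or let the package do it for you:

        ls -l /usr/lib/sm.bin/sendmail        # expect -r-xr-sr-x root smmsp
        sudo chgrp smmsp /usr/lib/sm.bin/sendmail
        sudo chmod 2755 /usr/lib/sm.bin/sendmail
        sudo chgrp smmsp /var/spool/mqueue-client
        sudo chmod 2770 /var/spool/mqueue-client
        sudo sendmailconfig                   # re-applies the packaged permissions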

  • Unable to logoff, disconnect, or reset terminal server user in production environment

    - by l0c0b0x
    I'm looking for ideas on how to disconnect, log off, or reset a user's session on a Server 2008 terminal server (we're unable to log in as the user either, as the session is completely locked up). This is a production environment, so rebooting the server or doing something system-wide is out of the question for now. Any PowerShell tricks to help us with this? We've tried killing the session's processes directly on the terminal server (from Task Manager, Terminal Services Manager, and the Resource Monitor) with no results. Help!
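
    A couple of session-level things to try from an elevated prompt on the server itself (session ID 3 below is an example):

        rem list sessions and their IDs
        qwinsta /server:localhost
        rem reset the stuck session
        rwinsta 3 /server:localhost
        rem or force a logoff
        logoff 3 /server:localhost

    These talk to the session layer directly, so they sometimes succeed where the Terminal Services Manager GUI fails; if the session is wedged deep in the kernel, though, they may hang the same way.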

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    Question for which I'm having trouble finding an answer on Google or TechNet: does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)

    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem, and while troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that SYSTEM didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating). Naturally, this made me curious as to whether DFS needs the SYSTEM account to have permissions to do its work, or if perhaps it was just any change to the folder tree in question that prompted DFS to jump into action.

    If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)
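
    For reference, the permission change described above, expressed as a one-liner (the path is a placeholder):

        icacls "D:\DFSRoots\Finance" /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F" /T

    (OI)(CI)F grants full control that inherits to child files and folders, and /T also rewrites the ACL on everything already beneath the path.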

  • Can I extract a specific folder using tar to another folder?

    - by PeanutsMonkey
    I am new to the world of Linux and seem to have run into a stumbling block. I know I can extract a specific directory from an archive using the command

        tar xvfz archivename.tar.gz sampledir/

    but how can I extract sampledir/ to testdir/ rather than into the path the archive is in? E.g., currently the archive is at /tmp/archivename.tar.gz and I would like to extract sampledir to testdir, which is at /tmp/testdir.
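
    A sketch of the usual answer: tar's -C flag changes into the target directory before extracting, and GNU tar's --strip-components can drop the leading sampledir/ if you want its contents directly inside testdir/.

        mkdir -p /tmp/testdir
        tar xvfz /tmp/archivename.tar.gz -C /tmp/testdir sampledir/
        # or, to place sampledir's contents directly in testdir (GNU tar):
        tar xvfz /tmp/archivename.tar.gz -C /tmp/testdir --strip-components=1 sampledir/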

  • Create Webmin user for an EC2 Instance

    - by Dean
    I've set up an Amazon EC2 instance using the Ubuntu 12.04 AMI (ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20120424 (ami-a29943cb)), and I'd like to get Webmin working (so I can set up a DNS). After following the installation instructions on Webmin's site, the installer says I can log in with any username/password of a user who has superuser access. The problem is that the EC2 instance only has one user, "ubuntu", which can only log in using SSH keys -- not a password! I've tried creating users manually and I can't log in as those users (even via SSH), so I think it might be a permission thing provided by the AMI. Does anyone know the best way to set up a login for my Webmin?
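
    One sketch that is usually enough (an assumption on my part, since Webmin authenticates through the system's PAM stack by default): give the existing ubuntu user a password. Webmin will accept it without SSH password logins ever being enabled, since those stay governed by sshd_config.

        sudo passwd ubuntu
        # or create a dedicated admin account with sudo rights:
        sudo adduser webminadmin
        sudo usermod -aG sudo webminadmin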

  • Is it Secure to Grant Apache User Ownership of Directories & Files for WordPress

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu 12 server. Everything runs fine, but there is an issue when it comes to automatic updates and uploading media via WP, as the Apache user "www-data" does not have permission to write to the directories. "user1" has full permission. All my directories have permissions of 0755 and files 0644. My directory setup is as follows: /home/user1/public_html, with all WP files and directories in "public_html". In order to work around the auto-updating and media uploads I've granted the Apache user ownership of the following directories:

        sudo chown www-data:www-data wp-content -R
        sudo chown www-data:www-data wp-includes -R
        sudo chown www-data:www-data wp-admin -R

    I would like to know how secure this is, and if it is not secure, what would be the best solution that allows me to keep all files and directories owned by user1 while still letting WP update automatically and upload media?
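
    A middle-ground sketch (paths assumed from above): keep user1 as owner everywhere, give the www-data group write access only inside wp-content (the one tree WP must write to for uploads and most updates), and leave wp-admin and wp-includes read-only to the web server:

        sudo chown -R user1:www-data /home/user1/public_html
        sudo find /home/user1/public_html -type d -exec chmod 755 {} \;
        sudo find /home/user1/public_html -type f -exec chmod 644 {} \;
        sudo find /home/user1/public_html/wp-content -type d -exec chmod 775 {} \;
        sudo find /home/user1/public_html/wp-content -type f -exec chmod 664 {} \;

    Core auto-updates that rewrite wp-includes/wp-admin will then prompt for FTP credentials instead of writing directly, which is the usual security trade-off.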

  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions at the FAQ many times: http://www.itefix.no/i2/node/37. It does not do what the title claims: it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access and I need my users sandboxed into their home directory only. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution. The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD
            (create a user group to hold all my SFTP users)
        cacls c:\ /c /e /t /d sftp_users
            (for that group, deny access at the top level and all levels below)
        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users
            (allow my user group access to the CopSSH installation directory and its subdirectories)

    For each SFTP user, I create a new Windows user account, then:

        net localgroup sftp_users sftp_user_1 /add
            (add my user to the group I've created)

    Then I open the activate-user wizard for CopSSH, choosing the user and "/bin/sftponly", and leave checked: "Remove copssh home directory if it exists", "Create keys for public key authentication", and "Create link to user's real home directory". This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying all users access to the user home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users
            (deny access for users to the user home directory)

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. As my user list gets long, this will become very cumbersome. Thanks for the help!
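
    If the per-folder route is unavoidable, a batch sketch like this (paths assumed from above; usernames assumed to contain no spaces) keeps it manageable by deriving each ACL from the folder name, which CopSSH makes match the username. Revoking the group's entry instead of denying it avoids the deny-overrides-allow problem, since each user is still a member of sftp_users:

        @echo off
        rem For every home folder: remove the group's entry, then allow the one owner.
        for /d %%U in ("C:\Program Files\CopSSH\home\*") do (
            cacls "%%U" /c /e /t /r sftp_users
            cacls "%%U" /c /e /t /g %%~nxU:C
        )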

  • Excluding executables in winsxs folder

    - by Yasser Sobhdel
    I am facing a big problem: there are many msbuild.exe files in the .NET Framework and winsxs folders, and I don't know whether it is required to set all of them as trusted in the antivirus. The path changes from server to server, which makes life harder! If it matters, I am using Kaspersky Endpoint Security 10. When defining a trusted application, as far as I know, you must point exactly to the executable with a hard-coded path. Thanks

  • Nginx Server Block Port 8081 Path to Root Folder

    - by Pamela
    I'm trying to password-protect all of port 8081 on my Nginx server. The only thing this port is used for is phpMyAdmin. When I navigate to https://www.example.com:8081, I successfully get the default Nginx welcome page. However, when I try navigating to the phpMyAdmin directory, https://www.example.com:8081/phpmyadmin, I get a "404 Not Found" page. Permission for my htpasswd file is set to 644. Here is the code for my server block:

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    I have also tried entirely commenting out the root directive (#root /usr/share/phpmyadmin;), but it doesn't make any difference. Is my problem confined to using the incorrect root path? If so, how can I find the root path for phpMyAdmin? If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
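
    One likely explanation, sketched under the assumption that phpMyAdmin really lives in /usr/share/phpmyadmin: nginx appends the URI to root, so a request for /phpmyadmin is looked up at /usr/share/phpmyadmin/phpmyadmin, which doesn't exist -- hence the 404. Rooting one level higher fixes the mapping, and a PHP handler is still needed or phpMyAdmin's index.php will download rather than run (the php5-fpm socket path is an assumption for this Ubuntu release):

        server {
            listen 8081;
            server_name example.com www.example.com;
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;

            location /phpmyadmin {
                root /usr/share/;
                index index.php;
                location ~ ^/phpmyadmin/(.+\.php)$ {
                    root /usr/share/;
                    include fastcgi_params;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_pass unix:/var/run/php5-fpm.sock;
                }
            }
        }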

  • PHP & MySQL on Mac OS X: Access denied for GUI user

    - by Eirik Lillebo
    Hey! This question was first posted to Stack Overflow, but as it is perhaps just as much a server issue I thought it might be just as well to post it here also. I have just installed and configured Apache, MySQL, PHP and phpMyAdmin on my MacBook in order to have a local development environment. But after I moved one of my projects over to the local server I get a weird MySQL error from one of my calls to mysql_query():

        Access denied for user '_securityagent'@'localhost' (using password: NO)

    First of all, the query I'm sending to MySQL is valid, and I've even tested it through phpMyAdmin with perfect results. Secondly, the error message only happens here, while I have at least 4 other MySQL connections and queries per page. This call to mysql_query() happens at the end of a really long function that handles data for newly created or modified articles. This is basically what it does:

    1. Collect all the data from the article form (title, content, dates, etc.)
    2. Validate the collected data
    3. Connect to the database
    4. Dynamically build the SQL query based on the validated article data
    5. Send the query to the database before closing the connection

    Pretty basic, I know. I did not recognize the username "_securityagent", so after a quick search I came across an article at Apple's Developer Connection talking about some random bug: "Mac OS X's security infrastructure gets around this problem by running its GUI code as a special user, '_securityagent'." Then I tried putting a var_dump() on all variables used in the mysql_connect() call, and every time it returns the correct values (where the username is not "_securityagent", of course). Thus I'm wondering if anyone has any idea why '_securityagent' is trying to connect to my database, and how I can keep this error from occurring when I call mysql_query().

    Update: Here is the exact code I'm using to connect to the database, with a little explanation: the connection error happens at a call to mysql_query() in function X in class_1; class_1 uses class_2 to connect to the database; class_2 reads a config file with the database connection variables (host, user, pass, db); class_2 connects to the database through the following function:

        var $SYSTEM_DB_HOST = "";

        function connect_db() {
            // Reads the config file
            include('system_config.php');
            if (!($SYSTEM_DB_HOST == "")) {
                mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS);
                @mysql_select_db($SYSTEM_DB);
                return true;
            } else {
                return false;
            }
        }
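
    A guess worth testing, sketched below (the guard is mine, not from the post): if that include() fails to find system_config.php -- easy when the working directory differs from what you expect -- the connection variables are simply undefined, and mysql_connect() silently falls back to the php.ini defaults, which can surface as an unexpected username. Failing loudly makes that visible:

        function connect_db() {
            // Resolve the config relative to this file, not the CWD,
            // and stop hard if it is missing.
            require dirname(__FILE__) . '/system_config.php';
            if ($SYSTEM_DB_HOST == "") {
                return false;
            }
            if (!mysql_connect($SYSTEM_DB_HOST, $SYSTEM_DB_USER, $SYSTEM_DB_PASS)) {
                trigger_error(mysql_error(), E_USER_WARNING);
                return false;
            }
            return mysql_select_db($SYSTEM_DB);
        }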

  • How do I redirect all requests to files in the root folder to point to another folder?

    - by purpletonic
    I've moved all of my files from the root of my website into a subfolder. I'd like to set up an Apache redirect to point to the files without affecting the other subfolders in my site. E.g.:

        /index.html              -- redirect to --  /subfolder1/index.html
        /file1.html              -- redirect to --  /subfolder1/index.html
        /subfolder2/index.html   -- no redirect

    Can anyone help me with the redirect rule that I need to write for this? Thanks.
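
    Assuming each root-level file should map to its namesake under /subfolder1/ (the second example above looks like a copy-paste slip), a mod_rewrite sketch for the site-root .htaccess:

        RewriteEngine On
        # Bare domain goes to the moved index page.
        RewriteRule ^$ /subfolder1/index.html [R=301,L]
        # Any root-level .html file (no slash in the path) moves with its name.
        RewriteRule ^([^/]+\.html)$ /subfolder1/$1 [R=301,L]

    Requests like /subfolder2/index.html contain a slash, so neither rule touches them.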

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to Nginx, and I have a confusion about nginx configuration. My web site contains folders in different locations:

        location / {
            root /Path1;
        }
        location ^~ /personal {
            alias /Path2;
        }

    When I query http://mysite/personal, I am accessing the content of /Path2 instead of /Path1. Now I want to add a sub-directory of /personal with specific configuration, so I added:

        location /personal/download {
            autoindex on;
        }

    But I got a 404 error when querying http://mysite/personal/download. According to the error log, I am directed to /Path1/personal/download, which is not correct. How can I configure nginx such that all access to http://mysite/personal/* is directed to the same directory in /Path2?
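
    The behavior has a simple explanation: alias applies only to the location block that declares it. The new location /personal/download matches first for those URIs and, having no alias of its own, falls back to the server-level root (/Path1). A sketch of one fix (paths assumed from above) is to give the sub-location its own alias:

        location ^~ /personal {
            alias /Path2;
        }
        location ^~ /personal/download {
            alias /Path2/download;
            autoindex on;
        }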

  • sendmail user unknown - debian lenny

    - by Rimian
    My PHP mail() function just stopped working a short while ago: it's started returning FALSE. I am not much of a sysadmin, so please forgive my ignorance. I set my php.ini sendmail_path option to:

        sendmail_path = /usr/sbin/sendmail -t -i

    and restarted Apache. Then I learnt how to test sendmail like so:

        sudo /usr/sbin/sendmail -bv [email protected]
        [email protected]... deliverable: mailer esmtp, host example.com., user [email protected]

    The example email is a real mailbox. I have also seen "unknown user" messages in the mail log. Can anyone please help me debug this? Cheers, Rim
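
    A debugging sketch: -bv only verifies that an address resolves, so the next step is to push a real message through while watching the verbose output and the log (the address is a placeholder):

        echo "test body" | /usr/sbin/sendmail -v [email protected]
        tail -f /var/log/mail.log

    If sendmail accepts the message but the log then shows "User unknown", the rejection is coming from the receiving side (or a local alias), not from PHP itself.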

  • Looking for FTP server that allows user management from database

    - by hughesdan
    I'm planning a server application that will handle files uploaded via FTP. The application must parse the text documents it receives and write them to a database (most likely a document-oriented database like Mongo), and it must also relay all large binary files it receives to Amazon S3 for storage and hosting. I'd like to manage all aspects of the FTP server programmatically. For example, when a user registers via a web page, the application should be able to create the user account in the database and provision a directory on the server for receiving files. I'm using a Linux server but am otherwise open to considering any programming language or framework. I experimented with VSFTPD but didn't like the way it relies on config files and the creation of users and directories via the command line. Can someone please recommend a server framework I should consider? I'm a little biased toward solutions that leverage JavaScript/Node.js or Python, but I'm open to anything that can run on a Linux box.
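
    One established option to evaluate (my suggestion, not from the post): ProFTPD's mod_sql authenticates FTP users directly from a SQL table your web app writes to, so registration and provisioning stay in application code. A minimal config sketch (MySQL shown; the database, credentials, and column names are placeholders):

        # /etc/proftpd/sql.conf
        SQLBackend      mysql
        SQLConnectInfo  ftpdb@localhost ftpuser ftppass
        SQLAuthTypes    Crypt
        SQLUserInfo     users username password uid gid homedir shell
        CreateHome      on

    Pure-FTPd offers a similar MySQL/PostgreSQL backend if you prefer it; neither speaks to Mongo directly, so a small relational table just for FTP accounts is the usual compromise.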

  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow; I am reposting the question and adding my my.cnf at the end of the post. So far in my 10+ years of experience with Linux, all the permission problems I've ever encountered have been successfully solved with chmod -R 777 /path/where/the/problem/has/occured (every lie has a grain of truth in it :). This time the trick doesn't work, so I'm turning to you for help. I'm compiling MySQL server from scratch with zc.buildout (www . buildout . org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching this from within a supervisor (supervisord . org) script, and when used on the deployment server it'll need to be launched with root permissions (so that the nginx server, launched with the same script, has access to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (apache and nginx work as expected). http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The mysql datadir and all the mysql server binary files have 777 permissions; table mysql.plugin does exist and has 777 permissions (so why can't it open the mysql.plugin table?); "sudo touch mysql_datadir/tmp/file" does create a file (so why can't it create/write to /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz?). Running chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) has no effect: when I launch it as a casual user, it starts; when as root, it fails. Does the mysql server, even when started as root, try to operate as another, let's say "mysql", user? But even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly. I do know that it might be a better idea simply to launch nginx as root and mysql as just a user, but this error irritated me enough to devote the energy not only to "make things work", but to make things work exactly as I intended initially, as a proof of concept that it's possible. And this is the generated error:

        091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
        /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
        091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
        091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
        091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
        091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
        091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
        091213 20:02:55 [ERROR] Aborting
        091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
        091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

        [client]
        socket          = /home/toinbis/.../runtime/var/pids/mysql.sock
        port            = 8002

        [mysqld_safe]
        socket          = /home/toinbis/.../runtime/var/pids/mysql.sock
        nice            = 0

        [mysqld]
        #
        # * Basic Settings
        #
        socket          = /home/toinbis/.../runtime/var/pids/mysql.sock
        port            = 8002
        pid-file        = /home/toinbis/.../runtime/var/pids/mysql.pid
        basedir         = /home/toinbis/.../parts/mysql
        datadir         = /home/toinbis/.../runtime/mysql_datadir
        tmpdir          = /home/toinbis/.../runtime/mysql_datadir/tmp
        skip-external-locking
        bind-address    = 127.0.0.1
        log-error       = /home/toinbis/.../runtime/logs/mysql_errorlog
        #
        # * Fine Tuning
        #
        key_buffer              = 16M
        max_allowed_packet      = 32M
        thread_stack            = 128K
        thread_cache_size       = 8
        myisam-recover          = BACKUP
        #max_connections        = 100
        #table_cache            = 64
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit       = 1M
        query_cache_size        = 16M
        #
        # * Logging and Replication
        #
        # Both locations get rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        #log_bin = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
        #binlog_format = ROW
        #read_only = 0
        #expire_logs_days = 10
        #max_binlog_size = 100M
        #sync_binlog = 1
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 64M
        innodb_log_file_size = 16M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_file_per_table
        innodb_locks_unsafe_for_binlog = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Any ideas much appreciated! regards, to

    P.S. sorry for messy hyperlinks, it's my first post and the anti-spam feature of SF doesn't allow me to post them properly :)
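
    One hypothesis worth checking (an assumption about mysqld_safe's defaults): when started by root, mysqld_safe typically drops privileges and runs mysqld as the mysql user, and chmod -R 777 on the datadir doesn't help if that user can't traverse a parent directory such as /home/toinbis (home directories are often mode 700 or 750). Two quick checks:

        # can the mysql user actually reach the datadir?
        sudo -u mysql ls /home/toinbis/.../runtime/mysql_datadir
        # show the permissions of every directory along the path
        namei -m /home/toinbis/.../runtime/mysql_datadir

    If the traversal fails, either add the execute bit on the blocking parent directory or pass an explicit user so mysqld keeps running as the account that already works, e.g. mysqld_safe --user=toinbis.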

  • IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in

    - by RSolberg
    I just launched my first IIS FTP site following many of the tutorials from IIS.NET. I'm using IIS users and permissions rather than anonymous and/or basic. This is what I'm seeing while trying to establish the connection:

        Status:   Resolving address of ftp.mydomain.com
        Status:   Connecting to ###.###.##.###:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220 Microsoft FTP Service
        Command:  USER MyFTPUser
        Response: 331 Password required for MyFTPUser.
        Command:  PASS ********************
        Response: 530 User cannot log in.
        Error:    Critical error
        Error:    Could not connect to server
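
    A sketch of one frequent cause (the site name is a placeholder): the IIS FTP service returns 530 after a correct password when no FTP authorization rule allows the user. This adds an allow rule from the command line:

        %windir%\system32\inetsrv\appcmd set config "My FTP Site" /section:system.ftpServer/security/authorization /+"[accessType='Allow',users='MyFTPUser',permissions='Read, Write']" /commit:apphost

    If a rule already exists, the next things to check are whether the IIS Manager Authentication provider is enabled for the FTP site and whether the IIS Manager user is actually mapped to it.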
