Search Results

Search found 2587 results on 104 pages for 'acess denied'.

Page 89/104 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • Run a script on user connection on the VM host

    - by Scott Chamberlain
    I have a server running a Virtual Desktop Managed Pool. When a user logs in, I would like a script to check the number of available VMs and, if it is below a threshold, add additional VMs to the pool. The script to check the load and add to the pool is not the problem; I have that already figured out:

      $collectionName = "Test1";
      $rdvh = "vmHost.example.com";
      $minAvailableVMs = 2;
      Import-Module RemoteDesktop;
      $pool = Get-VirtualDesktopCollection -CollectionName $collectionName;
      $availableVMs = $pool.Size - ($pool.Size * $pool.PercentInUse / 100);
      $status = Get-VirtualDesktopCollectionJobStatus $collectionName
      # only add new servers if we are below the threshold and in the JOB_COMPLETED state
      if($availableVMs -lt $minAvailableVMs -and $status.Status -eq [Microsoft.RemoteDesktopServices.Management.VirtualDesktopCollectionJobStatus]::JOB_COMPLETED) {
          Add-RDVirtualDesktopToCollection -CollectionName $collectionName -VirtualDesktopAllocation @{"$rdvh" = 1}
      }

    The problem I am having is: how do I run the above script on the virtualization host/Connection Broker/some other server when a user connects? I don't think it would be appropriate to run this as a logon script inside the VM. I think there is a way to do this on the management side, but I don't know the new scripting interface in Server 2012 R2 well enough to know which cmdlets I should look for to schedule this. EDIT: I know System Center is perfect for this, but I do not have a license and was denied when I asked for it to be added to the budget.
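
    If an every-few-minutes check is acceptable, one low-tech option is a scheduled task on the Connection Broker that simply re-runs the script on a short repetition interval; a truly event-driven version could instead use schtasks /SC ONEVENT keyed to the broker's Remote Desktop connection events, but the exact event channel and event ID would need to be confirmed in Event Viewer first. A minimal polling sketch (untested; the script path and task name are made up for illustration):

      # A sketch (assumptions: the threshold script above is saved as C:\Scripts\Expand-RDPool.ps1
      # on the Connection Broker, and a 5-minute poll is close enough to "on connect")
      $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
                 -Argument '-ExecutionPolicy Bypass -File C:\Scripts\Expand-RDPool.ps1'
      $trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
                 -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration ([TimeSpan]::MaxValue)
      Register-ScheduledTask -TaskName 'RDPool-AutoGrow' -Action $action -Trigger $trigger `
                 -User 'SYSTEM' -RunLevel Highest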

    Read the article

  • Create account for service

    - by Andy
    I am configuring a new server. The server runs Hudson, which is going to copy some files from this server to another. The other server is a virtual machine; both are running Windows Server 2012. Hudson is started on server A with log on as "Local System". When I come to the copy phase it says "Access denied". Changing the log on to "Administrator" works, but I guess that is bad practice. I do not have much experience with user management. I tried to create a dedicated hudson account on both servers A and B, and tried to set the service to log on as that account in service management, but the service doesn't start. How would you create an account for this particular service that has access to the shared folder on server B and can be used to start the service on server A? I guess I need two accounts with the same username and password on server A and server B? The folder on server B is shared with Everyone and the guest account is enabled.
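
    Without a domain, matching local accounts (same username and password on both machines) are indeed the usual way to let server B authenticate a service running on server A. A rough sketch from an elevated prompt; the password and the service name ("hudson_svc") are placeholders, and the account also needs the "Log on as a service" right, which services.msc grants automatically when you set the logon account there:

      rem on BOTH servers: create the matching local account (password is an example)
      net user hudson P@ssw0rd123 /add

      rem on server B: grant that account share AND NTFS permissions on the target folder,
      rem instead of sharing with Everyone

      rem on server A: point the Hudson service at the new account ("hudson_svc" is hypothetical)
      sc.exe config "hudson_svc" obj= ".\hudson" password= P@ssw0rd123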

    Read the article

  • Mac updated just now, postgres now broken

    - by user52224
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working: when I start my thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app I get:

      PG::Error could not connect to server: Permission denied Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:
    - Running psql from the command line gives the same error
    - I can run pgAdmin 3, connect via it and run SQL, no problems
    - Running which psql shows the version as /usr/bin/psql
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.
    I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres - though the obvious implication is that I did not use the _postgres user. Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.
    Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

      $ sudo su postgreSQL
      bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
      pg_ctl: another server might be running; trying to start server anyway
      server starting
      bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
      2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
      bash-3.2$ exit
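
    The error path (/var/pgsql_socket) suggests the client being picked up is Apple's bundled /usr/bin/psql, which looks for the socket in a different directory than the EnterpriseDB build under /Library/PostgreSQL/9.1 uses. A hedged sketch of things to try, assuming that layout and a "postgres" superuser (paths and user name may differ on your install):

      # 1) Ask each client for its version and see where the running postmaster actually listens
      /usr/bin/psql --version
      /Library/PostgreSQL/9.1/bin/psql --version
      sudo lsof -U | grep PGSQL        # lists the Unix socket file the server has open

      # 2) Bypass the socket mismatch entirely by connecting over TCP
      psql -h localhost -p 5432 -U postgres

      # 3) For the Rails app, do the same in config/database.yml
      #    (add "host: localhost" to the development section)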

    Read the article

  • Nginx/Puma RHEL Unix socket permission error?

    - by Kevin Brown
    When I try to start my Puma server, I get the error:

      /.rvm/gems/ruby-2.1.1/gems/puma-2.9.0/lib/puma/binder.rb:275:in `initialize': Permission denied - connect(2) for "/var/run/nvhbase.sock" (Errno::EACCES)

    My sites-available/nvhbase.conf file:

      upstream nvhbase {
        server unix:/var/run/nvhbase.sock;
      }
      server {
        listen 80 default_server;
        server_name 207.131.132.219;
        root /home/vf032500/dev/nvh/public;
        location / {
          proxy_pass http://unix:/var/run/nvhbase.sock;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection $connection_upgrade;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-Proto https;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_redirect off;
        }
      }

    I don't know a lot about Unix sockets, and everything works fine using TCP/Puma defaults. My Rails app is in my user directory. Is that the problem? The socket is being created in /var/run - I can create it in /tmp instead, but I've heard that's bad practice. And if I start the server on a socket in /tmp, I then can't access it via the server's IP - then what? I'm happy to provide any needed info; I just don't know a whole lot about servers/nginx/Puma.
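
    A common fix is to give the app its own socket directory that both the Puma process and nginx can reach, rather than writing the socket straight into root-owned /var/run. A sketch, assuming Puma runs as the vf032500 user and the nginx workers run as the nginx user (adjust names to your setup; on systems where /var/run is a tmpfs the directory has to be recreated at boot, e.g. from an init script):

      # create a dedicated socket directory owned by the app user, accessible to nginx
      sudo mkdir -p /var/run/puma
      sudo chown vf032500:nginx /var/run/puma
      sudo chmod 0770 /var/run/puma

      # then bind Puma to the new path and point the nginx upstream at the same file:
      #   puma -b unix:///var/run/puma/nvhbase.sock
      #   upstream nvhbase { server unix:/var/run/puma/nvhbase.sock; }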

    Read the article

  • CentOS iptables configuration for WordPress and Gmail SMTP

    - by Fabrizio
    Let me start off by saying that I'm a CentOS newbie, so all info, links and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to configure the server as close to the defaults as possible, but I'd like it to be secure as well (no FTP, custom SSH port). Getting my WordPress site to run as desired, I'm running into some connection problems. Two things are not working:
    - installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber)
    - sending emails from my site using the Gmail SMTP server (Failed to connect to server: Permission denied (13))
    I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic on port 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the terminal are working fine, though. After each change I tried, I restarted the iptables service. This is my iptables configuration (viewed in vim):

      # Generated by iptables-save v1.4.7 on Sun Jun 1 13:20:20 2014
      *filter
      :INPUT ACCEPT [0:0]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p icmp -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 8000 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
      -A INPUT -j REJECT --reject-with icmp-host-prohibited
      -A FORWARD -j REJECT --reject-with icmp-host-prohibited
      -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A OUTPUT -o lo -j ACCEPT
      -A OUTPUT -p tcp -m tcp --dport 8000 -j ACCEPT
      -A OUTPUT -p tcp -m tcp --dport 465 -j ACCEPT
      COMMIT
      # Completed on Sun Jun 1 13:20:20 2014

    Are there any (obvious) issues with my iptables setup considering the above-mentioned problems? Saying that the firewall is doing exactly nothing in this state is also an answer... And again, if you have any other suggestions for me to increase security (considering the basic things I do with this box), I would love to hear them, also the obvious ones! Thanks!
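
    For what it's worth, "Permission denied (13)" from PHP when it tries to open an outbound SMTP or SSH connection on CentOS 6 is more often SELinux than iptables: the default targeted policy stops Apache/PHP from initiating network connections. A quick check/fix sketch, assuming the stock policy booleans (verify on your box before relying on it):

      # see whether the web server is allowed to make outbound connections
      getsebool httpd_can_network_connect

      # allow it persistently (-P survives reboots); this covers both the Gmail SMTP
      # connection and the WordPress ssh2 connection to localhost:8000
      sudo setsebool -P httpd_can_network_connect on

      # any recent denials show up here
      sudo grep httpd /var/log/audit/audit.log | tail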

    Read the article

  • Cacti Login Page: Infinite loop occurs

    - by beicha
    Apache 2.2.3 | PHP 5.1.6 | MySQL 5.0.77. I followed the Cacti installation guide to install the latest Cacti 0.8.7h on CentOS 5.5 (64-bit). The installation of PHP/Apache/MySQL went smoothly until I finished the setup and came to the login page. I can log in at http://.../cacti/index.php with the admin account, but the next page redirects back to the same login page with the message "Please enter your Cacti user name and password below". This is an infinite loop! If I use a wrong admin password I get the correct error message "Invalid User Name/Password Please Retype". [Same problem here] If I log in with the Guest/guest account, "Error: Access Denied, user account disabled." is displayed. The Cacti log file (./cacti/log/cacti.log) is empty. I Googled, and it seems this problem has existed for a long time, but no follow-up solutions were posted on the forum threads I found. Can anyone help me with this problem? If more information is needed, please let me know. Nov 18, 2011 UPDATE: I re-installed Cacti; this question remains UNSOLVED.
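
    A successful login that bounces straight back to the login form is often a PHP session problem rather than a Cacti bug: if PHP cannot write the session file, the authenticated session is lost on the next request and Cacti silently re-prompts. A diagnostic sketch, assuming the stock CentOS 5 paths (adjust to whatever your php.ini actually says):

      # where PHP stores sessions, and whether Apache can write there
      grep session.save_path /etc/php.ini
      ls -ld /var/lib/php/session        # CentOS default; should be writable by the apache group
      sudo chown root:apache /var/lib/php/session
      sudo chmod 770 /var/lib/php/session

      # also watch for warnings that break the session cookie header
      sudo tail -f /var/log/httpd/error_log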

    Read the article

  • How to change mount to grant user write permissions?

    - by nals
    I am on TomatoUSB, and using the feature to have a NAS. The only way I can write to the Samba share is if I force root:

      [global]
      interfaces = 127.0.0.1, 192.168.1.1/24
      bind interfaces only = no
      workgroup = WORKGROUP
      netbios name = TOMATO
      security = share
      wins support = yes
      name resolve order = wins lmhosts hosts bcast
      guest account = nobody

      [Public]
      path = /mnt/sda2
      read only = no
      public = yes
      only guest = yes
      guest ok = yes
      browseable = yes
      comment = Network share
      force user = root
      writeable = yes

    I don't really like the idea of having to use root to allow write access to my share. I already have a Samba account named nobody to allow access to the share, but every time I try to write I get an access denied error. fstab:

      /dev/sda2 /mnt/sda2 vfat defaults 0 0

    Furthermore, every time I try to chmod 777 /tmp/mnt/sda2 the permissions are not changed, and no error is produced. They stay 755:

      drwxr-xr-x 2 root root 4096 Jun 4 01:49 sda2

    Basically: how can I give the user nobody write permissions on my mount? dev name: /dev/sda2, dev mount: /tmp/mnt/sda2
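
    chmod appearing to succeed while changing nothing is the giveaway: vfat has no Unix permission bits, so ownership and mode are fixed at mount time and chmod is a no-op. A sketch that hands the mount to the guest account instead of root; the uid/gid numbers are assumptions, so check nobody's numeric IDs in /etc/passwd on the router first:

      # look up the numeric uid/gid of the samba guest account
      grep nobody /etc/passwd /etc/group

      # remount the FAT partition as that user, world-writable (65534 is a common
      # value for nobody, but use whatever the previous command reports)
      umount /mnt/sda2
      mount -t vfat -o uid=65534,gid=65534,umask=0000 /dev/sda2 /mnt/sda2

      # the equivalent fstab line would be:
      # /dev/sda2  /mnt/sda2  vfat  uid=65534,gid=65534,umask=0000  0 0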

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log in to my remote *nix servers with ssh and sftp. I never had any problem until now, but I cannot connect to an Ubuntu 9.10 machine:

      user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
      [...]
      debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
      debug1: Found key in /home/user/.ssh/known_hosts:14
      debug2: bits set: 494/1024
      debug1: ssh_rsa_verify: signature correct
      debug2: kex_derive_keys
      debug2: set_newkeys: mode 1
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug2: set_newkeys: mode 0
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug2: service_accept: ssh-userauth
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
      debug2: key: .ssh/Ganymede_key ((nil))
      debug1: Authentications that can continue: publickey,password,keyboard-interactive
      debug1: Next authentication method: publickey
      debug1: Offering public key: .ssh/Ganymede_key
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: publickey,password,keyboard-interactive
      debug1: Trying private key: .ssh/Ganymede_key
      debug1: read PEM private key done: type RSA
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: publickey,password,keyboard-interactive
      debug2: we did not send a packet, disable method
      debug1: Next authentication method: keyboard-interactive
      debug2: userauth_kbdint
      debug2: we sent a keyboard-interactive packet, wait for reply
      debug2: input_userauth_info_req
      debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with a "Permission denied (publickey)." state. Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a FileZilla sftp session instead:

      12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
      12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
      12:08:01 Trace: Access granted
      12:08:01 Trace: Opened channel for session
      12:08:01 Trace: Started a shell/command
      12:08:01 Status: Connected to ganymede.server.com
      12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
      12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
      12:08:02 Trace: CControlSocket::ResetOperation(0)
      12:08:02 Status: Retrieving directory listing...
      12:08:02 Trace: CSftpControlSocket::SendNextCommand()
      12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
      12:08:02 Command: pwd
      12:08:02 Response: Current directory is: "/root"
      12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
      12:08:02 Trace: CControlSocket::ResetOperation(0)
      12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
      12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
      12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
      12:08:02 Trace: CControlSocket::ResetOperation(0)
      12:08:02 Status: Directory listing successful

    Any thoughts? M
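
    The client log shows the server rejecting the offered key and falling back to keyboard-interactive, while the FileZilla session lands in /root, so it is almost certainly logging in as a different account. The first thing to rule out is the server-side setup for the user the ssh command uses; a checklist sketch, assuming a standard OpenSSH layout on the Ubuntu box:

      # on ganymede.server.com, as the user the ssh command logs in as:
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys
      ls -ld ~ ~/.ssh ~/.ssh/authorized_keys   # the home directory must not be group/world-writable

      # watch sshd's side of the story while retrying from the client
      sudo tail -f /var/log/auth.log

      # on the client, print the public half of the key and compare it by eye
      # with the entries in that account's ~/.ssh/authorized_keys on the server
      ssh-keygen -y -f ~/.ssh/Ganymede_key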

    Read the article

  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help.

    The set up:
    Server 1 (CIFS client) - CentOS 6.3, AD integrated using Samba/Winbind & idmap_ad
    Server 2 (CIFS server) - CentOS 6.3, AD integrated using Samba/Winbind & idmap_ad
    All users (apart from root) are AD authenticated, and this, including groups etc., works happily.

    What's working: I have created a share on Server 2:

      [share2]
      path = /srv/samba/share2
      writeable = yes

    Permissions on the share:

      drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2

    I can log into a Windows machine as user5 (a member of domain users) and everything works as it should; for example, if I create a file it shows the correct permissions and attributes on both the Windows and the Linux sides.

    Where I fall down: I mount the share on Server 1 using:

      # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah

    Or using fstab:

      //server2/share2 /mnt/share2 cifs credentials=/blah/.creds 0 0

    This mounts fine, but... if I su, or log onto Server 1 as a normal user (say user5), and try to create a file I get:

      # touch test
      touch: cannot touch `test': Permission denied

    Then if I check the folder, the file was created, but as the cifsmount user:

      -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test

    I can rename, delete, move or copy stuff around as user5; I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto Server 2 as user5 and access the folder locally it all works as it should. Can anyone point me in the right direction?
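
    Everything the kernel does over that mount is performed as the single account given at mount time (cifsmount), no matter which local user issues the call, which is why files come out owned by cifsmount. One hedged way around it, assuming the CentOS 6 kernel and cifs-utils on the box support it and Kerberos against AD is already working (the servers are AD-joined), is the multiuser mount option, where each logged-in user supplies their own identity:

      # a sketch (untested): mount once, but let each logged-in user access the share
      # with their own AD credentials instead of the shared 'cifsmount' account
      mount -t cifs //server2/share2 /mnt/share2 -o multiuser,sec=krb5,username=cifsmount

      # with password-based security instead of Kerberos, each user stores their own credential:
      #   cifscreds add server2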

    Read the article

  • Apache config: Permissions, Directories and Locations

    - by James Murphy
    I'm trying to get my head around Apache configuration to fix a problem I'm having, but after a few hours I've decided to ask here. This is what I've got at the moment:

      DocumentRoot "/var/www/html"

      <Directory />
          Options None
          AllowOverride None
          Deny from all
      </Directory>

      <Directory /var/svn>
          Options FollowSymLinks
          AllowOverride None
          Allow from all
      </Directory>

      <Directory /opt/hg>
          Options FollowSymLinks
          AllowOverride None
          Allow from all
      </Directory>

      <Location /hg>
          AuthType Digest
          AuthName "Engage HG"
          AuthDigestProvider file
          AuthUserFile /opt/hg/hgweb.users
          Require valid-user
      </Location>

      WSGISocketPrefix /var/run/wsgi
      WSGIDaemonProcess hg processes=3 threads=15
      WSGIProcessGroup hg
      WSGIScriptAlias /hg "/opt/hg/hgweb.wsgi"

      <Location /svn>
          DAV svn
          SVNPath /var/svn/repos
          AuthType Basic
          AuthName "Subversion"
          AuthUserFile /etc/httpd/conf/users
          require valid-user
      </Location>

    I'm trying to get my head around how it's all laid out and how directories relate to locations, etc. For /hg I get asked for a password, but for /svn I get a 403 Forbidden... The error I get is:

      [client 10.80.10.169] client denied by server configuration: /var/www/html/svn

    When I remove the entry it works fine... I can't figure out how to get it linking to the /var/svn directory.

    Read the article

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to specific IPs only, and I've got the following problem: when I access www.example.com the deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without being processed. I want to deny access to all files on the website for all IPs but mine. How should I do that? Here's the config I have:

      server {
          listen 80;
          server_name example.com;
          root /var/www/example;

          location / {
              index index.html index.php; ## Allow a static html file to be shown first
              try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
              expires 30d; ## Assume all files are cachable
              allow my.public.ip;
              deny all;
          }

          location @handler { ## Common front handler
              rewrite / /index.php;
          }

          location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler
              rewrite ^(.*.php)/ $1 last;
          }

          location ~ .php$ { ## Execute PHP scripts
              if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
              expires off; ## Do not cache dynamic content
              fastcgi_pass 127.0.0.1:9001;
              fastcgi_param HTTPS $fastcgi_https;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include fastcgi_params; ## See /etc/nginx/fastcgi_params
          }
      }
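
    The allow/deny pair only lives inside location /, so a request that matches the PHP location never passes through it, and a PHP URI that falls outside the fastcgi handler can end up being served as a raw file. A hedged sketch of one way to restructure it (my.public.ip is the placeholder from the question; the surrounding locations stay as they were):

      # access rules inherited by every location: declare them once at server{} level
      server {
          listen 80;
          server_name example.com;
          root /var/www/example;

          allow my.public.ip;   # your address
          deny all;             # everyone else gets 403 for every location below

          # note the escaped dot; an unescaped "." matches any character
          location ~ \.php$ {
              try_files $uri =404;             # never hand non-existent scripts to FastCGI
              fastcgi_pass 127.0.0.1:9001;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include fastcgi_params;
          }
          # ...remaining locations as before...
      }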

    Read the article

  • Error in MySQL "max_allowed_packet" on GoDaddy VPS

    - by focusmantra
    I am getting the error "Serious session error detected. Please notify administrator, this problem is most probably caused by small value in max_allowed_packet MySQL setting." This error generally appears every 20-25 minutes; when it does, it logs the user out, they log in again, and after some time the same issue occurs again. I tried changing the max_allowed_packet setting but got the error "access denied; You need SUPER privilege for this operation". I also tried SET SESSION, but got "SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value". The website is hosted on a GoDaddy CentOS VPS, which I access via PuTTY or cPanel. The website is built on Moodle 2.0.3, i.e. PHP. My developers used to fix this, but warned it would come back whenever the server restarts. GoDaddy says to move to a dedicated server, which I would do, but I don't have the money at present. I am trying to find out how the developers applied the temporary fix, i.e. the one that lasts until the server restarts.
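
    On a VPS with root shell access the persistent fix doesn't need SUPER at all: set the value in the server's config file and restart MySQL. The temporary, until-restart fix your developers likely used is SET GLOBAL from a root MySQL session. A sketch, assuming a stock /etc/my.cnf and the mysqld service name used on CentOS (the 64M figure is only an example):

      # persistent: add under the [mysqld] section of /etc/my.cnf
      #   [mysqld]
      #   max_allowed_packet=64M
      sudo vi /etc/my.cnf
      sudo service mysqld restart

      # temporary (lost on restart): as the MySQL root user, which has SUPER
      mysql -u root -p -e "SET GLOBAL max_allowed_packet = 64*1024*1024;"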

    Read the article

  • Bad HD and robocopy

    - by acidzombie24
    After my HD gave me CRC errors I wanted to copy one drive to another, and picked up a new 1 TB HD. I am using the command robocopy G: J: /MIR /COPYALL /ZB. First it tried copying a file a few times (I didn't count; it's not in my window anymore) and got an access denied error, error 5. Then it tried again and locked up. I tried copying that specific file (14 MB) manually and Windows says "can't read from source file or disk". I started robocopy again. Hopefully it will skip the file after a failed attempt or two, but what options can I use to say "if it doesn't work, continue to the next file"? It looked like that is what it was doing, but for this last one it retried more than 4 times and finally locked up. I'm open to other copy solutions, though I prefer built-in ones. I am using Windows 7. Also, how might I do this without the /MIR option - is /S /E good enough? flag reference here -edit- I see I can control retries with /R: but I am still open to alternative solutions.
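
    Following up on the /R: idea, the pair of switches that keeps robocopy moving past unreadable files is /R (retries per file) and /W (seconds between retries), and /E copies subdirectories (including empty ones) without /MIR's deletions on the destination. A sketch; the log path is just an example:

      robocopy G:\ J:\ /E /COPYALL /ZB /R:1 /W:1 /LOG:C:\robocopy-rescue.log /TEE

    The /LOG plus /TEE combination keeps a record of exactly which files were skipped so they can be chased individually later.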

    Read the article

  • What could cause Windows 7 to hang whenever I install something?

    - by Larsenal
    I've had this problem when installing several different programs (iTunes, Adobe Acrobat Reader just to name two). Regardless of what the program is, the install usually gets at least 90% through the process and then just hangs. I don't see anything bad in the event log besides the following (and this didn't occur exactly at the time of install): wuaueng.dll (964) SUS20ClientDataStore: A request to write to the file "C:\Windows\SoftwareDistribution\DataStore\DataStore.edb" at offset 16252928 (0x0000000000f80000) for 32768 (0x00008000) bytes succeeded, but took an abnormally long time (185 seconds) to be serviced by the OS. This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. I've run check disk and it passed. I've had some problems with BIOS settings in the past with Windows 7, but I'm not sure whether that could be related. Update... I also see this error in the event log: Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied. . This is often caused by incorrect security settings in either the writer or requestor process. Operation: Gathering Writer Data Context: Writer Class Id: {e8132975-6f93-4464-a53e-1050253ae220} Writer Name: System Writer Writer Instance ID: {33493f01-ac1b-4efb-a378-3053ab03100d} One last wrinkle.... I see "Previous versions" of c:\ which look like they correspond to the time of attempted installation.

    Read the article

  • What permissions do I need to move a folder?

    - by isme
    In the root of my drive there exists a folder called SourceControl that contains all the working copies of all my programming projects. I would like to move the folder to my user directory (\Users\Me), but something about the permissions on the folder forbids me. I don't remember how I created the folder. When I execute the move command: MOVE \SourceControl \Users\Me I receive the following error: Access is denied. I have resolved a similar problem in the past using the Takeown utility to assign ownership of the file to me, so I tried this command next: TAKEOWN /F \SourceControl It returns the following error: ERROR: The current logged on user does not have ownership privileges on the file (or folder) "C:\SourceControl". I've just learned about the Icacls utility, which can inspect and modify file permissions. I used this command to inspect the permissions on the folder: ICACLS \SourceControl It produced this list: \SourceControl BUILTIN\Administrators:(I)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) NT AUTHORITY\SYSTEM:(I)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) NT AUTHORITY\Authenticated Users:(I)(M) NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M) I think this means that normal user accounts, like mine, have permission only to read and execute (RX) here, while administrator accounts have full control (F). I used Icacls to confer full control of the directory to my user account with this command: ICACLS \SourceControl /grant:r Me:F The command produces this output: processed file: \SourceControl Successfully processed 1 files; Failed processing 0 files Now inspection of the permissions produces this output: \SourceControl Domain\Me:(F) BUILTIN\Administrators:(I)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) NT AUTHORITY\SYSTEM:(I)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) NT AUTHORITY\Authenticated Users:(I)(M) NT AUTHORITY\Authenticated Users:(I)(OI)(CI)(IO)(M) But after this the move command still fails with the same error. Is it possible to move this folder without invoking administrator rights? If not, how should I do it as administrator?
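
    For what it's worth, the grant that was added only landed on the folder itself: the (I) entries are inherited ACEs, and the files and subfolders underneath may still carry only the original read-and-execute grant, which is enough to block a move (moving needs delete rights on everything in the source tree). A sketch that pushes an inheritable grant down the tree and retries the move; whether it works without elevation depends on whether you are allowed to change those child ACLs at all:

      rem (OI)(CI) makes the grant inheritable; /T applies it to everything already in the tree
      icacls C:\SourceControl /grant "Me":(OI)(CI)F /T

      rem then retry the move
      move C:\SourceControl C:\Users\Me\SourceControl

    If that still fails, taking ownership recursively from an elevated prompt (takeown /F C:\SourceControl /R /D Y) and then re-running the icacls grant is the usual fallback.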

    Read the article

  • Problem running “Central Administration” website after Windows update on Windows Server 2003 Standard

    - by Magdy Roshdy
    I had WSS 2.0 and then upgraded to WSS 3.0; the old installation database was SQL 2000, and now I have another SQL Server instance called server_name\MICROSOFT##SSEE. After the upgrade everything worked fine, our team started to use the portal, and we added a lot of documents and activity to it. The problem started after installing Windows updates: the website suddenly stopped working, giving me the error "Cannot connect to the configuration database". If I try to open the SharePoint Products and Technologies Configuration Wizard, it gives me a strange error saying: "An exception of type Microsoft.SharePoint.PostSetupConfiguration.PostSetupConfigurationTaskException was thrown. Additional exception information: SharePoint Products and Technologies cannot be configured. The current installation mode does not support SKU to SKU upgrades because there exists an older version of Windows SharePoint Services that must be upgraded first". At this post: http://stackoverflow.com/questions/114398/iis-error-cannot-connect-to-the-configuration-database/249494#249494 the second answer describes the same problem and suggests a solution, but I don't understand it well. As suggested there, I set the identity of the SharePoint web site's app pool to "IWAM_server_name"; after that the error changed, as the answer said it would, and the web site now gives me "Server Application Unavailable". When I checked the Event Viewer on the server I found that ASP.NET 2.0 throws this exception: "Could not load file or assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Access is denied." I don't know how to solve this problem. I really want to get the site working again, because our team really needs those documents. I hope someone can help me.

    Read the article

  • Running batch file through a service.

    - by wallz
    I'm trying to schedule a batch file to run through a third party application, however the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Also using the Windows Schedule will also succeed. Basically, the 3rd party software will schedule the .BAT file and it shows success within the 3rd party user interface. The difference between running from the command prompt and the software, is that the software will use its Windows service to launch the batch. The 3rd party software will show success since it was able to successfully call the .BAT file to run, however it has no control of the other EXE's that's being called within the script. I'm able to run a simple .BAT file in the 3rd party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE which launches Excel to create a file to a location. The .bat file calls something.exe, which then calls Excel.exe: C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot Do you think it's a permissions issue? I used Process Monitor to verify any Access Denied errors but everything seems to be working according to the trace. It worked on a non-64-bit OS, I'm currently using Win2008 64-bit.
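
    Excel automation from a service is a known trouble spot: Microsoft does not support Office automation in non-interactive sessions, and on 64-bit Windows a commonly reported symptom is Excel silently failing because the system profile has no Desktop folder. A frequently cited workaround sketch (worth trying, but it remains unsupported):

      mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"
      mkdir "C:\Windows\System32\config\systemprofile\Desktop"

    If that doesn't help, running the third-party scheduler's service under the same real user account that works from the command prompt, rather than its default service account, is the other usual suspect.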

    Read the article

  • Strange ssh key issue

    - by user55714
    Scenario 1: I am doing this from the /home/deploy directory. I am trying to set up SSH with GitHub for Capistrano deployment. This has been an absolute nightmare. When I do ssh [email protected] as the deploy account I get Permission denied (publickey). So maybe the key is not being found; if I do ssh-add /home/deploy/.ssh/id_rsa I get "Could not open a connection to your authentication agent" (I did verify that the ssh-agent was running). If I do exec ssh-agent bash and then repeat the ssh-add, the key does get added and I can ssh into GitHub. But when I exit the SSH connection to my server and ssh back in, I can't ssh into GitHub anymore! Scenario 2: if I log in to my remote server and then cd into my .ssh directory and ssh into GitHub, it all works fine. I guess there is a problem with locating the key, and for some reason the agent isn't functioning correctly. Any ideas? Here is a pastie with more details - my .bashrc, permissions etc.: http://pastie.org/pastes/1190557/
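
    Each new login gets a fresh environment, so an agent started by hand in one session (exec ssh-agent bash) is gone, or at least unreachable, in the next; Scenario 2 works because ssh finds the key by its relative path once you are inside ~/.ssh. Two hedged sketches: tell ssh where the key lives so no agent is needed, or start/reuse an agent at login time.

      # Option 1: no agent at all - point ssh at the key via ~/.ssh/config on the server
      cat >> /home/deploy/.ssh/config <<'EOF'
      Host github.com
          IdentityFile /home/deploy/.ssh/id_rsa
      EOF
      chmod 600 /home/deploy/.ssh/config

      # Option 2: start an agent per login shell (a minimal sketch for ~/.bash_profile)
      if [ -z "$SSH_AUTH_SOCK" ]; then
          eval "$(ssh-agent -s)" > /dev/null
          ssh-add /home/deploy/.ssh/id_rsa
      fi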

    Read the article

  • LDAP groups not applying to filesystem permissions

    - by BeepDog
    System is ArchLinux, and I'm using nss-pam-ldapd (0.8.13-4) to connect myself to ldap. I've got my users and some groups in LDAP: [root@kain tmp]# getent group <localgroups snipped> dkowis:*:10000: mp3s:*:15000:rkowis,dkowis music:*:15002:rkowis,dkowis video:*:15003:transmission,rkowis,dkowis,sickbeard software:*:15004:rkowis,dkowis pictures:*:15005:rkowis,dkowis budget:*:15006:rkowis,dkowis rkowis:*:10001: And I have some directories that are setgid video so that the video group stays, and they're configured g=rwx so that members of the video group can write to them: [root@kain video]# ls -ld /srv/video drwxrwxr-x 8 root video 208 Oct 19 20:49 /srv/video However, members of that group, say dkowis cannot write into that directory: [root@kain video]# groups dkowis mp3s music video software pictures dkowis Total number of groups that dkowis is in is like 7, I redacted a few here. [dkowis@kain wat]$ cd /srv/video [dkowis@kain video]$ touch something touch: cannot touch 'something': Permission denied [dkowis@kain video]$ groups dkowis mp3s music video software pictures I'm at a loss as to why my groups show up in getent groups, but my filesystem permissions are not being respected. I've tried making a new directory in /tmp and setting it's group permissions to rwx, and then trying to write a file in there, it doesn't work. The only time it does work is if I open it wide up allowing o=rwx. That's obviously not what I want, and I'm not able to figure out what my missing piece is. Thanks in advance.
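
    One thing worth checking before blaming the filesystem: supplementary groups are attached to a process at login time via initgroups(), so a session opened before the LDAP group membership existed (or answered from a stale cache) will not carry the video group even though getent shows it. A short diagnostic sketch; the service names assume a systemd Arch setup running nslcd, with nscd only if it happens to be installed:

      id dkowis        # what NSS/LDAP resolves right now
      id               # run inside dkowis's existing shell: the groups that session actually holds

      # if the two disagree, flush the daemons and open a fresh login
      sudo systemctl restart nslcd
      sudo systemctl try-restart nscd 2>/dev/null   # only relevant if nscd is in use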

    Read the article

  • Random "not accessible" "you might not have permission to use this network resource"

    - by Jim Fred
    A couple of computers, both Win7-64, can connect to shares on a NAS server, at least most of the time. At random intervals, these Win7-64 computers cannot access some shares but can access others on the same NAS. When access is denied, a dialog box appears saying "\\myServer\MyShare02 not accessible...you might not have permission to use this network resource..." Other shares, say \\myServer\MyShare01, ARE accessible from the affected computers, and yet other computers CAN access the affected shares. Rebooting the affected computers seems to let them connect to the affected shares again - but then, getting a cup of coffee seems to help too. When the problem appears, the network seems to be OK, e.g. the affected computers can access other shares on the affected server, can ping, etc. Also, other computers can access the affected shares. The NAS server is a NetGear ReadyNAS Pro. The problem might be on the NAS side, such as a resource limitation, but since only two Win7-64 PCs seem to be affected the most, the problem could be on the PC side - I'm not sure yet. I of course searched for solutions and found several tips addressing initial connection problems (use the correct workgroup name, use the IP address instead of the server name, remove security restrictions, etc.) but none of those remedies address the random nature of this problem.

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right. The error stayed, however. The setup is a bit complex, as follows:
    - I have Ubuntu 10.04 as the development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder
    I get the following error in the Apache error log:

      [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it:
    - When I empty the htaccess, nothing changes
    - When I remove the link, the index-include produces some output (in the Apache error log)
    - When I remove the link and replace it with the actual file, I get another error:

      [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
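
    Both errors point at the Apache worker (www-data on Ubuntu) being unable to follow the link: either it cannot traverse the directories leading to the link target under your home directory, or the symlink check fails (missing Options FollowSymLinks, or SymLinksIfOwnerMatch tripping because link and target have different owners). A diagnostic sketch; the project path is hypothetical, so substitute your own:

      # show the permissions of every component on the way to the link and its target
      namei -l /var/www-vhosts/test.localhost/.htaccess
      namei -l /home/youruser/workspace/project/.htaccess   # hypothetical target path

      # the Apache user needs execute (traverse) rights on each parent directory
      sudo chmod o+x /home/youruser /home/youruser/workspace /home/youruser/workspace/project

      # quick test: can www-data actually read the file?
      sudo -u www-data cat /var/www-vhosts/test.localhost/.htaccess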

    Read the article

  • Can SSH into remote server but can't SCP?

    - by ArtfulDodger2012
    I can SSH into remote server just fine using private key authentication with prompt for passphrase. However I'm getting permission denied when I try to SCP a file using the same passphrase. Here's my output: $ scp -v [file] [user]@[remoteserver.com]:/home/[my dir] Executing: program /usr/bin/ssh host [remoteserver.com], user [user], command scp -v -t /home/[my dir] OpenSSH_5.3p1 Debian-3ubuntu7, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /home/[my dir].ssh/config debug1: Applying options for [remoteserver.com] debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to [remoteserver.com] [[remoteserver.com]] port 22. debug1: Connection established. debug1: identity file /home/[user]/.ssh/aws_corp type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu7 debug1: match: OpenSSH_5.3p1 Debian-3ubuntu7 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '[remoteserver.com]' is known and matches the RSA host key. debug1: Found key in /home/[my dir]/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/[my dir]/.ssh/aws_corp debug1: PEM_read_PrivateKey failed debug1: read PEM private key done: type <unknown> Enter passphrase for key '/home/[my dir]/.ssh/aws_corp': debug1: read PEM private key done: type RSA Connection closed by [remote server] lost connection I've searched for answers but can't find quite the same problem or am just being thick. Either way any help is much appreciated. Cheers!
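
    The transfer dies right after the key exchange rather than at the passphrase prompt, and one classic reason for scp behaving worse than ssh is the remote account's shell start-up files printing something: scp speaks its protocol over the stdin/stdout of a non-interactive shell, so any echo or banner in the remote ~/.bashrc corrupts the stream. A couple of hedged checks (this is only one of several possibilities; a restrictive shell or an sshd ForceCommand/Subsystem setting can look similar):

      # this should print exactly "ok" and nothing else; any extra output will break scp/sftp
      ssh -i ~/.ssh/aws_corp [user]@[remoteserver.com] 'echo ok'

      # if there is extra output, guard interactive-only lines at the top of the remote ~/.bashrc, e.g.
      case $- in *i*) ;; *) return ;; esac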

    Read the article

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on client and server. UID and GID are identical on both boxes. I use sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \ -o cache_timeout=600 -o ServerAliveInterval=15 \ [email protected]:/mnt/content /home/xxx/path_to/content to mount the directory on the remote server. When I log in as xxx on the client I have no problems. I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and then $ ls -l /home/xxx/path_to I get this d????????? ? ? ? ? ? content and on $ ls -l /home/xxx/path_to/content I get ls: cannot access content: Permission denied When I do $ ls -l /mnt on the remote server I get drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content What am I doing wrong? The permissions seem to be correct to me. Am I wrong?
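
    FUSE mounts are private to the user who mounted them by default, so matching UIDs and GIDs on the two hosts doesn't help: any other local user simply cannot look inside. The usual remedy is the allow_other mount option, which needs user_allow_other enabled in /etc/fuse.conf when the mount is made by a non-root user. A sketch based on the mount command from the question:

      # one-time: permit non-root users to pass allow_other
      grep user_allow_other /etc/fuse.conf || echo 'user_allow_other' | sudo tee -a /etc/fuse.conf

      # remount with allow_other (add default_permissions if you still want normal
      # uid/gid/mode checks enforced on the client side)
      fusermount -u /home/xxx/path_to/content
      sshfs -o allow_other,default_permissions \
            -o kernel_cache -o auto_cache -o reconnect -o compression=no \
            -o cache_timeout=600 -o ServerAliveInterval=15 \
            [email protected]:/mnt/content /home/xxx/path_to/content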

    Read the article

  • 2012 R2 services will not start after promotion to Domain Controller

    - by Cybersylum
    Having a peculiar issue promoting a Windows 2012 R2 server in a domain at the 2003 domain/forest functional level. I built a new 2012 R2 server and added the following software (LabTech, AppAssure, ESET A/V, & TeamViewer). It activated and appeared to be working fine. I added the Active Directory Domain Services role and completed the configuration (domain/forest prep and DC promotion). All appeared to go well. I rebooted the server, and that's where the peculiar stuff began. I noticed the server indicated it needed to be activated again, but it would not accept the key. I verified the key was good. That's when I noticed the Software Protection service (as well as many other core services - Base Filtering Engine, DHCP Client, firewall, etc.) would not start. The error message for all of them was "Access Denied". I called MS, and they wanted to troubleshoot at a service level. Their fix was to use procmon to identify the resource that needed permissions (registry key, file or folder) and add "Everyone" with full control. That got the services to start, but the problem re-appeared after a reboot. Thinking the issue might have been with the anti-virus package during the promotion process, I rebuilt the DCs from scratch and removed the metadata from AD (as I could not demote the machines: "RPC server unavailable"). I tried to promote the newly built machines again, the only changes to the brand new machines being critical updates. Again the promotion appeared to work fine, but upon reboot (and a long wait to allow replication to occur) similar problems began to re-appear. I have verified that the schema updates are correct (schema version is 69 - for Windows 2012 R2). I am not finding much about this issue through my own searches, so I thought I would post this to see if anyone else has seen anything similar...

    Read the article

  • How to get back the themes feature in Windows XP?

    - by Martín M.
    When I try to set a visual style in Windows XP (the standard Luna, for example), I get one of these two outcomes:
    - An "Access denied" error.
    - It works, but when I restart the computer I get the Classic look again, with no errors. Also, the "Windows and icons" dropdown is grayed out in the "Appearance" tab.
    This is a list of things I have tried, with no results:
    - Making sure "Use visual styles on windows" is checked under System Properties > Advanced > Performance.
    - Restarting the "Themes" service. It starts cleanly, no errors.
    - Applying these two fixes: Kelly's Corner and tweaks.com.
    - Running sfc /scannow and checking the integrity of uxtheme.dll against a clean installation of XP.
    - Restoring the whole \Windows\Resources\Themes directory.
    - Creating a new user. The new user does not seem to suffer from this problem. Maybe this is the solution (create a new user and migrate all the data), but it would be a pain, and I would prefer reinstalling the whole thing.
    I am using Windows XP Professional SP3, with no spyware, no viruses, and no other visible malfunctions. How can I fix this?

    Read the article
