Search Results

Search found 27581 results on 1104 pages for 'execute command'.

Page 564 of 1104

  • Exchange Server 2010 Outlook Web Access - Exchange Control Panel web interface

    - by Aceth
    From what I can gather, the mailbox part of the web interface works fine. When any of the users go to Options (top right) and try to use some of the features, such as Organise Mail > Delivery Reports to find messages, it comes up with the message "An item with the same key has already been added". I've looked in the Event Viewer and I think this is the relevant error: Watson report about to be sent for process id: 7016, with parameters: E12IIS, c-RTL-AMD64, 14.00.0639.021, ECP, ECP.Powershell, https://x.x.x.x/ecp/PersonalSettings/Accounts.svc/GetList, UnexpectedCondition:ArgumentException, c09, 14.00.0639.021. ErrorReportingEnabled: False, together with: Request for URL 'https://x.x.x.x/ecp/PersonalSettings/Accounts.svc/GetList' failed with the following error: System.ArgumentException: An item with the same key has already been added. at System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.ExecuteSynchronous(HttpApplication context, Boolean flowContext) at Microsoft.Exchange.Management.ControlPanel.WebServiceHandler.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) I've tried Googling, but found nothing relevant.

    Read the article

  • PHP + IIS Application Pool Identity Windows\Temp permissions

    - by Matt Boothman
    I am currently running PHP (5.3) on IIS 7.5 on a Windows Server 2008 R2 Web Edition server and would like to know what problems or security vulnerabilities, if any, I may introduce into the system by assigning Read, Write, Modify and Execute permissions on %SystemRoot%\Temp to either the IUSR account or the IIS_IUSRS group. Should I be altering permissions on that folder at all (Windows warns me that I probably shouldn't when I attempt to change them)? Should I instead create a temp folder somewhere else and set permissions on it accordingly? The problem is that when I set Anonymous Authentication (which I'm guessing is the more secure option?) to use the application pool identity, PHP gets stuck in a loop when starting sessions because it cannot create session files in %SystemRoot%\Temp, due to the lack of permissions for the application pool user or the IIS_IUSRS group. ImageMagick (the PHP extension) is also denied access to %SystemRoot%\Temp when writing temporary files, so it throws exceptions. I have searched Google but have not found anything that touches on this subject specifically. Any help greatly appreciated.
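
    One common approach, shown here as a sketch under assumptions (the folder path and application pool name are examples, not from the question), is to give PHP its own temp directory rather than loosening permissions on %SystemRoot%\Temp:

        rem create a dedicated temp folder and grant the app-pool identity modify rights to it
        mkdir C:\PHPTemp
        icacls C:\PHPTemp /grant "IIS APPPOOL\MyAppPool:(OI)(CI)M"

        ; php.ini - point session and upload handling at the new folder
        session.save_path = "C:\PHPTemp"
        upload_tmp_dir = "C:\PHPTemp"

    This keeps %SystemRoot%\Temp locked down while giving only that one application pool write access to its own scratch space.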

    Read the article

  • MAMP not opening the correct file

    - by Praveen
    I am a first-time web app developer trying to develop a web application. I downloaded MAMP and have a simple index.html file in the htdocs folder. The Apache document root is also set to the htdocs folder, but for some reason it will not serve index.html when I type localhost:8888. I used to have a different folder in htdocs, and now the link is always redirected to localhost:8888/folder instead of localhost:8888/index.html. How can I get MAMP to open index.html and not that folder?
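
    If the document root is correct, the usual suspects are the DirectoryIndex setting and a cached redirect to the old folder. A hedged sketch (the file path is MAMP's usual location, but it may differ by version):

        # /Applications/MAMP/conf/apache/httpd.conf
        DocumentRoot "/Applications/MAMP/htdocs"
        DirectoryIndex index.html index.php

    After editing, restart the MAMP servers and retry the URL in a private browser window, since browsers cache redirects to the old folder aggressively.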

    Read the article

  • What would be the ideal SAS package to integrate with an enterprise-scale web application developed using Java

    - by Rajesh
    SAS-Java connectivity and integration for an enterprise-class web application - I am trying to narrow down the available approaches for connecting to SAS from Java. I am assuming the following; please correct my assumptions if you think they are wrong: SAS/ACCESS (ODBC, JDBC) / SAS/SHARE - query SAS data sets using a JDBC model; SAS/IntrNet - connectivity with Java for building small-scale applications; SAS Integration Technologies - connectivity with Java for building distributed Java applications. The scenario is that I need to build an enterprise-class web application using Java/J2EE and ensure this application talks to SAS to query SAS data sets, execute SAS programs and generate reports. I am looking for a cost-effective and robust solution that will work in a multi-user environment.

    Read the article

  • Is there a way to automatically disconnect a notebook from the electrical power supply?

    - by Diogo
    I know this looks weird and useless, but let me explain... I'm running the Windows Assessment and Deployment Kit (Windows ADK) to run some tests on Windows 8 Preview. One of its assessments is the "Battery Run Down Test", which measures battery consumption under some processor load. I'm trying to automate this test in some way, meaning I want to execute it without any human intervention (such as manually disconnecting the electrical power source so the notebook runs only on battery during the assessment). So, is there an ACPI API, a Windows API, or even a simple batch/VBScript/PowerShell command to do this? Has anyone already built something like this? PS: I'm asking because I couldn't find any answer, but maybe someone here has a tip...

    Read the article

  • Debugging logrotate postrotate script

    - by robert
    Following is my logrotate configuration:

        /mnt/je/logs/apache/jesites/web/*.log {
            missingok
            rotate 0
            size 5M
            copytruncate
            notifempty
            sharedscripts
            postrotate
                /home/bitnami/.conf/compress-and-upload.sh /mnt/je/logs/apache/jesites/web/ web
            endscript
        }

    And the compress-and-upload.sh script:

        #!/bin/sh
        # Perform Rotated Log File Compression
        tar -czPf $1/log.gz $1/*.1
        # Fetch the instance id from the instance
        EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
        if [ -z $EC2_INSTANCE_ID ]; then
            echo "Error: Couldn't fetch Instance ID .. Exiting .."
            exit;
        else
            /usr/local/bin/s3cmd put $1/log.gz s3://xxxx/logs/$(date +%Y)/$(date +%m)/$(date +%d)/$2/$EC2_INSTANCE_ID-$(date +%H:%M:%S)-$2.gz
        fi
        # Removing Rotated Compressed Log File
        rm -f $1/log.gz

    The files are rotated, but the shell script is not executed. I don't know how to debug the postrotate script. Is there a log file I can check to see whether there are any permission issues? If I execute the script directly from the command line, the file upload works. Thanks.
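
    If logrotate itself never reports a problem, a dry run and a forced verbose run usually show whether the postrotate block is reached (a sketch; adjust the path to wherever this configuration actually lives):

        # show what logrotate would do, without doing it
        logrotate -d /etc/logrotate.d/jesites

        # force a rotation and print each step
        logrotate -f -v /etc/logrotate.d/jesites

    Redirecting the script's own output, for example by appending > /tmp/postrotate.log 2>&1 to the line inside postrotate, also helps, since logrotate normally runs from cron where stdout and stderr are not visible.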

    Read the article

  • Let MySQL use full resources

    - by shekhar
    We have a Windows Server 2008 R2 machine with 24 GB of RAM and a 2.93 GHz CPU with 8 cores and 8 logical processors. We have a large MySQL database, and some queries take a long time to execute. While running those queries, I have observed that MySQL is NOT utilizing the server's full resources. I am thinking that if I let MySQL use the maximum resources on my server, it could reduce execution time. How can I let my MySQL server utilize maximum resources in a healthy way?
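
    Most of that tuning happens in my.ini. The options below are real MySQL settings, but the values are only an illustrative sketch for a 24 GB, InnoDB-heavy box and need to be sized against the actual workload and MySQL version:

        [mysqld]
        # let InnoDB cache most of the working set in RAM
        innodb_buffer_pool_size = 16G
        # split the pool so several cores can use it concurrently (MySQL 5.5+)
        innodb_buffer_pool_instances = 8
        # larger redo logs reduce checkpoint pressure on write-heavy loads
        innodb_log_file_size = 512M
        # allow bigger in-memory temporary tables before spilling to disk
        tmp_table_size = 256M
        max_heap_table_size = 256M

    Note that a single query still executes on a single thread in MySQL, so extra resources mostly help concurrency and caching rather than one individual slow statement.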

    Read the article

  • OWA Drafts folder doesn't sync up with Outlook 2007 (when using Citrix)

    - by George
    I have a user logging in to a Citrix server (on Windows 2003) to use Outlook 2007. In OWA he sees all his drafts in the Drafts folder and can easily access them, but in Citrix he can see the folder, not the messages. I had him check Normal and Favorite Folders under View > Navigation Pane and run outlook /cleanviews, but neither helped. I should also clarify that we host Exchange locally and it syncs with Outlook 2007 in Citrix. Remote users either use OWA or log in to Citrix and use Outlook 2007. In his case ALL folders appear in Outlook 2007, but the Drafts folder doesn't show any saved messages, even though in OWA the messages are there and he can edit, delete and send them. Please help! Thanks!

    Read the article

  • Shell script for replacing string in all PHP-files, for each user

    - by Mads Skjern
    Each user has some PHP files that use a shared database, commondb. I want to iterate over all users (in users.csv), find all PHP files recursively in each user's home folder (e.g. /home/joe), and replace each occurrence of "commondb" with that user's own database name, e.g. "joedb" for "joe". I have tried the following:

        #!/bin/bash
        # Execute like this:
        # bash localize.bash users.csv
        OLDIFS=$IFS
        IFS=","
        while read name dummy
        do
            echo $name
            find /home/${name} -name '*.php' -exec sed -i '' 's/commondb/${name}db/g' "{}" \;
        done < $1
        IFS=$OLDIFS

    with users.csv containing:

        joe, Joe J
        george, George G

    It does not fail, but the files are unchanged. I am quite weak in bash and can't figure out how to debug it. Can my script be fixed to work?
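
    Two details in the find line are the likely problem (a sketch of one possible fix, assuming GNU sed on Linux): the single quotes stop ${name} from being expanded, so sed literally looks for ${name}db, and -i '' is BSD sed syntax rather than GNU. Something like this inside the loop should behave as intended:

        find /home/${name} -name '*.php' -exec sed -i "s/commondb/${name}db/g" {} \;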

    Read the article

  • nginx php-fpm trouble

    - by Patrick
    We're using php-fpm and have trouble getting scripts to work if we change the 'root' value in nginx.conf: location ~ \.php$ { root /usr/share/nginx/html; ... If we change that root to point to another directory, even something like /usr/share/nginx/html/crap, it doesn't work. The directory exists, of course. It's as if nginx can read the file in that directory, but not execute it. I've checked all the file permissions. Does anyone have any idea?
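
    One frequent cause is that the SCRIPT_FILENAME passed to PHP-FPM is built from a hard-coded path rather than from the root that was changed. A hedged sketch (the socket/port and paths are examples): set root at the server level and derive the script path from $document_root:

        server {
            root /usr/share/nginx/html/crap;
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php-fpm.sock;  # or 127.0.0.1:9000
            }
        }

    That way, changing a single root directive moves both the static files and the path PHP-FPM executes.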

    Read the article

  • Can the expect utility handle a case where the process it spawns also spawns a sub process?

    - by davidparks21
    I'm trying to use expect to handle rsync over an SSH shell, but it gets stuck. If I run my rsync command directly, it works (simplified here): it prompts me for my password and copies files to the server: rsync -e ssh -<other_params> If I then wrap that in expect: expect -d -c "spawn rsync -e ssh -<other_params>" -c "expect password:" -c "send mypass\r" it does not execute properly; the program exits and no files are copied. Even debug mode isn't giving many clues. My best guess is that rsync spawns the ssh process, and the ssh process is what needs to be interacted with, but send is picking up the rsync process id and sending the input there. Any thoughts?
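
    More likely, expect simply exits as soon as it has sent the password, taking the spawned rsync down with it. Adding a final expect that waits for the spawned process to finish usually fixes this (a sketch using the same placeholders as above):

        expect -c "spawn rsync -e ssh -<other_params>" \
               -c "expect password:" \
               -c "send mypass\r" \
               -c "expect eof"

    The trailing "expect eof" keeps expect alive until rsync closes its output, i.e. until the transfer is done.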

    Read the article

  • Seeking a solution to automatically copy files from the CD-ROM disc to the USB drive once it's connected

    - by Ray Nathan
    I plan to distribute a free CD that automatically copies files to a connected USB device. This process will run on the computers of the users who obtain the CD. The CD will contain an autorun.inf file that instructs the computer to copy a set of files located on the CD to a specific directory on the connected USB device. The USB drive letter is not the same on all systems, so Windows XP needs to determine the drive letter of the USB device automatically before the copy operation begins. What would be the best way to create a short batch file or script that I can place on the CD to execute this process? Also, please note that it is NOT feasible or recommended to include a batch file on the USB devices to sync this operation, for the reasons explained at the beginning of this paragraph. :) Thank you all.
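
    An untested sketch of the batch-file approach, assuming Windows XP Professional or later so that WMIC is available; the folder names are examples only:

        @echo off
        rem Find the first removable (drivetype=2) disk and copy the CD's payload folder to it.
        for /f "skip=1 delims=:" %%d in ('wmic logicaldisk where "drivetype=2" get deviceid') do (
            if exist %%d:\ (
                xcopy /e /i /y "%~dp0payload" "%%d:\payload\"
                goto done
            )
        )
        :done

    The autorun.inf on the CD would then point its open= line at this batch file, with the usual caveat that AutoRun may be disabled or may prompt the user depending on the system's settings.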

    Read the article

  • Is LSB's init script function "start_daemon" really used for real daemons or should I stick to start-stop-daemon?

    - by Fred
    In the context of init scripts, according to the LSB specification, "each conforming init script shall execute the commands in the file /lib/lsb/init-functions", which defines a couple of functions to be used when working with daemons. One of those functions is start_daemon, which "runs the specified program as a daemon" while checking whether the daemon is already running. I'm in the process of daemonizing a service application of mine, and I'm looking at how other daemons are run so that I can fit in. While looking at how it's done elsewhere, I noticed that not a single daemon on my Ubuntu 10.04 machine uses start_daemon; they all call start-stop-daemon directly. The same goes for my Fedora 14 machine. Should I try to play nice and be the first one to use start_daemon, or is there really no point, and start-stop-daemon is the way to go since everybody is already using it? Why are there no daemons using LSB's functions?
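
    For comparison, a minimal LSB-style start action looks roughly like this (a sketch; the daemon path and message are placeholders, and log_daemon_msg/log_end_msg are the Debian/Ubuntu helpers shipped in the same file):

        #!/bin/sh
        . /lib/lsb/init-functions

        case "$1" in
            start)
                log_daemon_msg "Starting mydaemon"
                start_daemon /usr/sbin/mydaemon
                log_end_msg $?
                ;;
        esac

    Functionally it does little more than a plain start-stop-daemon --start --exec call, which may be part of why distribution scripts keep calling start-stop-daemon directly.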

    Read the article

  • Weblogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits strange behavior. I am running a constant (not too high) load on my application (20 concurrent users performing a light activity). The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (my application creates a lot of short-lived objects, but they are garbage-collected, so overall memory consumption stays under 500 MB). Thread statistics seem healthy as well. And yet, after I leave my test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. This test hasn't been running for long (all the new threads that you don't see in the first screenshot were created while I was writing this question), and I've seen far more threads being created. Any idea why these threads are being created?
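
    WebLogic's self-tuning pool normally grows the thread count when requests start blocking, so the first thing to check is what those extra ExecuteThreads are waiting on. A thread dump makes that visible (a sketch; substitute the actual WebLogic server PID, and use jrcmd instead of jstack if the server runs on JRockit):

        jstack -l <weblogic_pid> > thread-dump.txt

    Taking two or three dumps a minute apart and comparing the stuck threads usually points at the blocking resource (a slow JDBC call, an external service, a lock, and so on).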

    Read the article

  • Can I automatically add a new host to known_hosts?

    - by gareth_bowles
    Here's my situation; I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via SSH. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first SSH command run against a new virtual instance always comes up with an interactive prompt: The authenticity of host '[hostname] ([IP address])' can't be established. RSA key fingerprint is [key fingerprint]. Are you sure you want to continue connecting (yes/no)? Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image ? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
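
    Two common ways to pre-trust the new hosts from the client side (both are standard OpenSSH tools/options, shown as a sketch):

        # collect and hash the new host's key before the first connection
        ssh-keyscan -H new-vm-hostname >> ~/.ssh/known_hosts

        # or skip strict checking just for this automated connection
        ssh -o StrictHostKeyChecking=no user@new-vm-hostname 'uptime'

    ssh-keyscan keeps some protection against later key changes, while disabling StrictHostKeyChecking is simpler but accepts whatever key the host presents. Baking a fixed host key pair into the VM image and appending its public key to known_hosts in advance also works, since the fingerprint is then known before the instance boots.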

    Read the article

  • Connect through SSH and type in password automatically, without using a public key

    - by binary255
    A server allows SSH connections, but not using public key authentication. It's not within my power to change this at the moment (due to technical difficulties, not organizational) but I will get on it as soon as possible! What I need now is to execute commands on the server using plain old account+password authentication from a script. That is, I need to do it in a non-interactive way. Is it possible? And how do I do it? The client which will be executing the script runs Ubuntu Server 8.04. The server runs Cygwin and OpenSSH.
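
    If public keys really are off the table for now, sshpass (available in most distributions' repositories) can feed the password to ssh non-interactively. A sketch, with the obvious caveat that the password ends up on disk or in the process list:

        # password given on the command line (visible in ps output)
        sshpass -p 'secret' ssh user@server 'ls /tmp'

        # or read from a file with restrictive permissions
        sshpass -f ~/.ssh/server.pass ssh user@server 'ls /tmp'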

    Read the article

  • Secure Apache PHP vhost configuration

    - by jsimmons
    I'm looking to secure some websites running under Apache using suEXEC. At the moment PHP is executed with the user/group of the file being executed. This does not seem secure enough to me: it stops vhosts interfering with each other, but it does not stop malicious code from writing anywhere within the vhost being used. I was thinking that one possibility would be to run scripts as nobody/vhost-group. That way the vhost user could still have full access to the vhost directories, but PHP being executed would only be able to write to files with g+w and to execute files with g+x. This, I think, should stop arbitrary writing in the web directory from compromised PHP. I'm just wondering whether this is crazy, ridiculous or stupid? Of course this would be done on top of existing security measures.
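
    In terms of filesystem layout, the idea boils down to something like the following sketch (the user, group and paths are examples): the vhost owner keeps full control, the group PHP runs under only gets read/execute, and a single upload directory gets group write:

        chown -R vhost1:vhost1grp /var/www/vhost1
        chmod -R u=rwX,g=rX,o= /var/www/vhost1         # PHP (nobody:vhost1grp) can read but not write
        chmod g+w /var/www/vhost1/htdocs/uploads       # the one place compromised PHP could write

    The trade-off is operational: anything the application legitimately needs to write (caches, sessions, generated files) has to be enumerated and given group write explicitly.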

    Read the article

  • Git can no longer open emacs as its editor

    - by mwilliams
    I'm running Git version 1.7.3.2 that I built from source, zsh is my shell, and emacs is my editor. Recently I started seeing the following: /usr/local/Cellar/git/1.7.3.2/libexec/git-core/git-sh-setup: line 106: emacs: command not found Could not execute editor My zshrc looks like the following so I can use the Cocoa build and the console binary provided with it. EMACS_HOME="/Applications/Emacs.app/Contents/MacOS" function e() { PATH=$EMACS_HOME/bin:$PATH $EMACS_HOME/Emacs -nw $@ } function ec() { PATH=$EMACS_HOME/bin:$PATH emacsclient -t $@ } function es() { e --daemon=$1 && ec -s $1 } function el() { ps ax|grep Emacs } function ek() { $EMACS_HOME/bin/emacsclient -e '(kill-emacs)' -s $1 } function ecompile() { e -eval "(setq load-path (cons (expand-file-name \".\") load-path))" \ -batch -f batch-byte-compile $@ } alias emacs=e alias emacsclient=ec And I also have export EDITOR="emacs" and have tried adding export GIT_EDITOR="emacs" (and swapping that out with "e") But whatever I try I can't get git to open emacs whenever I need to do a commit or an interactive rebase, etc etc...
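
    Since e and the emacs alias only exist inside an interactive zsh, git's non-interactive shell cannot see them; pointing git (or GIT_EDITOR) at a real executable path avoids the problem. A sketch reusing the path from the zshrc above:

        git config --global core.editor "/Applications/Emacs.app/Contents/MacOS/Emacs -nw"
        # or, equivalently, in the shell startup file:
        export GIT_EDITOR="/Applications/Emacs.app/Contents/MacOS/Emacs -nw"

    A tiny wrapper script on the PATH that execs that binary works just as well and leaves the zsh functions unchanged.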

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway This is what is currently installed on my server: Here's the configurations I've created/updated so far, can some one take a look at the following and see where the error might be? I've already checked my logs, there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in /var/log/php-fpm/error.log file. sidenote: both the nginx and php-fpm has been configured to run under a local account called www-data and the following folders exits on the server nginx.conf global nginx configuration user www-data; worker_processes 6; worker_rlimit_nofile 100000; error_log /var/log/nginx/error.log crit; pid /var/run/nginx.pid; events { worker_connections 2048; use epoll; multi_accept on; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # cache informations about FDs, frequently accessed files can boost performance open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # to boost IO on HDD we can disable access logs access_log off; # copies data between one FD and other from within the kernel # faster then read() + write() sendfile on; # send headers in one peace, its better then sending them one by one tcp_nopush on; # don't buffer data sent, good for small data bursts in real time tcp_nodelay on; # server will close connection after this time keepalive_timeout 60; # number of requests client can make over keep-alive -- for testing keepalive_requests 100000; # allow the server to close connection on non responding client, this will free up memory reset_timedout_connection on; # request timed out -- default 60 client_body_timeout 60; # if client stop responding, free up memory -- default 60 send_timeout 60; # reduce the data that needs to be sent over network gzip on; gzip_min_length 10240; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml; gzip_disable "MSIE [1-6]\."; # Load vHosts include /etc/nginx/conf.d/*.conf; } conf.d/www.domain.com.conf my vhost entry ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. 
{ return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } /etc/php-fpm.d/www-data.conf my php-fpm pool config ## Nginx php-fpm Upstream upstream wwwdomaincom { server unix:/var/run/php-fcgi-www-data.sock; } ## Global Config client_max_body_size 10M; server_names_hash_bucket_size 64; ## Web Server Config server { ## Server Info listen 80; server_name domain.com *.domain.com; root /home/www-data/public_html; index index.html index.php; ## Error log error_log /home/www-data/logs/nginx-errors.log; ## DocumentRoot setup location / { try_files $uri $uri/ @handler; expires 30d; } ## These locations would be hidden by .htaccess normally #location /app/ { deny all; } ## Disable .htaccess and other hidden files location /. { return 404; } ## Magento uses a common front handler location @handler { rewrite / /index.php; } ## Forward paths like /js/index.php/x.js to relevant handler location ~ .php/ { rewrite ^(.*.php)/ $1 last; } ## Execute PHP scripts location ~ \.php$ { try_files $uri =404; expires off; fastcgi_read_timeout 900; fastcgi_pass wwwdomaincom; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } ## GZip Compression gzip on; gzip_comp_level 8; gzip_min_length 1000; gzip_proxied any; gzip_types text/plain application/xml text/css text/js application/x-javascript; } I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
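
    Note that, as pasted, the section labelled /etc/php-fpm.d/www-data.conf repeats the nginx vhost rather than an FPM pool definition. For reference, a typical pool matching the unix:/var/run/php-fcgi-www-data.sock upstream would look roughly like this (a sketch with assumed values, not the asker's actual file):

        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock
        ; make sure the nginx worker (www-data) may use the socket
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 4

    A mismatch between the socket path or its permissions and what nginx expects is a common cause of 502s in this kind of setup.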

    Read the article

  • Notebook with NVIDIA Optimus not switching video card in games

    - by user140739
    I have a Samsung RC720 notebook with Intel integrated graphics and an NVIDIA GeForce GT 520M. As you can see, it has two video adapters, and Optimus is supposed to switch between them. But when I choose the dedicated GPU in the NVIDIA Control Panel and try to run, for example, GTA IV, it uses the integrated graphics and I get very poor performance. I have already installed the latest NVIDIA and notebook drivers, selected high performance in the NVIDIA Control Panel, tried launching with the "Run with graphics processor..." context-menu option, and so on. Thanks for the help.

    Read the article

  • sudo prompts for password over ssh

    - by Joe Watkins
    I have sudo set up for a shell script as follows on "hostname" (sudo -l output): (suser) NOPASSWD: /path/script* sudoers content is: myuser ALL=(suser) NOPASSWD: /path/script* this works fine, so I can run the following, logged in locally on hostname, without need for password: sudo -u suser /path/script however, when I use ssh (with keys set up, so no password require) to login and run, as follows: ssh hostname sudo -u suser /path/script I get prompted for a password, and when the password is entered I get: Sorry, user myuser is not allowed to execute '/path/script' as suser on hostname. Why? NB the following does not prompt for password at any point: $ ssh hostname $ sudo -u suser /path/script
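
    One thing worth ruling out is how the command reaches sudo over SSH: without quoting, the local shell splits and expands the arguments before they are sent, and sudoers matching is sensitive to the exact command line. A sketch of a more explicit invocation (also allocating a TTY in case requiretty is set on the host):

        ssh -t hostname 'sudo -u suser /path/script'

    If it still fails, running sudo -l over the same SSH session shows exactly which rules apply in that non-interactive context.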

    Read the article

  • CGI error from PHP when running exec() on IIS

    - by Patrick
    Windows Server 2003 x64, PHP 5.2, IIS 6.0. The program Ink2Png.exe has Everyone Read & Execute permissions, as does its dependency (microsoft.ink.dll). PHP safe mode is off. exec() is passed [the full exe path], a space, then [the full path to another file]. That other file also has full read permissions, and the output directory has full write permissions. As soon as exec() is hit, the connection dies; the browser does not even receive a full set of HTTP headers, and it reports a CGI error. Examining the output, it appears the program was not even run. Any ideas? How can I figure out what exactly is happening and get it running again? EDIT: Also, it is a .NET application, if that is significant in any way.
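
    To see whether the executable starts at all, it helps to capture its exit code and anything it prints to stderr from within PHP (a sketch; the variable names are placeholders):

        // redirect stderr into the captured output and inspect the exit code
        exec($exePath . ' ' . escapeshellarg($inputFile) . ' 2>&1', $output, $exitCode);
        var_dump($exitCode, $output);

    An exit code such as -1073741515 (0xC0000135, "DLL not found") would point at a missing .NET/GDI+ dependency in the IIS worker's environment rather than at PHP itself.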

    Read the article

  • Rar.exe CLI - add files from directory without directory structure?

    - by Vercas
    I need to grab every file in a folder and shove it into a RAR archive. This is my current method: "C:\Program Files\WinRAR\Rar.exe" a -r -md2m -s -m5 -ma4 -t ..\Releases\vCommands.rar bin\ ... where bin is my folder. I tried this too, even though it is for another program, and the results are the same. To be clear, here's a picture: in the top left, in the .rar file, there is the bin directory containing all the files; in the bottom right, in the .7z file, all those files are in the archive root. What I need is to put all those files in the .rar archive root instead of in a folder, without having to run my batch file from inside that bin folder.
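
    Rar.exe has a switch for exactly this: -ep1 strips the base folder from the stored names (and -ep would drop all paths entirely). A sketch keeping the original options:

        "C:\Program Files\WinRAR\Rar.exe" a -r -ep1 -md2m -s -m5 -ma4 -t ..\Releases\vCommands.rar bin\*

    With -ep1, files from bin\ land in the archive root while any subfolders under bin keep their relative structure.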

    Read the article

  • SSH authentication working unless run from a script?

    - by awright418
    I have set up my server to allow key/pair authentication by following instructions similar to what is found in this post. As far as I can tell that is working correctly. If I do the following, for example, it works correctly: ssh [email protected] It will NOT prompt me for a password. This is what I want to happen. However if I write a small bash script like this: #!/bin/bash -x ssh [email protected] and execute with: sudo ./mytestscript.sh ...it will prompt me with: [email protected]'s password: What am I doing wrong? I need to be able to login from within my script without being prompted for a password!
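
    Running the script through sudo is the likely culprit: the ssh inside it then runs as root, so it looks for keys in /root/.ssh instead of your user's ~/.ssh and falls back to password authentication. Either drop the sudo, or point ssh at the right key explicitly (a sketch; the user name and key path are placeholders):

        #!/bin/bash -x
        ssh -i /home/youruser/.ssh/id_rsa user@remotehost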

    Read the article

  • In 2011, what are the reasons to stick with plain text mails?

    - by Aaron Digulla
    People entering college today have never known a world without the Internet. HTML was invented around 1991, two decades ago. But plain-text mail is still common despite all its problems: encoding issues; wrapped code segments; no links; no way to use the "a picture says more than a thousand words" lore. Most of the security risks are now handled by the underlying browser engine and sensible defaults such as: don't allow JavaScript in mails; don't execute attachments; don't download external resources (like web bugs). On top of that, only a very few people still read mail exclusively in command-line tools like Mutt - and knowing Mutt myself, I'm pretty sure you can configure it to display HTML mail with, say, w3m. Moreover, most HTML-capable mail clients send two versions of the mail (plain text plus an HTML part). I'm not sure there is anyone left on the planet who still uses a 56 kbit/s modem to access their mail account. So what reasons are left to stick with plain-text mail in 2011?

    Read the article
