Search Results

Search found 16602 results on 665 pages for 'directory'.

Page 581/665

  • disable mystery programs running at startup

    - by pstanton
    Hi and sorry for the ambiguous title... I have a few programs that should run at startup and are 'properly' configured to do so via shortcuts in the startup directory: C:\Users\[me]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup. However, I have (at least) 4 programs which are also starting up, and I can't find where they are configured or how to disable them. I have looked for them in the above folder, as well as in the 'startup' section of 'msconfig'. The programs include: Skype (for which I have disabled 'start when windows starts' in its options), Thunderbird (for which I cannot find any option to run at startup), Task Manager (as above) and some anonymous call to javaw (can't find any more details, but it fails anyway). The other strange thing is that it seems like these (at least Skype and Thunderbird) are running 'as administrator'... I have deduced this because I am unable to use the file drag-and-drop feature in both (which is a known problem when running 'as administrator'). If someone could guide me to where these extra programs are configured to run at startup I would be very grateful! PS: my user account has the administrator role. EDIT: preferably without another 3rd-party tool...

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:

    - Write a shell script that would be run via sudo and tail the last 512 KB of the log into a separate file that can be read by the application - that's inefficient, because of forking a new process and having to read the data twice.
    - Add www-data to the adm group (which can read logs) - that's insecure.
    - Start a PHP process via cron every minute to read the logs - that's not very good, because it doesn't allow real-time monitoring. Also, this script will be started even when I don't read logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    - Create a hardlink for all log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and they'll change inode number.
    - Start a separate nginx/Apache server under a privileged user that may read logs.

    Maybe anyone has a better solution?
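    A minimal sketch of the sudo-plus-tail idea from the list above, with hypothetical paths, helper name and sudoers entry (all assumptions, not part of the original question):

        #!/bin/sh
        # loggrab.sh - hypothetical helper run via sudo by the PHP app.
        # Assumed sudoers entry (e.g. in /etc/sudoers.d/loggrab):
        #   www-data ALL=(root) NOPASSWD: /usr/local/bin/loggrab.sh
        LOG="/var/log/syslog"
        OUT="/var/cache/logview/syslog.tail"
        mkdir -p "$(dirname "$OUT")"
        tail -c 524288 "$LOG" > "$OUT"    # copy only the last 512 KB
        chown www-data:www-data "$OUT"
        chmod 0640 "$OUT"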

    Read the article

  • environment variables generated by at command

    - by Jordan Arseno
    I'm inspecting /var/spool/cron/atjobs/a001cf01570e44 with cat, after running the at command from PHP using exec(). It looks like at has prepended the script with lots of Apache environment variables:

        #!/bin/sh
        # atrun uid=33 gid=33
        # mail www-data 0
        umask 22
        APACHE_RUN_DIR=/var/run/apache2; export APACHE_RUN_DIR
        APACHE_PID_FILE=/var/run/apache2.pid; export APACHE_PID_FILE
        PATH=/usr/local/bin:/usr/bin:/bin; export PATH
        APACHE_LOCK_DIR=/var/lock/apache2; export APACHE_LOCK_DIR
        LANG=C; export LANG
        APACHE_RUN_USER=www-data; export APACHE_RUN_USER
        APACHE_RUN_GROUP=www-data; export APACHE_RUN_GROUP
        APACHE_LOG_DIR=/var/log/apache2; export APACHE_LOG_DIR
        PWD=/home/jordanarseno/webroot/public_html/myapp; export PWD
        cd /home/jordanarseno/webroot/public\_html/myapp || {
            echo 'Execution directory inaccessible' >&2
            exit 1
        }
        curl -k http://localhost/myapp/crons/this_action/3

    The last line is the only real command I sent along with at via stdin. What is the purpose of these variables? Where is this procedure stored?
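    For illustration, a small sketch (paths and the job filename are assumptions) showing that at simply snapshots the submitting process's environment, umask and working directory into the job file so the job can be replayed later under the same conditions:

        # submit a trivial job from a shell, then look at what at stored
        echo 'env > /tmp/at-env.out' | at now + 1 minute
        sudo ls /var/spool/cron/atjobs/
        sudo cat /var/spool/cron/atjobs/a00001016f64c2   # hypothetical name; yours will differ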

    Read the article

  • Nginx Removes the index.php from URL

    - by codeHead
    I have a CodeIgniter PHP application on nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need to use my URLs with index.php -- please help.

    Read the article

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = (
          "mod_setenv",
          "mod_access",
          "mod_accesslog",
          "mod_alias",
          "mod_compress",
          "mod_redirect",
          "mod_rewrite",
          "mod_fastcgi",
          "mod_cgi",
        )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer if I can fix it in lighttpd.

    Read the article

  • How to copy a floppy boot disk?

    - by Sammy
    I have a floppy boot disk and I would like to copy it to preserve it as a backup. If I have two floppy drives, A and B, how can I copy the disk?

    Assuming one has two floppy drives: can I simply insert the floppy disk in one of the drives and an empty floppy disk in the other and issue a simple command like this one?

        A:\>copy . b:

    Will this only copy the contents of the current directory and none of the files in subdirectories? Do I have to explicitly specify an option to copy everything? Also, what about the boot information? That won't get copied, right?

    If one has only one floppy drive: how do you copy a floppy disk if you only have one floppy drive? Do you in fact have to copy its contents to the local hard drive C and then copy that to an empty floppy disk using the same floppy drive?

        A:\>copy . c:\floppydisk
        A:\>
        A:\>c:
        C:\>
        C:\>copy floppydisk a:
        C:\>

    I'm guessing I will need some type of disk imaging tool to really copy everything on a bootable floppy disk. Something like the dd command on Linux, perhaps? Am I right?
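    Since the question already points toward dd, here is a hedged sketch of imaging the disk on Linux; the device name /dev/fd0 and block size are assumptions, so check the actual device first (e.g. with lsblk or dmesg):

        # read the whole diskette, boot sector included, into an image file
        dd if=/dev/fd0 of=bootdisk.img bs=512 conv=noerror,sync
        # swap in a blank diskette, then write the image back
        dd if=bootdisk.img of=/dev/fd0 bs=512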

    Read the article

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move it if it does). I'm in the process of creating requirements for an internal PHP web server to submit to our architecture team, and would like some insight into whether to use a Windows or *nix platform and what applications would be required. The server will host a small PHP application that will be connecting to SQL Server. The application will need to send mail. We would also like to incorporate an FTP server to allow files to be dropped in. From what I've read regarding a Windows platform using IIS, it seems as though IIS would only be advantageous if using a .NET or ASP application. Does IIS have mail functionality? Or how is mail traditionally configured (esp. on *nix)? Also, does IIS have directory configuration functionality like Apache does with .htaccess?

    For a Windows-based solution:
    - IIS (comes with FTP)
    - Apache (has the mod_ftp module)

    For a *nix-based solution:
    - Apache

    Read the article

  • Folder with files shows as empty

    - by ProfKaos
    Last night I created a new project in Visual Studio 2010, in the folder c:\development\hobby\devices. When I navigate to that folder in a new instance of Windows Explorer, I get the 'This folder is empty.' message in the files pane. If I then, from within Visual Studio, issue the "Open containing folder" command, Explorer opens the folder c:\development\hobby\devices\LoopBack, where LoopBack is a project under the main folder. If I paste that path into another new instance of Explorer, Explorer correctly opens c:\development\hobby\devices. How is this possible?

    SOLVED: This is what comes of typing paths instead of selecting them. Some elementary detective work yielded the following interesting results:

        C:\>dir dev*.
         Volume in drive C is OS
         Volume Serial Number is E4B4-0563

         Directory of C:\

        2010/11/28  09:20 PM    <DIR>          Development
        2010/12/01  08:57 PM    <DIR>          Develpment
        2010/11/02  06:31 PM    <DIR>          DevTools
                       0 File(s)              0 bytes
                       3 Dir(s)  63 965 368 320 bytes free

    My disappearing project was in the badly spelt one.

    Read the article

  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

        #!/bin/sh
        java -jar FeedIndexer.jar

    It just runs FeedIndexer.jar, which is in the same directory as the .sh. I would like to run it using crontab, so I did this:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        01 01   * * *   root    run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        #

    But I don't know how to run it. Have I made any mistake? Thank you!
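    One hedged possibility, keeping the poster's paths: run-parts expects a directory (and by default skips filenames containing a dot), so the last line above would never execute the script. Calling it directly, and changing into its directory first so the relative FeedIndexer.jar path resolves, might look like this:

        chmod +x /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        # replacement /etc/crontab entry (single line):
        # 01 01 * * * root cd /home/slosada/workspace/FeedIndexer/target && ./FeedIndexer.sh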

    Read the article

  • PHP 5.3.5 Windows installer missing php_ldap.dll

    - by nmjk
    I'm working with Windows Server 2008, Apache 2.2. I'm using php-5.3.5-Win32-VC6-x86.msi as the installer, using the threadsafe version. I've gone through the install process four or five times just to make sure that I'm not missing anything ridiculous, but I don't think I am. The problem is that the php_ldap.dll extension simply doesn't seem to exist. It's not present in the installer interface (where the user is asked to choose which extensions to install), and it definitely doesn't appear in the ext/ directory after install. I found a lot of mentions of this issue for 5.3.3, including links to download the extension individually. Those links no longer exist, of course, and besides: they were for 5.3.3. I'd really rather use an extension that belongs with PHP 5.3.5. Anyone else encounter this problem? Any ideas as to what's going wrong? Anyone seen acknowledgement by the PHP folks that the file is indeed missing, and that it's an oversight? It's quite a frustration because the server I'm building has no purpose if I don't have PHP LDAP support. Cheers all, and thanks in advance for your assistance.

    Read the article

  • Network external hard drive reports not enough free space

    - by mzhang
    I'm running an Ubuntu (10.04) Samba server on a local network. The server has a 50GB internal drive with only 24MB free. I've shared a folder /samba from that drive. I also have a 1TB NTFS external hard drive mounted to the system. There is a symbolic link from the Samba shared folder on the nearly-full internal drive to the plenty-of-free-space external drive (i.e. /samba/external_hd). I wish to copy a 3.25GB folder into the (remote) external hard drive, via a Mac (10.6.8). The Mac reports (correctly) that there's 24MB free on the server, and so will not let me copy the folder on the Mac over to the external drive (dragging the folder into /samba/external_hd), failing with a "server does not have enough free space" error. However, it seems that I can still scp the folder into the external drive, via the symbolic link. Is there a reason why this is happening (and are there any ways to prevent it)? Is this even good practice (to mount a drive and link into the directory)?

    Read the article

  • Sharepoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment, 2 web servers and a DB server, load balanced between the two web servers. I deployed a web part recently, which worked fine for a while, but failed on web server 2 after a day. When it fails, it gives the error message: 'A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe'. Once it has failed, it will stay that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail to recreate the issue and have been unable to do it: I placed it under heavy HTTP traffic and it handled it fine; put it back in the pool and it failed again after about 7 hours. Also, if I remove the .dll for the web part from the affected web server, the web part doesn't stop working. Is this normal behavior? I checked the bin directory for the site and the global assembly cache, and there is no other copy of the .dll anywhere else on the server. Also, when checking the web part gallery, if the web part has failed it will still appear in the gallery, but when trying to add a new web part, the .dll won't be listed. I have no idea how to continue troubleshooting from here or even fix it, any ideas?

    Read the article

  • SVN very slow over HTTP (seems auth related)

    - by Sydius
    I'm using SVN version 1.6.6 (r40053) via the command line in Ubuntu 10.04 and connecting to a remote repository over HTTP that is in the local network. For a while it worked fine, but it has recently become very slow for any operation that requires communication with the repository; however, it does eventually work after several minutes (~3m for svn up). Looking at Wireshark, it appears to be taking a full minute between the HTTP auth-denied response and the subsequent request containing credentials. The issue is local to my machine: other coworkers running Ubuntu are not having the issue, and I've tried using my credentials from another machine and it was very fast. I tried deleting the .subversion folder in my home directory and checking everything out fresh, but it didn't help. Update: I think it's auth related. When I check out SVN repositories off of the Internet over HTTP (from Google Code, for example), everything is very fast until I do something that requires a password. Before prompting for the password for the first time, it stalls for at least a minute. Update 2: I set the neon-debug-mask in the SVN settings (in /etc/subversion/servers under [Global]) to 138 and it seems to spend a lot of time on 'auth: Trying Basic challenge...'

    Read the article

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS server that I have recently set up and am migrating to from Mediatemple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to allow multiple people access to different domains on a server. I have most everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. I have domains successfully located in /var/www/vhosts/domainname.com but have to modify files as root in order to add/change files for the domains. I would like to set up access with the following criteria:

    - Each domain can have a user assigned to it (and allow the same user to manage multiple domains - could even create symlinks in their home folder to their domains)
    - Certain users will have shell access and may be chrooted to the domain directory they control
    - FTP needs to be set up and able to correctly access the domains so that content editors for each domain can upload/download without permissions issues

    I am relatively new to Linux sysadmin and have searched for a good guide to help solve these issues but haven't been able to find one yet. Thanks in advance for your help.
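    A minimal sketch of one common pattern for the first criterion, with hypothetical user and domain names; the exact group and mode choices depend on how Apache and the FTP daemon are actually configured here, so treat this as an assumption-laden starting point:

        DOMAIN=domainname.com
        OWNER=domainowner                        # hypothetical per-domain account
        adduser --disabled-password --gecos "" "$OWNER"
        chown -R "$OWNER":www-data /var/www/vhosts/"$DOMAIN"
        find /var/www/vhosts/"$DOMAIN" -type d -exec chmod 2750 {} +   # setgid dirs keep the www-data group
        find /var/www/vhosts/"$DOMAIN" -type f -exec chmod 0640 {} +
        ln -s /var/www/vhosts/"$DOMAIN" /home/"$OWNER"/"$DOMAIN"
        # for chrooted SFTP access, a Match block in /etc/ssh/sshd_config can set
        # ChrootDirectory to the vhost path (its ownership rules apply there)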

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public /mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.

    Read the article

  • apache front-end rewriting URL to different https ports?

    - by khedron
    Hi all, one of my users is having some trouble with forwarding to an internal web app from a public address. Everything worked fine for him when the situation was like this:

    - front page: http://www.myexample.com/
    - public ref to internal app: http://www.example.com/app-8903/app.html
    - secretly goes to: http://secret.example.com:8903/app-8903/app.html

    This is to say, my user is providing the very last URL, with the port information duplicated in the URL base, and they were using that to give a public face that hid both the port and the internal machine name. You could still read the port in the URL base if you looked, but the obvious reference and machine name were hidden. Doing it this way, he could have several different instances of the application running on secret.example.com with different ports, and on the front end it just looked like it was changing the URL directory/base. Now the user wants to do the same thing over https:, and the people helping him with Apache config say it can't be done. Is that so? Without being there to tinker with the configuration myself, I'm not sure what his IT people have tried, but reading through the apache2 SSL FAQ and other docs, it seems like it should be possible to rewrite URLs to different ports and still use https:.

    Read the article

  • Apache form authentication issues

    - by rfcoder89
    I am trying to authenticate users through Apache using the form authentication method to restrict HTTPS requests to a certain folder. However, regardless of whether the correct login details are provided, it keeps reloading the same page, except the URL has the form values embedded in it, instead of redirecting to the appropriate page. I need to use the form authentication type instead of basic so I can write my own HTML for the user to log in. I am using Apache 2.4.9 and this is our current configuration.

    Apache config file:

        <Location C:/wamp/www/directory>
            SetHandler form-login-handler
            AuthFormLoginRequiredLocation https://localNetwork.com/username/TestBed/HTML/login.html
            AuthFormLoginSuccessLocation https://localNetwork.com/username/TestBed/HTML/test.html
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            AuthType form
            AuthName realm
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
        </Location>

    And in the login HTML page I've added this for the user to log in:

        <form method="POST" action="/test.html">
            User: <input type="text" name="httpd_username" value="" />
            Pass: <input type="password" name="httpd_password" value="" />
            <input type="submit" name="login" value="Login" />
        </form>

    Read the article

  • How can I recover my data from a damaged hard drive?

    - by krk
    A few days ago, while I was working in Windows, my laptop was knocked on the side where the hard drive is located. As a result, the drive was damaged and I couldn't access the Windows partition. I had to boot the Linux one, which is working without any trouble. I have 2 partitions formatted with NTFS: the one with Windows on it, and another one intended to store data. I mounted the Windows partition from Ubuntu and I could see all my files. But when I tried to mount the data partition it was impossible; it threw an error saying it couldn't recognize the NTFS partition. I tried to copy the damaged disk onto an external hard drive using the command:

        dd if=/dev/sda of=/dev/sdb conv=noerror,sync

    The progress stopped at 60%, and I was still unable to mount the data partition. Now I'm trying to back up my files using a utility called PhotoRec. The problem is that it recovers my files in a disorderly way; everything is mixed up, and I need my original directory structure - it will become an endless task to organize the files as they were before. Is there any way I can get my partition back?
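    A hedged alternative to plain dd, assuming the damaged disk is /dev/sda and the external drive is /dev/sdb (double-check with lsblk before running anything): GNU ddrescue keeps a map of bad areas and can resume, which helps on a drive that stalls partway through. The partition number in the last steps is an assumption.

        sudo apt-get install gddrescue
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.map    # quick pass, skip bad areas
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.map   # retry bad areas up to 3 times
        # afterwards, attempt to repair and mount the copied NTFS data partition:
        sudo ntfsfix /dev/sdb2
        sudo mount -t ntfs-3g /dev/sdb2 /mnt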

    Read the article

  • File upload folder permission fastCGI - How to make it writeable?

    - by user6595
    I am using CentOS 5.7 with cPanel WHM running FastCGI/suEXEC. I am trying to make a particular folder writable to allow a script to upload files, but seem to be having problems. The folder (and all folders below it) I want to be writable is /home/mydomain/public_html/uploads, and I want only scripts run by the user "songbanc" to be able to write to this directory. I have tried the following:

        chown -R songbanc /home/mydomain/public_html/uploads
        chmod -R 755 /home/mydomain/public_html/uploads

    But it still doesn't seem to work. The script will only upload files if I set the permissions manually via an FTP client to 777. I assume I am misunderstanding how to set permissions for users with FastCGI, and hopefully someone can help me. Thanks in advance.

    EDIT: Running getfacl on one of the scripts or folders gives the following:

        # file: home/mydomain/public_html/ripples/1.jpg
        # owner: songbanc
        # group: songbanc

    So it appears that the owner is correct? I'm now totally confused!

    EDIT 2: The plot thickens... lsattr and chattr are returning "Inappropriate ioctl for device While reading flags on..."
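    A small diagnostic sketch, under the assumption that the uploads are actually being executed as some uid other than songbanc (which would explain why only 777 works); the PHP one-liner and its filename are illustrative only:

        # confirm which account the PHP/FastCGI processes really run as
        ps aux | grep -E 'php|fcgi' | grep -v grep
        # or drop a temporary script into the site and request it once in a browser:
        echo '<?php echo exec("id"); ?>' > /home/mydomain/public_html/whoami.php
        # if it reports a different user, grant that user (or its group) write
        # access to the uploads directory instead of opening it up to 777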

    Read the article

  • Using Openfire for distributed XMPP-based video-chat

    - by Yitzhak
    I have been tasked with setting up a distributed video-chat system built on XMPP. Currently my setup looks like this:

    - Openfire (XMPP server) + JingleNodes plugin for video chat
    - OpenLDAP (LDAP server) for storing user information and allowing directory queries
    - Kerberos server for authentication and passwords

    In testing with one set of machines (i.e. only three), everything works as expected: I can log in to Openfire and it looks up the user information in the OpenLDAP database, which in turn authenticates my user with Kerberos. Now, I want to have several clusters, so that there is a cluster on each continent. A typical cluster will probably contain 2-5 servers. Users logging in will be directed to the closest cluster based on geographical location. Something that concerns me particularly is the dynamic maintenance of contact lists. If a user is using a machine in Asia, for example, how would contact lists be updated around the world to reflect the current server he is using? How would that work with LDAP? Specific questions:

    - How do I direct users based on geographical location?
    - What is the best architecture for a cluster? -- would all traffic need to come into a load-balancer on each one, for example?
    - How do I manage the update of contact lists across all these servers?

    In general, how do I go about setting this up? What are the pitfalls in doing this? I am inexperienced in this area, so any advice and suggestions would be appreciated.

    Read the article

  • where is my disk space?

    - by user166241
    I recently had a problem with the .xsession-errors file - it became very big (~90GB) and took all the disk space (see "How I can check what takes disk space in /tmp?"). I cleaned it with the command > .xsession-errors, but after an hour it became large again. So I deleted it (rm .xsession-errors) - that helped in that it wasn't recreated, but again after an hour the disk space disappeared. Now there is no .xsession-errors anymore, but I don't know where the space went:

        df
        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      106640456 101223392         4 100% /
        udev             8166744         8   8166736   1% /dev
        tmpfs            3270224       972   3269252   1% /run
        none                5120         0      5120   0% /run/lock
        none             8175552       152   8175400   1% /run/shm

        du -sc * .[^.]* | sort -n
        0        initrd.img
        0        initrd.img.old
        0        proc
        0        sys
        0        vmlinuz
        0        vmlinuz.old
        4        cdrom
        4        lib64
        4        media
        4        mnt
        4        selinux
        8        dev
        12       srv
        16       lost+found
        68       tmp
        1124     run
        3396     lib32
        5164     .rpmdb
        5540     root
        8888     sbin
        9120     bin
        17132    etc
        106080   opt
        116956   boot
        861908   lib
        3530584  usr
        3821836  var
        13371260 home
        21859112 total

    So there is around 100GB used, but executing du -sc * .[^.]* | sort -n in the root directory finds only ~21GB - so what takes the other 80GB? How can I check? I suspect that when I deleted the .xsession-errors file the errors were redirected somewhere else - but where?
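    One hedged explanation consistent with the numbers above: the process writing the errors still holds the deleted file open, so its blocks are counted by df but invisible to du. A quick sketch to confirm (assumes lsof is installed):

        # list open files whose directory entry has been unlinked
        sudo lsof +L1 | grep -i deleted
        # compare against on-disk usage without crossing filesystems
        sudo du -xh --max-depth=1 / | sort -h
        # restarting the X session (or the process lsof points at) releases the space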

    Read the article

  • Web Server slows down (ASP.NET)

    - by mfeingold
    Below is a question I posted on Stack Overflow; as suggested by Martin Clarke, I am also posting it here. We have a really strange problem. One of the servers in the server farm becomes really slow. We see a number of timeouts in the logs, and overall response time is not where it should be (and is on other servers in the farm). What is also strange is that it is not just the web app - just logging into the server takes up to 1.5 min to show you the desktop. Once you are in, the system is as responsive as ever - unless you try to launch something, i.e. Notepad - it takes another minute to launch, and after launch it works fine. I checked a number of things - memory utilization is reasonable, CPU is below 15%, Windows handles and event logs do not show anything. Recycling the asp.net process does not fix it - it still takes over a minute to log in. Rebooting the server helped, but now it has started to slow down again. After a closer look we found out that the Windows Temp directory is full of temp files - over 65k files. This is certainly something to take care of. But my question is: could it be the root cause of the sluggishness, or is there still something else lurking in the shadows? Edit: After more digging I am zeroing in on the issue being related to the size of the temp directories. This article (see the original post; this thing will not let me include a second link) describes something very similar. It still does not answer the question why the server is still slow even when there is no activity.

    Read the article

  • Enterprise level Ticketing and inventory system recommendations [closed]

    - by TrackingSystem
    My company is sort of at a standstill when it comes to our technician ticketing and inventory system. We currently use Numara TrackIt! - which isn't cutting it, to say the least. Dell recommended KACE, but it's web based, which is what we would like to avoid. We need a good ticketing and inventory system with the following:

    - Server/client setup
    - Client supports XP/Windows 7 Ent.
    - Web based as well as client is a plus
    - Technician ticketing
    - Active Directory integration
    - Inventory system (asset tag tracking, PO tracking, etc.)
    - Exchange integration - when tickets are made you have an option to send them to the requester
    - Something that will scale well

    Please, if anyone is a systems admin or has knowledge regarding use of a great ticketing system, let me know. We are a large international corporation - price honestly isn't an issue. Keep in mind this will be mainly used for technicians to create tickets, enter inventory (track PCs) and possibly even track purchase orders. We want an enterprise-level ticketing system with these capabilities - please help! Thank you.

    Read the article

  • Get more space for redis in unix server

    - by DevTraveler
    In our app we have only Windows servers, except for the cases where we use Redis queues; in that case we use a Unix server created on Amazon. As you can see below, we do not have a lot of space available, and we want to make sure Redis has enough space to work without getting stuck. I am a little bit new to Unix, and after reading some data about the Unix file system I am still not fully sure how I can give the Redis drive (it is in the home directory) more space. I see that /mnt has a lot of space, but I read it is temporary, for CD-ROMs and network drives. Can you help me figure out how to get more space for my Redis? If possible, I'd prefer not to reinstall the Redis server.

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  6.7G  880M  89% /
        udev            7.4G  4.0K  7.4G   1% /dev
        tmpfs           3.0G  152K  3.0G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            7.4G     0  7.4G   0% /run/shm
        /dev/xvdb       414G   30G  364G   8% /mnt

    Thanks.
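    A hedged sketch of one option, assuming this is an EC2 instance where /dev/xvdb mounted on /mnt is ephemeral instance storage (its contents do not survive a stop/start, so an attached EBS volume would be safer for durable data): point Redis at a directory on the big volume rather than trying to grow the root filesystem. Paths and the service name are assumptions.

        sudo service redis-server stop
        sudo mkdir -p /mnt/redis
        sudo rsync -a /var/lib/redis/ /mnt/redis/
        sudo chown -R redis:redis /mnt/redis
        # in /etc/redis/redis.conf change the working directory:
        #   dir /var/lib/redis   ->   dir /mnt/redis
        sudo service redis-server start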

    Read the article

  • FTP timeout but SSH is working?

    - by nmarti
    I have a problem on my server: when I try to connect via FTP to a domain, the connection is VERY slow, and I get timeouts just listing the files in a directory. When I try to connect to the domain folder using the root user account via SSH, it works fine, and I can download the files without problems. What can be wrong? I tried rebooting the server and also the office router, and nothing... It is a Fedora Core 7 server with proftpd. Can it be a filesystem problem? Thanks.

    CONNECTION LOG:

        Cmd: MLST about.php
        250: Start of list for about.php
             modify=20120910092528;perm=adfrw;size=2197;type=file;UNIX.group=505;UNIX.mode=0644;UNIX.owner=10089; about.php
             End of list
        Cmd: PASV
        227: Entering Passive Mode (***hidden***).
        Data connection timed out. Falling back to PORT instead of PASV mode.
        Connection falling back to port (PORT) mode.
        Cmd: PORT ***hidden***
        200: PORT command successful
        Cmd: RETR about.php
        Could not accept a data connection: Operation timed out.
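    A hedged sketch of one common cause consistent with this log (control channel fine, data channel timing out in both PASV and PORT modes): the FTP data ports being blocked by a firewall or NAT in between. With proftpd, one approach is to pin the passive port range and open it; the port range below is an assumption.

        # /etc/proftpd.conf (restart proftpd afterwards):
        #   PassivePorts 49152 50000
        #   MasqueradeAddress <public-ip-if-behind-NAT>
        # open FTP control plus the passive range on the host firewall:
        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A INPUT -p tcp --dport 49152:50000 -j ACCEPT
        # load FTP connection tracking so active-mode transfers also pass:
        modprobe ip_conntrack_ftp    # or nf_conntrack_ftp on newer kernels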

    Read the article
