Search Results

Search found 17278 results on 692 pages for 'directory conventions'.


  • Best client and server antivirus for 5 user office?

    - by drpcken
    I'm setting up an Active Directory environment for 5 users (very small) and I'm wondering what the best antivirus is for the clients (Windows 7) and servers (Server 2008 R2 x64). I use Symantec Corporate at my organization (50+ users) but I think that is overkill for this company. I wanted to use Microsoft Security Essentials for the clients (I use it on home machines and it's the best free AV in my opinion) but I don't think it will work on the servers (3 servers: PDC, TS, and file). They are behind a SonicWall TZ 200. What would be the best? Free would be even better. Thank you!

    Read the article

  • How do I fix this error? Windows server 2003 the application failed to initialize properly (0xc0000022)

    - by Sharon
    When opening one of the programs from the user's desktop I get the above application error. It is a program stored on the server, with its icon placed on the users' desktops (this is how I was told to do it), but it does not run the application. I don't know anything about group policies etc. and can just about manage to add users in Active Directory, and that is it. We just have a folder which we drop the program icons into. Any ideas? I must be doing something wrong, as the icon doesn't always show up on their desktops either. What is the simplest way to do this? Thanks
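
    For what it's worth, 0xc0000022 is the NT status code for "access denied", so one first thing to check is whether the users have Read & Execute rights on the program's folder on the server. A sketch with a placeholder path (icacls ships with Server 2003 SP1 and later; older boxes only have cacls):

        :: grant Domain Users read & execute, recursively, on the shared program folder
        :: D:\Apps\MyProgram is a placeholder; point it at the real folder behind the share
        icacls "D:\Apps\MyProgram" /grant "Domain Users":(OI)(CI)RX /T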

    Read the article

  • Nginx HTTPS redirects causing loop

    - by Ben Chiappetta
    I've been banging my head against the wall trying to figure this out, so if anyone can help I'd appreciate it. My Nginx conf has three different redirect loops; I haven't been able to get any of the three to work right. The three problem areas are:

    1. Redirecting the memcache directory to SSL
    2. Redirecting the accounts directory to SSL
    3. Redirecting SSL to www if non-www

    nginx.conf:

        user nginx;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            error_log /var/log/nginx/error.log notice;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            proxy_set_header X-Url-Scheme $scheme;
            #gzip on;
            rewrite_log on;
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/default.conf:

        server {
            listen 80;
            server_name <redacted>.net;
            rewrite ^(.*) http://www.<redacted>.net$1;
        }

        server {
            listen 80;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            #charset koi8-r;
            access_log /var/log/nginx/host.access.log main;
            root /var/www/html;
            index index.php index.html index.htm;

            location =/memcache {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }

            location /accounts {
                rewrite ^/(.*)$ https://$server_name$request_uri? permanent;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
                deny all;
            }
        }

    conf.d/ssl.conf:

        # HTTPS server
        server {
            listen 443;
            server_name <redacted>.net;
            rewrite ^(.*) https://www.<redacted>.net$1;
        }

        server {
            listen 443 default_server ssl;
            server_name www.<redacted>.net;
            set_real_ip_from 192.168.30.4;
            set_real_ip_from 192.168.30.5;
            set_real_ip_from 192.168.30.10;
            real_ip_header X-Forwarded-For;
            proxy_set_header X-Forwarded_Proto https;
            proxy_set_header Host $host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            proxy_set_header X-Forwarded-Ssl on;
            set $https_enabled on;
            ssl_certificate <redacted>.crt;
            ssl_certificate_key <redacted>.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;
            root /var/www/html;
            index index.php index.html index.htm;

            location /memcache {
                auth_basic "Restricted";
                auth_basic_user_file $document_root/memcache/.htpasswd;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
                include /etc/nginx/fastcgi_params;
                try_files $uri = 404;
            }
        }
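
    One pattern worth flagging, as a sketch rather than a confirmed fix: the set_real_ip_from / real_ip_header lines suggest a proxy or load balancer in front of nginx. If that device terminates SSL and forwards everything to port 80, an unconditional rewrite-to-HTTPS in the port-80 server loops forever, because nginx never sees the request as HTTPS. Keying the redirect off the forwarded protocol header (assuming the proxy sets one) avoids that:

        # sketch: redirect only when the original client request was plain HTTP;
        # assumes the upstream proxy sets X-Forwarded-Proto
        location /accounts {
            if ($http_x_forwarded_proto != "https") {
                return 301 https://$host$request_uri;
            }
            # normal handling of /accounts continues here
        }

    Two smaller mismatches are also visible: "location =/memcache" in the port-80 server is an exact match, so /memcache/anything bypasses the redirect while the SSL server matches the /memcache prefix; and a plain "return 301" is generally preferred over a rewrite for this kind of redirect.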

    Read the article

  • Best practice to create an ftp administrator account on vsftpd

    - by jtd
    Background: My manager would like me to create an administration account for our FTP server. When logged in via FTP, it should instantly display all of the users' home directories, and be able to modify any directory or file in any way possible. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to properly go about the permissions. Maybe make a group called ftp_admins and chgrp the /home folder? But then wouldn't it affect the users accessing their folders? Any help is appreciated.
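
    One hedged approach that avoids chgrp-ing /home (and thus avoids touching the users' own group ownership) is to layer POSIX ACLs on top. A sketch, assuming an ACL-enabled filesystem; the names ftp_admins and ftpadmin are invented here:

        # admin group plus an account whose home (and chroot) is /home
        sudo groupadd ftp_admins
        sudo useradd -d /home -s /usr/sbin/nologin -G ftp_admins ftpadmin
        sudo passwd ftpadmin

        # grant the group full access without changing owners or groups
        sudo setfacl -R -m g:ftp_admins:rwx /home        # existing files
        sudo setfacl -R -d -m g:ftp_admins:rwx /home     # default ACL for new files

    In vsftpd.conf, chroot_local_user=YES then confines ftpadmin to its home directory, /home, which is exactly the view described above; regular users get chrooted to their own homes by the same setting.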

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I've been learning about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know that I could put all those into a single folder/directory, but what if they already exist in disparate directories, and they need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that that was supposed to be part of Vista/7, but that fell off the feature list too. Sure, any program can store a binary file and have any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

    Read the article

  • Having problems with Grub2 booting Ubuntu from my External Hard Drive

    - by anonymous
    I installed Ubuntu on my external hard drive but it won't boot on my laptop. What do I do? I did some reading and traced the source of the problem to GRUB 2. Apparently, GRUB 2 doesn't use the device's UUID, and uses the Linux device name instead (/dev/sdf2). This means that whenever I plug my external HDD into a system that has a different number of drives connected to it, I won't be able to boot without editing the boot command. I don't understand it too well, but that's what I got from what I read. Is there any way to fix this?
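
    For reference, GRUB 2 on Ubuntu can identify filesystems by UUID; the relevant switch lives in /etc/default/grub. A sketch, run from the system installed on the external drive (or a chroot into it):

        # UUID naming must not be disabled; this line should be absent or set to "false"
        grep GRUB_DISABLE_LINUX_UUID /etc/default/grub
        sudo update-grub        # regenerate grub.cfg

        # the regenerated entries should then locate the partition like this,
        # independent of how many drives are attached:
        #   search --no-floppy --fs-uuid --set=root <uuid>
        blkid /dev/sdf2         # prints the partition's actual UUID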

    Read the article

  • SIMPLEST way to set up password protection for a static site, with basic admin UI?

    - by Joseph Turian
    I have a static site. I would like the simplest approach to password protecting a directory, with a basic admin UI for adding/removing users. I will have so few users that I don't care about performance. I don't care if it's PHP or Django or whatever, I just want a complete software package. Apache basic auth isn't good, because you can't log out. Nor is there a UI for adding users. I tried throwing everything behind Django auth and serving the files through Django. However, Chrome treats all my text/css headers as text/plain, so I don't get any stylesheets showing. I can't use mod_xsendfile on my server because I can't reconfigure Apache to add new modules. I think this approach is overkill anyway. I can try configuring Nginx's X-Accel-Redirect, however that requires implementing all the Django code for auth myself, and I'd prefer an existing solution. However, this is my backup plan. Is there a code package that implements authentication with basic admin for a static site?

    Read the article

  • Expression web ftp: Stuck at "Listing subsites"

    - by FrankPython
    When I try to use the Expression Web 4 built-in FTP I see the message "Listing subsites in..." and soon afterward "passive ftp not available". If I switch to active, I get "active ftp not available". There are no subsites; it is a simple directory with one HTML page. The backend is a normal IIS 6 server. FTP to the same IP with other FTP clients works fine! Any idea if Expression Web has some specific requirements? It is our own dedicated server. (Please, no tips to use another tool; for this specific project Expression Web is a requirement.)

    Read the article

  • How to Install WebLogic 12c ZIP on Linux

    - by Bruno.Borges
    I knew that WebLogic had this small ZIP distribution, of only 184 MB, but what I didn't know was that it is so easy to install on Linux machines, especially for development purposes, that I thought I had to blog about it. You may want to check this blog, where I found the missing part of this how-to, but I'm blogging it again because I wanted to put it in a simpler way, straight to the point. And if you are looking for a how-to for Mac, check Arun Gupta's post. So, here's the step-by-step:

    1 - Download the ZIP distribution (don't worry if your system is x86_64). Don't forget to accept the OTN Free Developer License Agreement!

    2 - Choose where to install your WebLogic server and your domain, and set it as your MW_HOME environment variable. I will use /opt/middleware/weblogic for this how-to:

        export MW_HOME=/opt/middleware/weblogic

    Make sure this path exists in your system. 'mydomain' will be used to keep your WebLogic domain:

        mkdir -p $MW_HOME/mydomain

    3 - If you don't have your JAVA_HOME environment variable configured yet, do it. Point it to where your JDK is installed:

        export JAVA_HOME=/usr/lib/jvm/default-java

    4 - Unzip the downloaded file into MW_HOME:

        unzip wls1211_dev.zip -d $MW_HOME

    5 - Go to that directory and run configure.sh:

        cd $MW_HOME
        ./configure.sh

    6 - Call the setWLSEnv.sh script:

        . $MW_HOME/wlserver/server/bin/setWLSEnv.sh

    7 - Create your development domain. It will ask you for a username and password. I like to use weblogic / welcome1:

        cd $MW_HOME/mydomain
        $JAVA_HOME/bin/java $JAVA_OPTIONS -Xmx1024m \
            -Dweblogic.management.allowPasswordEcho=true weblogic.Server

    8 - Start WebLogic and access its web console:

        (sh startWebLogic.sh &); sleep 10; firefox http://localhost:7001/console

    Usually, it takes only 10 seconds to start a domain, and 5 more to deploy the Administration Console (on my laptop). :-) Enjoy!

    Read the article

  • Client side certificates in client browsers with unix server for management

    - by user146253
    We are currently running Unix dedicated servers for everything (web cluster, database, FTP, batch, ...) except for Microsoft Active Directory Certificate Services. The sole purpose of this Windows box is to provide client-side certificates for our clients' browsers. All our clients are required to install a client-side certificate in order to access our website. Is there an alternative in the Unix space? The purpose is to make sure only the approved hardware of an approved client can access our website. I'm open to any solution that provides me with this level of security. We are, however, talking about thousands of certified computers, just so you can factor that into a proposed solution. Optionally we would also like to be able to revoke access. With regards.
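
    The Unix-space equivalent is to run your own CA with OpenSSL, have the web server demand a certificate signed by it, and revoke through a CRL. A sketch; all filenames and subjects below are placeholders:

        # one-time CA setup
        openssl genrsa -out ca.key 4096
        openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=Example Client CA"

        # per-client certificate, exported as PKCS#12 for browser import
        openssl genrsa -out client1.key 2048
        openssl req -new -key client1.key -out client1.csr -subj "/CN=client1"
        openssl x509 -req -days 365 -in client1.csr -CA ca.crt -CAkey ca.key \
            -set_serial 1001 -out client1.crt
        openssl pkcs12 -export -in client1.crt -inkey client1.key -out client1.p12

    On nginx the server side is ssl_client_certificate ca.crt; ssl_verify_client on; with ssl_crl for revocation; Apache's mod_ssl equivalents are SSLCACertificateFile, SSLVerifyClient require and SSLCARevocationFile. At the scale of thousands of clients, wrapping issuance in scripts or a full CA package (e.g. EJBCA) is the usual next step.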

    Read the article

  • Sharing accounts between multiple computers running Ubuntu Linux

    - by john
    My school has a computer lab full of machines running Red Hat Linux. They have it set up so that you can log into any computer in the lab, and it automatically loads your desktop, home directory, etc., which makes all computers in the lab look the same to you, regardless of which one you're using. I have two computers at home running Ubuntu Linux. Could I do this same thing with my computers at home? What's it called, and how do I find documentation on how to set it up? Thanks!
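
    What the lab is almost certainly running is central authentication (NIS or LDAP) plus NFS-mounted home directories. For just two home machines with matching usernames and UIDs, NFS alone gets most of the way; a sketch, with "deskone" standing in for whichever box holds the homes:

        # on deskone, export /home to the LAN
        sudo apt-get install nfs-kernel-server
        echo "/home 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
        sudo exportfs -ra

        # on the other machine, mount it at boot
        sudo apt-get install nfs-common
        echo "deskone:/home /home nfs defaults 0 0" | sudo tee -a /etc/fstab
        sudo mount -a

    The lab-scale version of this replaces the hand-synced accounts with NIS or LDAP, which is the piece to search for under "centralized authentication".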

    Read the article

  • Packing up files on my machine, sending it to a server, and unpacking it

    - by MxyL
    I am implementing a feature in my application that sends all files in a specified folder to a server. I have the basic FTP transaction set up using Apache Commons FTPClient: it sets up a connection and transfers a file from one place to another. So I can simply loop over the directory and use this connection to transfer all the files. However, this could be better. Rather than transferring each file one by one, it makes more sense to pack them up in a compressed archive and then send the whole file at once. It saves time and bandwidth, since these are just text files and they compress nicely. So I would like to add automatic archive packing and unpacking. This is the workflow I have planned out, using zip compression:

    1. Zip all files in the folder
    2. Send the file over
    3. Unzip the files at the destination

    Steps 1 and 2 are easy since the files are on the local machine, but I'm not sure how to accomplish the last step, when the files are now on a remote server. What are my options? I have control over what I can put and run on the server. Perhaps it is not necessary to do the packing/unpacking myself?
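
    Step 3 only needs something on the server that can run unzip; if SSH access exists alongside FTP, the whole round trip is three commands (hostnames and paths are placeholders):

        # pack locally, push over FTP, unpack remotely
        zip -r bundle.zip /path/to/folder
        curl -T bundle.zip ftp://user:password@server.example.com/incoming/
        ssh user@server.example.com 'unzip -o incoming/bundle.zip -d /destination'

    From the Java side, step 1 maps to java.util.zip.ZipOutputStream and step 2 to the FTPClient.storeFile() call already in place; the server-side unzip can be a cron job watching the incoming directory if no remote-execution channel is available.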

    Read the article

  • Tomcat / Railo stop responding with no error output

    - by andrewdixon
    This is going to sound very vague, and I'm sure it will be voted down for not giving enough information, but I don't really have any to give, as you will see. We have an AWS instance running Amazon Linux, Apache, Tomcat and Railo, and from time to time Tomcat/Railo simply stops responding to requests, with no errors output in catalina.out or any of the other log files in the Tomcat logs directory. When I issue the command to restart Tomcat/Railo, the restart script sits there for a while, then says that Tomcat has not responded so it has killed it off, and then it starts up again and everything is fine until it happens again, anything from a couple of minutes to a couple of days later. I have done my best to check other logs on the server but have found no messages at all to indicate why Tomcat/Railo has given up and stopped responding. Can anyone suggest any reason why it might be doing this, and/or any other log file(s) that we could check to see what is happening? Thanks. Andrew.
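
    Not from the thread, but the standard move when a JVM hangs silently is to take a thread dump at the moment of the hang; it costs nothing and usually names the culprit. A sketch (pid and user will differ):

        # find the Tomcat/Railo JVM and dump its threads (needs a JDK)
        pgrep -f org.apache.catalina.startup.Bootstrap
        jstack -l <pid> > /tmp/tomcat-threads.txt

        # without JDK tools: SIGQUIT makes the JVM write the dump to catalina.out
        kill -3 <pid>

    In the dump, look for a thread pool entirely blocked on one lock or waiting on a database connection. Also worth ruling out is the kernel OOM killer, which terminates processes with no application-side logging at all: grep -i kill /var/log/messages.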

    Read the article

  • Distributing my Application inside a Debian Virtual Machine Image-- How to meet GPL obligations?

    - by bdk
    I have a Linux application I've developed, and I have created a standalone VMware image that people can download to try out the application without needing to install and configure a Linux server. I created this VMware image by starting with a base Debian system, installing a bunch of packages, and then configuring all the packages and daemons my application depends on. Upon load, the VMware image boots right into an X server running only my application and no window manager, so it's more of a "virtual appliance" than a normal Linux desktop environment. Users generally will never see a command prompt or any application other than my own. (My application itself, I have a handle on the licensing issues of.) Now I would like to distribute this image, but I'm not sure how to meet my obligations under the GPL (and the other licenses the various Debian components are released under). As I understand it, I have two primary obligations to meet:

    1. Providing copyright and license information for each component I use. As I understand it, all the information I am required to present is located in the /usr/share directory on a Debian system, but since my users will generally never touch a console or terminal, they will never see this. Does providing a text file containing a concatenation of all the files inside /usr/share meet this obligation?

    2. Making source code available for all components I distribute. Since I am not creating the image from source, but from binary packages, I can't provide the actual source code that results in exactly my image being generated. Does providing an FTP mirror, and an offer to send that mirror on DVDs, of the Debian source debs for all the packages I use meet this obligation?

    Is there anything else I'm required to do to legally distribute this image?
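
    On the first point, the per-package license texts live under /usr/share/doc/<package>/copyright on a Debian system, so (as a sketch of the mechanics only, not legal advice) collecting them all for inclusion in the image is a short loop:

        # gather every installed package's copyright file into one directory
        mkdir -p /tmp/licenses
        for p in $(dpkg-query -W -f '${Package}\n'); do
            cp "/usr/share/doc/$p/copyright" "/tmp/licenses/$p" 2>/dev/null
        done

    For the second point, dpkg-query -W -f '${Package}=${Version}\n' yields the exact binary versions installed, which is the list an offered source mirror would need to match.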

    Read the article

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps:

    1. Used ssh to log in to the stalled system over the network.
    2. Checked the contents of the /var/log/dist-upgrade directory. There was no activity in main.log, apt.log or term.log. top showed that the process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps.
    3. Killed the 'dpkg' and 'precise' processes.
    4. Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock; sudo dpkg --configure -a. This was partially successful (some packages were configured), but it failed with the message "Processing was halted because there were too many errors." I ran the same command a few times, and each time some packages were configured but others failed.
    5. Tried running sudo apt-get -f install. It fails with similar errors to dpkg.

    The current situation is that dpkg --configure -a and sudo apt-get -f install fail with two kinds of error. Dependency issues, e.g.:

        dpkg: dependency problems prevent configuration of cifs-utils:
         cifs-utils depends on samba-common; however:
          Package samba-common is not configured yet.
        dpkg: error processing cifs-utils (--configure):
         dependency problems - leaving unconfigured

    And resource conflicts, e.g.:

        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable

    Additionally, there are references to potential boot problems, so I'm not keen to reboot without fixing the install first:

        dpkg: too many errors, stopping
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic
        cryptsetup: WARNING: failed to detect canonical device of /dev/sda1
        cryptsetup: WARNING: could not determine root device from /etc/fstab

    So my question is, how do I get a working install when dpkg --configure -a fails?
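
    Not from the thread, but the usual untangling order for this state: clear the stale debconf lock first (it is what produces the "Resource temporarily unavailable" errors), then let dpkg and apt iterate until the dependency errors drain. A sketch:

        # identify whatever still holds the debconf database; kill it only if stale
        sudo fuser -v /var/cache/debconf/config.dat

        # then resume, repeatedly if necessary
        sudo dpkg --configure -a
        sudo apt-get -f install
        sudo apt-get dist-upgrade

    Repeating the dpkg/apt pair is expected: each pass configures the packages whose dependencies the previous pass satisfied, so samba-common configuring on one run unblocks cifs-utils on the next.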

    Read the article

  • Is it possible to push DNS search suffices from DNS server to client?

    - by Mark
    Our (Active Directory, Windows-Server-based) intranet used to be called "intranet", and DNS worked fine for Windows machines and iPads/Android devices. We have changed it to "apps.intranet", and it still works for Windows machines, but no longer for iPads/Android devices. I think this is because our Windows clients are configured to append .company.com when searching DNS, to make it a fully qualified lookup (this search suffix list is pushed to the PCs via AD group policies). I must admit, though, I don't know why it worked with just "intranet"! Does anyone know if it's possible to get DNS to "tell" the iPads/Android devices to append .company.com, or how we can make it work some other way (but still using the multi-label, non-qualified DNS names)? Thanks!
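
    DNS itself has no mechanism to push suffixes, but DHCP does: option 119 is a domain search list, and iOS honors it (Android's DHCP client has historically ignored search domains, so results there vary). A sketch in dnsmasq syntax:

        # /etc/dnsmasq.conf: hand DHCP clients a DNS search list (option 119)
        dhcp-option=option:domain-search,company.com

    The Windows DHCP server does not define option 119 out of the box; it can be added as a custom option, though the value must be encoded by hand. The other way around the problem is to use the FQDN (apps.intranet.company.com) on the devices that cannot take a search list.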

    Read the article

  • SSH and Latent Connections (e.g., satellite connections)

    - by user71494
    Most of the week I live in the city, where I have a typical broadband connection, but most weekends I'm out of town and only have access to a satellite connection. Trying to work over SSH on a satellite connection, while possible, is hardly desirable due to the high latency (over 1 second). My question is this: is there any software that will do something like buffering keystrokes on my local machine before they're sent over SSH, to help make the lag on individual keystrokes a little more transparent? Essentially I'm looking for something that would reduce the effects of the high latency for everything except commands (e.g., opening files, changing to a new directory, etc.). I've already discovered that vim can open remote files locally and write them back remotely, but while this is a huge help, it is not quite what I'm looking for, since it only works when editing files and requires opening a connection every time a read/write occurs. (For anyone who may not know how to do this and is curious, just use this command: 'vim scp://host/file/path/here'.)
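
    This is essentially the problem mosh (mobile shell) was built to solve: it does client-side predictive echo over a UDP transport, so keystrokes appear immediately and only corrections wait out the satellite round trip. Assuming it is packaged on both ends:

        # install on both client and server, then connect as with ssh
        sudo apt-get install mosh
        mosh user@remote-host

    mosh authenticates over ordinary SSH and then hands the session off, so no extra accounts or keys are needed; the server just needs a UDP port in the 60000-61000 range reachable.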

    Read the article

  • Mac failing (failed?) hard drive - is all hope lost?

    - by Daniel
    It's a 500 GB Seagate laptop hard drive that came with my MacBook Pro, Apple partition format. Already replaced, and now I have it external, connected via a SATA/USB adapter. I'm trying to get just a few files that I worked on while out of town when it crashed (and thus did not have my Time Machine backup drive). The drive will not mount, but OS X Disk Utility detects it and can read the capacity, model number, and even the name of the partition, which leads me to believe all hope may not be lost. Failed attempts so far:

    - Disk Utility verify+repair says the drive cannot be repaired and that I should back up immediately (lovely)
    - DiskWarrior says it cannot rebuild the directory due to hardware failure
    - Data Rescue quick & deep scans immediately failed
    - PhotoRec says "error reading sector" for every sector (at least for the few minutes I let it run before closing it to explore other options)

    What else can I try here? Again, I'm just looking for a few small files (Python scripts, to be specific) - not a full recovery.
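
    One avenue missing from the list: with a drive this marginal, every scan risks finishing it off, so the usual advice is to image it once with GNU ddrescue and aim the recovery tools at the image instead. A sketch; the device node is an assumption (check with diskutil list):

        # grab everything readable first (-n skips the slow scraping pass),
        # keeping a map file so the run can be resumed
        sudo ddrescue -n /dev/disk2 rescue.img rescue.map
        # then retry the bad regions a few times
        sudo ddrescue -r3 /dev/disk2 rescue.img rescue.map

    Data Rescue and PhotoRec can both be pointed at rescue.img, and since the wanted files are plain-text Python scripts, even grep -a over the image can locate them.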

    Read the article

  • Ubuntu is not detecting my android device

    - by user3514160
    I am new to Android. I have just downloaded and installed the Android SDK. Now when I run the application from Eclipse, my device is not detected. I googled and came up with this as my solution, but that also didn't work. Here's the 51-android.rules:

        SUBSYSTEMS=="usb", ATTR{idProduct}=="0bb4", ATTR{idProduct}=="0c03", MODE="0666", GROUP="plugindev", OWNER="<username>"

    After that I rebooted my laptop and ran this command:

        username@laptopname:~/Android/adt-bundle/sdk/platform-tools$ adb devices

    The output I get is:

        * daemon not running. starting it now on port 5037 *
        * daemon started successfully *
        List of devices attached
        ????????????    no permissions

    EDIT:

        crazydeveloper@crazydeveloper:~$ lsusb
        Bus 002 Device 004: ID 0bb4:0c03 HTC (High Tech Computer Corp.)
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 003: ID 04f2:b337 Chicony Electronics Co., Ltd
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        crazydeveloper@crazydeveloper:~$ ls -l /dev/bus/usb/004/
        ls: cannot access /dev/bus/usb/004/: No such file or directory

    Edit 2: After the answer submitted, here's the output that I got:

        crazydeveloper@crazydeveloper:~$ ls -l /dev/bus/usb/002
        total 0
        crw-rw-r--  1 root root    189, 128 May  7 09:45 001
        crw-rw-r--+ 1 root root    189, 129 May  7 09:45 002
        crw-rw-rw-  1 root plugdev 189, 130 May  7 09:48 003

    I am using a Micromax Canvas 2.2 A114 (Android 4.2.2). Please help me. Thanks.
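
    For what it's worth, the posted rule looks wrong in two places given the lsusb output: 0bb4:0c03 means idVendor 0bb4 and idProduct 0c03 (the rule sets idProduct twice), and the group is normally spelled plugdev, not plugindev. A corrected sketch:

        # /etc/udev/rules.d/51-android.rules
        SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", ATTR{idProduct}=="0c03", MODE="0666", GROUP="plugdev"

        # reload the rules, replug the phone, restart adb
        sudo udevadm control --reload-rules
        adb kill-server
        adb devices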

    Read the article

  • correct format for datetime appended to filename

    - by jhayes
    I'm trying to set up a batch file to execute a set of stored procs and dump the output to a timestamped text file. I'm having problems finding the correct format for the timestamp. Here is what I'm using:

        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%date:~-0,10%_%time:~-0,10%.txt"

    The error I get is:

        Cannot open output file - x:\filename_Thu 06/25/_16:26:43.1.txt
        No such file or directory

    I can't find the documentation, and I've played around with it but can't find the correct format.
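
    The error message shows the root cause: %date% expands to something like "Thu 06/25/2009", and slashes and colons are illegal in filenames. A sketch that slices out only safe characters (the substring offsets assume the US "Ddd MM/DD/YYYY" date format; adjust for your locale):

        rem build YYYY-MM-DD_HH-MM-SS from %date% and %time%
        set stamp=%date:~-4%-%date:~4,2%-%date:~7,2%_%time:~0,2%-%time:~3,2%-%time:~6,2%
        rem %time% has a leading space before 10:00; squash spaces to zeros
        set stamp=%stamp: =0%
        osql.exe -S <server> -E -Q "EXEC <stored procedure>" -o "c:\filename_%stamp%.txt"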

    Read the article

  • Do I need to recompile PHP to make use of CURL API?

    - by amn
    I have both Apache and PHP set up manually, albeit the latter without cURL. There is a jungle of instructions and explanations on extensions for PHP. I have a very straightforward question: what do I need to do to enable cURL in a more dynamic way? I resent the idea of static linking; in fact, I hate and avoid static linking like the plague. Is it possible to have my Apache and PHP understand that there is cURL in town? I can compile cURL if necessary. Package management may be out of the question, because I built PHP myself - I am on Ubuntu, and it does not provide PHP without Suhosin and a whole lot of time, so I removed it and built PHP myself. The whole slew of related questions simply proposes installing the "php5-curl" package, which is exactly the one thing I CANNOT do, since it installs into a completely unrelated directory, which my PHP does not even seem to link to.
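
    For a hand-built PHP, the dynamic route is to compile ext/curl as a shared module with phpize; Apache itself needs no change. A sketch, assuming the PHP source tree matching the installed binary and the libcurl development headers are present:

        cd php-src/ext/curl
        phpize                      # generate the configure script for this extension
        ./configure --with-curl     # or --with-curl=/prefix for a self-built libcurl
        make
        sudo make install           # installs curl.so into the extension_dir

        # then enable it in php.ini and restart Apache:
        #   extension=curl.so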

    Read the article

  • Upgraded to Ubuntu 12.04 from 10.04 and have to transfer database from Postgresql 8.4 to 9.1

    - by Stpn
    I upgraded a server with a Rails application from Ubuntu 10.04 to 12.04 and now cannot connect to the PostgreSQL database... Here is the error message from the Rails app:

        could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Also, pg_ctl start is not recognized as a command. EDIT: Turns out my database is on PostgreSQL 8.4 and my server is now running 9.1, so all the database files/configs are on 8.4. How can I transfer them? Just straight copy from the old pg_hba.conf?
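
    Ubuntu's postgresql-common ships a wrapper for exactly this migration. A sketch, assuming both the 8.4 and 9.1 packages are installed and the clusters carry the default name "main":

        # drop the empty 9.1 cluster created at install time,
        # then migrate the 8.4 data and configs into a new 9.1 cluster
        sudo pg_dropcluster --stop 9.1 main
        sudo pg_upgradecluster 8.4 main

        # once the app checks out against 9.1:
        # sudo pg_dropcluster 8.4 main

    This also rewrites pg_hba.conf and the rest of the configuration for the new version, which a straight file copy would not do safely.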

    Read the article

  • How can I automatically convert all source code files in a folder (recursively) to a single PDF with syntax highlighting?

    - by Bentley4
    I would like to convert the source code of a few projects to one printable file, to save on a USB stick and print out easily later. How can I do that? Edit: First off, I want to clarify that I only want to print the non-hidden files and directories (so no contents of .git, for example). To get a list of all non-hidden files in non-hidden directories in the current directory you can run the command

        find . -type f ! -regex ".*/\..*" ! -name ".*"

    as seen in the answer to this thread. As suggested in that same thread, I tried making a PDF file of the files by using the command

        find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 | xargs -0 a2ps -1 --delegate no -P pdf

    but unfortunately the resulting PDF file is a complete mess.
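
    A variant that sidesteps the delegation problem: let a2ps do the syntax highlighting to PostScript, then convert once with ps2pdf (from ghostscript). A sketch, assuming both tools are installed:

        find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 \
          | xargs -0 a2ps -1 --pretty-print -o code.ps
        ps2pdf code.ps code.pdf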

    Read the article

  • Can a working Tomcat 6 webapp be turned into a usable .war file?

    - by Bill Cole
    Problem: I have a working webapp on a FreeBSD 8.1 Tomcat 6 test server that I need to move to a production system. The developer who last touched it (and had root on that server) has moved on and isn't helpful. The running app seems to have been deployed from a CVS server that is now unavailable. My thinking is that I would like to find a way to wrap the working webapp into a proper .war so that I can deploy it on a pristine host and (after testing) send the existing system to a very deep bit bucket. But I'm not having luck finding a way to do that. I'm a sysadmin, not a developer, and don't work much with Tomcat systems, so I may be (likely am) overlooking something blindingly simple. I gather that I may be able to just tar up the deployed directory and untar it on the new machine, but I have a nagging feeling that there are pitfalls in that.
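
    Since Tomcat normally just explodes the .war verbatim on deployment, repacking the deployed directory with the JDK's jar tool is usually all it takes; a sketch with placeholder paths:

        cd /usr/local/tomcat/webapps/myapp     # the exploded webapp
        jar -cf /tmp/myapp.war .               # repack everything, including WEB-INF

    Tomcat keeps compiled JSPs in its own work/ directory outside the webapp, so they will not contaminate the archive; still worth a look before shipping is WEB-INF/lib, to confirm all dependencies really live inside the webapp.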

    Read the article

  • How do I add permissions via command line for "everyone" on external HDD

    - by acidzombie24
    I have an external HDD and I kind of messed up the file permissions, but when fixing them I thought it was OK because with my username I can access the files perfectly fine. Now that I use this with two PCs (actually, at the moment I don't have access to my other PC) I can't access these files. The problem is this directory has hundreds of folders with no permission for "Everyone". I would like to give it the default permissions, including full access for the user "Everyone". How do I do that via the command line for these hundreds of folders?
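
    On Windows 7 this is icacls territory: reset the ACLs to inherited defaults, then grant Everyone full control recursively. A sketch, with X: standing in for the external drive's letter:

        :: replace existing ACLs with inherited defaults, then grant Everyone full control
        icacls X:\ /reset /T /C
        icacls X:\ /grant Everyone:(OI)(CI)F /T /C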

    Read the article
