Search Results

Search found 17278 results on 692 pages for 'directory conventions'.


  • Understanding !d command in sed with respect to saves

    - by richardh
    I have a directory of tab-delimited text files, and some have comments in the first few lines that I would like to delete. I know that the first good line starts with "Mark", so I can use /^Mark/,$!d to delete these comments. After this deletion I make several other replacements in the (new) first line, which holds the variable names. My question is: why do I have to save sed's output to a file to get my script to work? I understand that if a line is deleted, then nothing proceeds downstream because there is no output. But if I don't delete (i.e., !d), why do I still have to save to a file? Thanks! Here is my shell script. (I'm a sed newbie, so any other feedback is also appreciated.)

        #!/bin/bash
        for file in *.txt; do
            mv $file $file.old1
            sed -e '/^Mark/,$!d' $file.old1 > $file.old2
            sed -e '1s/\([Ss]\)hareholder/\1hrhldr/g' \
                -e '1s/\([Ii]\)mmediate/\1mmdt/g' \
                -e '1s/\([Nn]\)umber/\1o/g' \
                -e '1s/\([Cc]\)ompany/\1o/g' \
                -e '1s/\([Ii]\)nformation/\1nfo/g' \
                -e '1s/\([Pp]\)ercentage/\1ct/g' \
                -e '1s/\([Dd]\)omestic/\1om/g' \
                -e '1s/\([Gg]\)lobal/\1lbl/g' \
                -e '1s/\([Cc]\)ountry/\1ntry/g' \
                -e '1s/\([Ss]\)ource/\1rc/g' \
                -e '1s/\([Oo]\)wnership/\1wnrshp/g' \
                -e '1s/\([Uu]\)ltimate/\1ltmt/g' \
                -e '1s/\([Ii]\)ncorporation/\1ncorp/g' \
                -e '1s/\([Tt]\)otal/\1ot/g' \
                -e '1s/\([Dd]\)irect/\1ir/g' \
                $file.old2 > $file
            rm -f $file.old*
        done
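    For illustration, a minimal sketch of the same job without hand-managed temp files. It assumes GNU sed, whose -i flag does the save step internally (it writes to a temporary file and renames it over the original; sed streams and cannot safely write the very file it is still reading, which is why some form of saving is unavoidable):

        #!/bin/bash
        # Hedged sketch: assumes GNU sed; -i handles the temp-file-and-rename dance.
        # Two passes are kept on purpose: the '1s' addresses refer to *input* line
        # numbers, and the header only becomes line 1 after the first pass completes.
        for file in *.txt; do
            sed -i '/^Mark/,$!d' "$file"              # drop leading comment lines
            sed -i -e '1s/\([Ss]\)hareholder/\1hrhldr/g' \
                   -e '1s/\([Nn]\)umber/\1o/g' \
                   "$file"                            # add the remaining -e rules here
        done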


  • Logging won't stop on a log file after renaming/moving it; how do I stop it?

    - by Jakobud
    Just discovered that logrotate is not rotating our firewall log, so it's up to 12 GB in size. I need to split the file into smaller chunks and start rotating them manually so I can get things back on track. But before I start splitting the log up, I need to stop the firewall from logging to the current log file and force it to start logging to a new, empty file; that way I'm not trying to split up or rotate a log file that is still constantly growing. I tried to simply do this:

        mv firewall firewall.old
        touch firewall

    I expected the new, empty firewall file to start growing in size, but no... firewall.old is still being logged to. Then I tried to stop/start iptables. No change; firewall.old is still the log file. I tried moving it to another directory. That didn't help. I tried stopping iptables, renaming the file, creating a new firewall file, and then starting iptables again, but no change. How do I stop the logging on this file and force it to start logging to a new file?
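    For what it's worth, a hedged sketch of the usual fix. It assumes the entries are written by a syslog daemon (iptables itself only emits to the kernel log; (r)syslogd writes the file), which keeps the renamed file open by file descriptor and must be told to reopen its outputs:

        # Hedged sketch: assumes a (r)syslog daemon owns the file; paths illustrative.
        mv /var/log/firewall /var/log/firewall.old
        touch /var/log/firewall
        # SIGHUP makes classic syslogd (and rsyslogd) close and reopen its log
        # files, so new entries land in the fresh file instead of the old inode.
        kill -HUP "$(cat /var/run/syslogd.pid)"
        # or, on rsyslog systems:
        # /etc/init.d/rsyslog restart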


  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, after creating a new virtual host / domain in Plesk a few months back, I cannot seem to find the access log. I noticed this when running /usr/local/psa/admin/sbin/statistics, while the host in question was being scanned:

        Main HTML page is 'awstats.<hostname_masked>-http.html'.
        Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf"
          by AWStats version 6.95 (build 1.943)
        From data in log file "-"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Jumped lines in file: 0
        Parsed lines in file: 0
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 0 new qualified records.

    So basically no access logs have been parsed/found. I then went on to check whether I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs, but all I find is error_log. Does anybody know what is wrong here and perhaps how I could fix it? Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which contains only some rewrite conditions plus a directory statement with php_admin_flag and php_admin_value settings. None of them are related to logging, though.
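    One hedged diagnostic (the path is typical of Plesk-era layouts and purely illustrative): check whether the generated vhost configuration actually emits a CustomLog directive for the HTTP side of this domain:

        # Hypothetical path -- substitute the real (masked) domain.
        grep -Ri customlog /var/www/vhosts/<hostname_masked>.com/conf/
        # If no CustomLog line is emitted for the HTTP vhost, Apache never writes
        # access_log at all, and AWStats has nothing to parse (hence: log file "-").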


  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers. My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues. My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    1. Is there already an option to do this in CUPS that I've missed?
    2. The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer". But the DeviceURI is rejected as bad unless I specify a directory, like "-v multipass:/tmp". Why is this?
    3. For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. Logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
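    For reference, a hedged sketch of a minimal fan-out backend. It assumes the standard CUPS conventions: backends live in /usr/lib/cups/backend/, are invoked as job-id user title copies options [file] (with the job on stdin when no file argument is given), print a discovery line when run with no arguments, and exit 0 on success; note also that CUPS runs 0755 backends as an unprivileged user and 0700 ones as root. The queue names below are placeholders:

        #!/bin/bash
        # Hypothetical /usr/lib/cups/backend/multipass (root-owned, executable).
        if [ $# -eq 0 ]; then
            # Discovery mode: tell cupsd this backend exists.
            echo 'direct multipass "Unknown" "Fan-out to two queues"'
            exit 0
        fi
        # Arguments: job-id user title copies options [file]
        tmp=$(mktemp /tmp/multipass.XXXXXX)
        if [ $# -ge 6 ]; then
            cat "$6" > "$tmp"     # job supplied as a file
        else
            cat > "$tmp"          # job supplied on stdin
        fi
        # "printer-a" and "printer-b" are placeholder names of the real queues.
        lp -d printer-a "$tmp" >/dev/null 2>&1
        lp -d printer-b "$tmp" >/dev/null 2>&1
        rm -f "$tmp"
        exit 0                    # 0 = success (CUPS_BACKEND_OK)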


  • What should be monitored to troubleshoot file sharing problems?

    - by RyanW
    I'm running into some problems with a file share used by an ASP.NET web application. In this configuration, there are 2 web servers (Win2k8 Web) that connect to a file server (Win2k8 Enterprise), reading and writing files via a file share. Recently, one of the web servers has begun encountering an error accessing the file share:

        IOException: The specified network name is no longer available.

    There does not appear to be much info on the web explaining what causes this and how best to fix it, so I'm looking at what I can monitor in order to get clues. I'm not sure if it's hardware, just a load issue, file size, frequency, etc. With Windows perfmon, what can I monitor on the file server side? There's the "Files Open" object; any other good ones? What can I monitor on the web server side? EDIT: I'll add that the UNC path uses the IP address of the file server, not a name to resolve. Also, the share is a single flat directory with over 100K files.


  • EventID 1058 Code 5, Sysvol is subdir of Sysvol - how to fix?

    - by nulliusinverba
    I have been trying to resolve this error, like many others:

        The processing of Group Policy failed. Windows attempted to read the file
        \\domain.local\SysVol\domain.local\Policies\{3EF90CE1-6908-44EC-A750-F0BA70548600}\gpt.ini
        from a domain controller and was not successful. Group Policy settings may not be
        applied until this event is resolved. This issue may be transient and could be
        caused by one or more of the following:
        a) Name Resolution/Network Connectivity to the current domain controller.
        b) File Replication Service Latency (a file created on another domain controller
           has not replicated to the current domain controller).
        c) The Distributed File System (DFS) client has been disabled.
        Error code: 5 = Access Denied.

    The incredibly helpful post is this one: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/2003_Server/A_1073-Diagnosing-and-repairing-Events-1030-and-1058.html. Quoting from that post:

        HERE IS A LIST OF POTENTIAL PROBLEMS THAT CAN LEAD TO 1030 AND 1058 EVENT ERRORS:
        -- Sometimes the permissions of the file folders that contain Group Policies
           (the Sysvol folder) can be corrupted.
        -- Sometimes you have problems with NetBIOS.
        -- Sometimes the GPO itself is corrupt, or you have a partial set of data for that GPO.
        -- Sometimes you may have problems with File Replication Services, which almost
           always indicates a problem with DNS.
        -- Sysvol may be a subfolder of itself: Sysvol/Sysvol

    I have the last problem listed, where sysvol is a subfolder of sysvol. The directory structure is:

        -sysvol
           -domain
           -staging
           -staging areas
           -sysvol            (shared as "\\server\sysvol")
              -domain.local
                 -ClientAgent
                 -Policies
                 -scripts

    Interestingly, the second sysvol folder is the one that is shared as "\\server\sysvol". This makes me confident this is the issue behind the permissions and error code 5. Also interestingly, my Server 2008 R2 servers can see it fine; my Server 2008 servers cannot, and get the error. This is consistent across all my servers. This latter fact makes me uncertain what I need to do to fix this up. Do I, e.g., simply move the shared sysvol folder up a level to replace the non-shared one? Any help greatly appreciated. Cheers, Tim.


  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:

    1. Write a shell script that would be run via sudo and tail the last 512 KB of log into a separate file that can be read by the application. That's inefficient, because it forks a new process and the data has to be read twice.
    2. Add www-data to the adm group (which can read logs). That's insecure.
    3. Start a PHP process via cron every minute to read the logs. That's not very good, because it doesn't allow real-time monitoring; also, the script runs even when I'm not reading logs and consumes CPU time (the server is in the cloud, and I'll have to pay for it).
    4. Create hardlinks to all the log files with lowered permissions. I guess that won't work, because logrotate could recreate the log files, and then their inode numbers would change.
    5. Start a separate nginx/Apache server under a privileged user that may read the logs.

    Maybe anyone got a better solution?
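    On the sudo idea, a hedged sketch of how the rule could be scoped so www-data can run exactly one fixed command and nothing else (file name and paths are illustrative):

        # /etc/sudoers.d/phplogviewer -- hypothetical entry; edit with visudo.
        # sudo matches the arguments literally, so only this exact invocation
        # is permitted (-c 524288 = last 512 KB of the file).
        www-data ALL=(root) NOPASSWD: /usr/bin/tail -c 524288 /var/log/syslog

        # The PHP side could then call, e.g.:
        #   shell_exec('sudo /usr/bin/tail -c 524288 /var/log/syslog');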


  • Node.js Build failed: -> task failed (error#2)?

    - by Richard Hedges
    I'm trying to install Node.js on my CentOS server. I run ./configure and it runs perfectly fine. I then run the 'make' command and it produces the following:

        [5/38] libv8.a: deps/v8/SConstruct -> out/Release/libv8.a
        /usr/local/bin/python "/root/node/tools/scons/scons.py" -j 1 -C "/root/node/out/Release/"
          -Y "/root/node/deps/v8" visibility=default mode=release arch=ia32 toolchain=gcc
          library=static snapshot=on
        scons: Reading SConscript files ...
        ImportError: No module named bz2:
          File "/root/node/deps/v8/SConstruct", line 37:
            import js2c, utils
          File "/root/node/deps/v8/tools/js2c.py", line 36:
            import bz2
        Waf: Leaving directory `/root/node/out'
        Build failed:
         -> task failed (err #2):
            {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    I've done some searching on Google, but I can't seem to find anything to help. Most of what I've found is for Cygwin anyway, and I'm on CentOS 4.9. Like I said, ./configure went through perfectly fine with no errors, so there's nothing there that I can see.

    EDIT: I've got a little further. Now I just need to upgrade g++ to version 4 (or higher). I tried yum update gcc but no luck, so I tried yum install gcc44, which resulted in no luck either. Has anyone got any ideas as to how I can update g++?
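    A hedged note on the first failure: the ImportError means the Python at /usr/local/bin/python lacks the bz2 module, which usually traces back to the bzip2 headers being absent when that Python was compiled. A sketch, assuming that Python was built from source:

        # Assumption: /usr/local/bin/python was compiled locally from source.
        yum install bzip2-devel               # headers for Python's bz2 module
        cd /usr/local/src/Python-2.x          # hypothetical source path
        ./configure && make && make install
        /usr/local/bin/python -c 'import bz2' # should now succeed silently

        # For the g++ question: on CentOS 4 the parallel-installable newer
        # compiler may be packaged as gcc4/gcc4-c++ (gcc44 is the CentOS 5
        # package name), so this could be worth a try:
        yum install gcc4 gcc4-c++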


  • Find and Replace String in filenames

    - by shekhar
    I have thousands of files with no specific extensions. What I need to do is search for a string in the filename and replace it with another string, then search for a second string and replace it with another string, and so on; i.e., I have multiple strings to replace with other multiple strings. It may be like:

        "abc" in filename replaced with "def"    * string "abc" may be in many files
        "jkl" in filename replaced with "srt"    * string "jkl" may be in many files
        "pqr" in filename replaced with "xyz"    * string "pqr" may be in many files

    I am currently using an Excel macro to get the file names into Excel, preserving the original names in one column and building the desired names in another column. Then I create a batch file of renames, like:

        rename Path\OriginalName1 NewName1
        rename Path\OriginalName2 NewName2

    The problem with this procedure is that it takes a lot of time, as there are many files, and as I am using Excel 2003 there is a limitation on the number of rows as well. I need a script in batch like:

        replacestr abc with def
        replacestr pqr with xyz

    in a single directory. Would it be better to do this in a Unix script?
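    If a Unix shell is available, a minimal sketch using bash parameter expansion; the string pairs are the ones from the question, and further substitution lines can be added the same way:

        #!/bin/bash
        # Run inside the single directory holding the files.
        for f in *; do
            new=${f//abc/def}      # replace every "abc" in the name
            new=${new//jkl/srt}
            new=${new//pqr/xyz}
            # Only rename when something actually changed.
            [ "$f" != "$new" ] && mv -- "$f" "$new"
        done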


  • How to avoid sshfs freezing?

    - by Andreas Hagen
    So the issue is this: I've installed sshfs on Ubuntu 12.04 and I'm trying to connect to a couple of remote servers. Initially the mount seems successful; sometimes Gnome even picks it up and displays the "new device found" box at the bottom of the screen. But from here on, not much works. Or at least not any more: the first couple of times I connected it seemed to work fine, and I was able to transfer some files. Then I disconnected using fusermount -u <folder>, and after reconnecting a little later the trouble started. Now, after executing

        sshfs -o ServerAliveInterval=15 -o reconnect -C -o workaround=all -o idmap=user root@<host>:/ <folder>

    the shell just freezes when I change directory into the mount point. Strangely, ls -al <folder> works when listing just the root of the remote system, but nothing more. Also, every file explorer I've tried freezes just like cd <folder>. To me it seemed like there was some kind of zombie thread or something hanging around my system, given that it did work the first time, so I have tried rebooting, but no luck. sshfs -V gives this:

        SSHFS version 2.3
        FUSE library version: 2.8.6
        fusermount version: 2.8.6
        using FUSE kernel interface version 7.12

    So yeah, any ideas?
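    When a FUSE mount hangs like this, a hedged cleanup sketch (the folder path is a placeholder) is to lazy-unmount it and kill any leftover sshfs process before mounting again:

        # Lazy unmount: detaches the mount point even while it is busy or hung.
        fusermount -u -z /path/to/folder
        # Kill any sshfs process still holding the dead SSH connection.
        pkill -f 'sshfs.*/path/to/folder'
        # Then retry the mount; -o ServerAliveInterval=15 (as in the question)
        # helps sshfs notice dead connections instead of blocking forever.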


  • Office365 Exchange: Cannot open two shared calendars in Outlook

    - by Mark Williams
    The problem: Outlook won't open the calendars in another user's mailbox and a room mailbox, even when users have permission. Note: this problem is affecting more than one account on more than one machine. So I have a room mailbox and a personal mailbox on Exchange, both with shared calendars. There is a security group called "Scheduling Users" that has editor rights on both of these calendars. The room mailbox was created using PowerShell, per the instructions posted online (http://help.outlook.com/140/ee441202.aspx). Sharing worked on both of these folders initially, and users can still access these folders using OWA. So on to the problem. When users try to open these calendars in Outlook, they receive one of the following messages:

        The set of folders cannot be opened. Microsoft Exchange is not available.
        Either there are network problems or the Exchange server is down for maintenance.

        Cannot open this item. Cannot open the free/busy information.
        The attempt to log on to Microsoft Exchange has failed.

    What I have tried so far:

    - Resetting the permissions on both of the mailboxes: I deleted the security group permissions on both mailboxes, applied the change, then waited a bit and gave the permissions back.
    - Deleting the OST file of the shared calendar from the Outlook data directory.

    That is all I have been able to find online. Any thoughts? I have been going back and forth with the Office365 support folks for a while, and they seem stumped too.


  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machines, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory:

        server:~$ sudo /usr/sbin/rpcinfo -p localhost
           program vers proto   port
            100000    2   tcp    111  portmapper
            100000    2   udp    111  portmapper
            100024    1   udp    910  status
            100024    1   tcp    913  status
            100021    1   udp  53391  nlockmgr
            100021    3   udp  53391  nlockmgr
            100021    4   udp  53391  nlockmgr
            100021    1   tcp  32774  nlockmgr
            100021    3   tcp  32774  nlockmgr
            100021    4   tcp  32774  nlockmgr
            100007    2   udp    830  ypbind
            100007    1   udp    830  ypbind
            100007    2   tcp    833  ypbind
            100007    1   tcp    833  ypbind
            100011    1   udp    999  rquotad
            100011    2   udp    999  rquotad
            100011    1   tcp   1002  rquotad
            100011    2   tcp   1002  rquotad
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100005    1   udp   1013  mountd
            100005    1   tcp   1016  mountd
            100005    2   udp   1013  mountd
            100005    2   tcp   1016  mountd
            100005    3   udp   1013  mountd
            100005    3   tcp   1016  mountd

        server$ cat /etc/exports
        /dir *.my.domain.com(ro)

        client$ grep dir /etc/fstab
        server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0

    All seems well, but when I try to mount, I see the following:

        client$ sudo mount /dir
        mount.nfs: access denied by server while mounting server.my.domain.com:/dir

    And on the server I see:

        server$ tail /var/log/messages
        Mar 15 13:46:23 server mountd[413]: authenticated mount request from
          client.my.domain.com:723 for /dir (/dir)

    What am I missing here? How should I be debugging this?
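    A hedged debugging sketch using stock NFS utilities; the usual suspects for "authenticated ... but access denied" are a stale export table or a hostname/reverse-DNS mismatch between what mountd sees and what /etc/exports allows:

        # On the server: show what the kernel is actually exporting right now.
        exportfs -v
        cat /var/lib/nfs/etab        # the effective, expanded export table
        exportfs -ra                 # re-sync exports after editing /etc/exports

        # From the client: ask mountd what it would offer this host.
        showmount -e server.my.domain.com

        # Check that the client's address reverse-resolves to a name matching
        # *.my.domain.com, since the export is granted by hostname pattern.
        host 192.0.2.10              # placeholder client IP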


  • Instructions to setup primary and only domain controller

    - by Robert Koritnik
    Where could I get the best step-by-step instructions (with some simple explanations) for setting up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need. Do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it... This is my HW config:

    - Linksys internet router with DHCP
    - my dev machine is Windows 7
    - my DC machine is a VM in my dev machine
    - my dev machine has a hw network adapter to the Linksys and a virtual network adapter to the DC
    - the DC machine has two network adapters: one to the Linksys (to be internet connected so it can be updated etc.) and one to the host (my dev Win7 machine)

    Edit: My development machine should access the domain controller and log on using domain credentials. The development machine would access the internet directly via the Linksys router. My domain controller machine would only serve authentication (and, if I'm able to configure it right, should also have Active Directory Federation Services in a workable condition). I hope this is a bit clearer now. At least a small bit.


  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop and open a terminal there, I still see the default umask 022. Same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component is responsible) sources some different setting than a console login does, and damned if I can tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set the umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)
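    A hedged sketch of one commonly cited approach on Debian-family systems: PAM's pam_umask module, which applies to any PAM session, graphical logins included (assuming the display manager goes through PAM, as the stock ones on Mint/Ubuntu do):

        # /etc/pam.d/common-session -- append this line (a sketch, not gospel):
        session optional pam_umask.so umask=002

        # After the next login, graphical sessions should inherit it:
        #   $ umask
        #   0002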


  • Cannot copy files after installing Windows

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition under Xubuntu that I kept some files on. Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all files and folders and clicked Copy, then I went to the HDD and clicked Paste, but nothing happened. I cannot do it, and I do not know why: I copy the files, and wherever I click Paste, nothing happens. If I try to copy the files and folders one by one, I can copy some of them, but some of them do not move. The other problem I have is that I cannot open some files, in particular PDF files. When I click on PDF files I get this error:

        There was an error opening this document. This file cannot be found.

    Also, I cannot play some mp4 files, and I cannot open some jpg and txt files. I get this error:

        The directory name is invalid.

    So, in summary, after removing Xubuntu and installing Windows 7, I have the following problems with one of the NTFS partitions on my internal drive:

    1. Cannot copy or cut all folders and files from that partition to any other partition; I also do not get any errors.
    2. Can copy some folders and files.
    3. Cannot access some pdf, jpeg, txt and mp4 files, and get the above errors.

    I should also mention that I did not change anything on this partition during the installation or while formatting the other partitions.


  • setting up tracd behind mod_proxy?

    - by FilmJ
    I'm having trouble setting up mod_proxy and tracd. Almost all the search results for this problem take me to the built-in trac documentation page that mentions it as an option. I have several VirtualHosts already running on the box in question, so running tracd on port 80 or 443 is not an option, but I do want to make my trac server accessible on this machine without exposing an additional port via the firewall. Making things even more complicated, I have multiple trac repositories being served by the same instance of tracd, so I want to set it up so that http://trac.abc.com is proxied to localhost:8000/projects/abcproject and http://trac.def.com is proxied to localhost:8000/projects/defproject. Currently, the setup I have below results in 100% 403 errors. The server is running as www-data, the directory where all trac files are stored is owned by www-data, AND tracd (as shown below) is running as www-data, so I'm not sure where it's getting hung up.

    The relevant configuration in /etc/apache2/sites-enabled/trac.abc.com:

        ProxyPass / http://localhost:8000/abcproject
        ProxyPassReverse / http://localhost:8000/abcproject

    The relevant configuration in /etc/apache2/sites-enabled/trac.def.com:

        ProxyPass / http://localhost:8000/defproject
        ProxyPassReverse / http://localhost:8000/defproject

    The command used to instantiate tracd:

        tracd -a defproject,/var/www/vhosts/trac-common/users.htdigest,DEFProject \
              -a abcproject,/var/www/vhosts/trac-common/users.htdigest,ABCProject \
              -p 8000 -b localhost -e /var/www/vhosts/trac-common/projects

    If I access the site at http://localhost:8000/ everything works fine, but if I try to access via any of the proxied hosts I end up with 403 at every turn. I've used mod_proxy successfully as described above for other servers, such as CouchDB, so maybe this has to do with the headers sent by tracd?
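    A hedged observation on the config as posted: ProxyPass joins the matched prefix and the target string literally, so without a trailing slash a request for /login would be forwarded as /abcprojectlogin. A sketch of the variant usually suggested (paths as in the question):

        # Sketch for trac.abc.com; mirror it for trac.def.com.
        ProxyPass        / http://localhost:8000/abcproject/
        ProxyPassReverse / http://localhost:8000/abcproject/

        # If the 403s persist, confirm Apache itself is not denying proxying
        # (Apache 2.2 syntax):
        # <Proxy *>
        #     Order deny,allow
        #     Allow from all
        # </Proxy>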


  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb   : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log    : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb    : The process cannot access the file because it is being used by another process.

    My questions:

    1. Should I be doing a manual backup of DHCP via command-line tools, or maybe with MMC (Action > Backup), before I run my backup?
    2. Is the %SystemRoot%\System32\DHCP\Backup directory always kept up to date? (That directory does get backed up by the backup software.)

    I'm answering my own question here, but the registry value is set to 3c, i.e. 60 minutes, I believe:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterval

    This is not the backup software included with Windows. It is another product, but I have seen this with every backup software I've ever used.
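    For the first question, a hedged sketch of the usual command-line route (netsh has had a DHCP context since Server 2003; the destination paths are illustrative):

        :: Export the full DHCP configuration and leases to a file.
        netsh dhcp server export C:\Backups\dhcp-config.txt all

        :: Or trigger the same backup the service writes on its 60-minute timer:
        netsh dhcp server backup C:\Backups\dhcp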


  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders onto a DMZ server, from which external clients can download them; protocol SFTP, Linux, OpenSSH. To ease administration we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the appropriate ownership and modes, and define a home directory in passwd so that the working directory of the user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
                           /user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by giving the user folders (user1, user2) the users' primary group plus setting SETGID on the folders, the users are able even to delete the files (which is necessary). Only problem: because /home/sftp/ is the chroot base dir, the users can change up a directory and see other users' folders, though they cannot change into them because of access rights. Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home dir. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
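    A hedged sshd_config sketch of one commonly used layout: chroot each download user into their own folder via a Match block (OpenSSH expands %u in ChrootDirectory; every path component up to and including the chroot must be root-owned and not group/world-writable, so the writable area lives one level down):

        # Sketch only; the group name "sftpusers" is hypothetical.
        Subsystem sftp internal-sftp

        Match Group sftpusers
            ChrootDirectory /home/sftp/uploader/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no

        # Inside each chroot: a root-owned top level with a writable subdir, e.g.
        #   /home/sftp/uploader/user1        root:root   0755
        #   /home/sftp/uploader/user1/files  user1:user1 writable (uploads land here)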


  • poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96GB RAM, and a single local hard drive of 278GB divided into 4 partitions, formatted with GPT:

        1. c: - 40GB; 3GB free
        2. d: - 40GB; 37GB free
        3. e: - 198.32GB; 198.1GB free
        4. 100MB (EFI system partition)

    The other is a pizza-box server with 4 cores, 8GB RAM, and a single local hard drive of 273GB divided into 3 partitions, formatted with MBR:

        1. c: - 136.81GB; 20GB free
        2. d: - 88.74GB; 87.91GB free
        3. e: - 47.85GB; 46.91GB free

    I have two scripts: the first creates 20,000 files in one directory, each file 192KB in size; the second deletes the folder (recursively) and prints how much time it took to delete all the files. The problem is that on the first server (the blade) it takes about 2 minutes to delete all 20,000 files, while on the second (the pizza box) it takes about 4 seconds!? Both servers run a clean Windows Server 2008 R2 with no special applications running in the background. Any ideas what is going on?


  • Java website on Tomcat, PHP website on Apache - how to get PHP web pages into Java web pages?

    - by Venkat
    We have a Java web application deployed on Tomcat. We also set up Apache and mod_proxy_ajp to route web requests (port 80/443) to Tomcat. We would like to deploy a PHP application on the same Apache server, probably under a subdirectory (/var/www/ourapp). Now we would like to access and display web pages from the PHP application within web pages generated by the Java application. We are planning to implement single sign-on as well. Example: a web page from Java has jQuery tabs, and we would like to display a PHP web page within a tab while all other HTML comes from the Java application. Can you please give an overall picture of how to proceed with this? Mainly:

    1. How should we install/set up our PHP application on the same Apache server that is used to route web requests to Tomcat? I.e., set up a subdomain or install in a subdirectory?
    2. How do we bring the PHP pages into the present web pages (generated by Java)? Can we use AJAX requests, or should we go for applications such as the Java PHP Bridge / Quercus?

    Thank you for your time in advance. Regards.
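    A hedged Apache sketch of the subdirectory route: exclude the PHP path from the AJP proxy before forwarding everything else to Tomcat (paths and the AJP port are illustrative, and mod_php or equivalent is assumed for the PHP side):

        # Serve /ourapp from the filesystem as PHP... (Apache 2.2 syntax)
        Alias /ourapp /var/www/ourapp
        <Directory /var/www/ourapp>
            Order allow,deny
            Allow from all
        </Directory>

        # ...and exclude it from proxying; exclusions must precede the catch-all.
        ProxyPass /ourapp !
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

        # Since both apps then share one origin, the Java pages can pull PHP
        # fragments into a jQuery tab via plain AJAX, e.g.:
        #   $('#tab').load('/ourapp/fragment.php');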


  • Execute encrypted files but don't let anybody read them.

    - by Stebi
    I want to provide a virtual machine image with an installed web application. The user should be able to boot the VM (not log in, just boot), and a webserver should start automatically. The point is that I want to hide the (Ruby) source code of the web application from everyone, as there is no obfuscator for Ruby. I thought I could use file system encryption to encrypt the directory with the source code (or even a whole partition). But the webserver user must be able to read it automatically after booting. Nobody is allowed to log in as the webserver user (or any other user), so no one else can read the contents. My questions are now: Is this possible? Since I give away the whole VM, everybody could mount its virtual discs and read them (except the encrypted one). Is it then possible to find the key the webserver user needs to decrypt the files and decrypt them manually? Or is it safe to give such a VM away? The problem is that everything needed to decrypt the files must be included somewhere in the VM, or else the webserver cannot start automatically. Maybe I'm completely wrong, and you have another tip for securing the source code.


  • The Story of secure user-authentication in squid

    - by Isaac
    Once upon a time, there was a beautiful warm virtual jungle in South America, and a squid server lived there. Here is a conceptual image of the network:

        <the Internet>
              |
              |
           A  |  B
        Users <---------> [squid-Server] <---> [LDAP-Server]

    When the users request access to the Internet, squid asks their name and passport, authenticates them via LDAP, and if LDAP approves them, grants it. Everyone was happy until some sniffers stole passports on the path between the users and squid [path A]. This disaster happened because squid used the Basic-Authentication method. The people of the jungle gathered to solve the problem. Some bunnies offered to use the NTLM method. Snakes preferred Digest-Authentication, while Kerberos was recommended by the trees. After all this, many solutions were offered by the people of the jungle, and everyone was confused! The Lion decided to end the situation. He shouted the rules for solutions:

    - The solution shall be secure!
    - The solution shall work with most browsers and software (e.g. download software).
    - The solution shall be simple and shall not need another huge subsystem (like a Samba server).
    - The method shall not depend on a special domain (e.g. Active Directory).

    Then a very reasonable, comprehensive, clever solution was offered by a monkey, making him the new king of the jungle! Can you guess what the solution was? Tip: the path between squid and LDAP is protected by the lion, so the solution does not have to secure it. Note: sorry if the story is boring and messy, but most of it is real! Thanks!

        [ASCII-art monkey]



  • CMD/ADB - Autorun script to search, copy, and paste a file from android system to flash drive

    - by Outride
    I've looked around and can't find anything that answers my question. This is my first question, so any tips or thoughts are welcome, as well as an answer :p As explained in the title, I want to create a script that launches, finds a file on an Android phone, copies it, and pastes it to a flash drive. As of right now, it's a mix of multiple tutorials plus trial and error, and I'm at the point of giving up. I have a flash drive loaded with three scripts, as follows (bold = name of file):

    file.bat

        @echo off
        :: variables
        /min
        SET odrive=%odrive:~0,2%
        set backupcmd=xcopy /s /c /d /e /h /i /r /y
        echo off
        %backupcmd% "C:\Users\Outride\Desktop\kikDatabase.db" "%drive%\all"
        @echo off
        cls

    invisible.vbs

        CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

    launch.bat

        wscript.exe \invisible.vbs file.bat

    So far, I had to use Android Commander, manually go through the directory, find /data/data/kik.android/databases, and copy kikDatabase.db to my desktop. Then I run this script. Yes, I'm trying to pull the database to copy all my email contacts. I use launch.bat, which then makes file.bat invisible thanks to the invisible.vbs script. What would I need to do now to have the file searched for and copied to the flash drive? Thanks in advance; I'll be glad to answer any questions if there are any :p Just remember that I'm not exactly a tech expert haha.

    EDIT: Cleared junk of prior edits. New: I now have a .bat script to recognize which drive the USB is on, and to launch py_cmd (adb shell). This is the current script:

    pull.bat

        @echo off
        :: variables
        SET odrive=%odrive:~0,2%
        set launching=start "%drive%\Minimal ADB and Fastboot\py_cmd"
        echo off
        %launching%

    So how could I make the .bat (or a new script) type the following into the adb terminal: "adb pull /data/data/kik.android/databases/ %drive%\All\Database"? Please help! I've been racking my brain over this all night :3
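    A hedged batch sketch: rather than typing into an interactive adb shell, adb can be invoked directly with the pull command. This assumes adb.exe sits in the "Minimal ADB and Fastboot" folder on the drive and that the phone actually permits adb access to that path (e.g. it is rooted):

        @echo off
        :: Hypothetical pull.bat; %~d0 expands to the drive this script runs from.
        set drive=%~d0
        set adb="%drive%\Minimal ADB and Fastboot\adb.exe"

        :: Pull the database straight to the flash drive; no interactive shell needed.
        %adb% pull /data/data/kik.android/databases/kikDatabase.db "%drive%\All\Database\"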


  • Can't get DNS alias to work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use a DNS alias to point one of my domains at a specific directory on the server. Here is what I've done. I changed the IP address in the domain settings, and that works:

        $ ping www.example.com
        PING example.com (124.205.62.xxx): 56 data bytes
        64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms
        64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms
        ^C
        --- example.com ping statistics ---
        2 packets transmitted, 2 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms

    I added the sites-available and sites-enabled entries:

        $ ls -l /etc/apache2/sites-available/
        total 16
        -rw-r--r-- 1 root root  948 2010-04-14 03:27 default
        -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl
        -rw-r--r-- 1 root root  365 2010-06-09 18:27 example.com

        $ ls -l /etc/apache2/sites-enabled/
        total 0
        lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default
        lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com

    But it doesn't work, and when I open www.example.com in the browser, it shows a 111 error:

        The following error was encountered:
        Connection to 124.205.62.48 Failed
        The system returned: (111) Connection refused

    Here is example.com's config:

        $ cat /etc/apache2/sites-enabled/001-example.com
        <VirtualHost *:80>
            DocumentRoot "/vhosts/example.com/htdocs/"
            ServerName www.example.com
            ServerAlias example.com
            <Location />
                Order Deny,Allow
                Deny from None
                Allow from all
            </Location>
            #Include /etc/phpmyadmin/apache.conf
            ErrorLog /vhosts/example.com/logs/error.log
            CustomLog /vhosts/example.com/logs/access.log combined
        </VirtualHost>

    Could you please tell me how to solve this?
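    "(111) Connection refused" means nothing was listening on port 80 at that address, rather than a vhost mismatch, so a hedged first check is whether Apache is actually running and bound (commands match Ubuntu 10.04-era tooling):

        # Is the config valid, and did Apache (re)start after the edits?
        sudo apache2ctl configtest
        sudo /etc/init.d/apache2 restart

        # Is anything listening on port 80?
        sudo netstat -tlnp | grep ':80'

        # Ensure the site was enabled the supported way, then reload:
        sudo a2ensite example.com && sudo /etc/init.d/apache2 reload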

