Search Results

Search found 55010 results on 2201 pages for 'system security'.


  • Install system-wide PEAR on Debian Lenny

    - by artvolk
    Good day! I've installed PEAR on Debian Lenny using apt-get install php-pear; it was installed in /usr/share/php. When I try to install anything using pear install <package>, a PEAR folder is created under the current user's home directory and a separate copy of PEAR is installed there. I ended up installing a local copy of PEAR for one of the users, as described here: http://kuziel.info/log/archives/2006/04/01/Installation-of-local-PEAR-repository Is there any way to tell pear to install packages into the system-wide repository in /usr/share/php? What is the recommended way of using a system-wide PEAR copy? Thanks in advance!
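
    One likely fix, sketched below, assumes the per-user install is caused by a stale ~/.pearrc overriding the system-wide configuration:

        # Check which install path pear is actually using
        pear config-get php_dir

        # A per-user ~/.pearrc overrides the system config; removing it
        # (and installing as root) falls back to /usr/share/php
        rm ~/.pearrc
        sudo pear config-get php_dir    # should now report /usr/share/php
        sudo pear install <package>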

    Read the article

  • Strict security and virtual host isolation with Nginx?

    - by Hach-Que
    I currently have an Apache web server set up under which each virtual host is isolated using HTTPD-ITK and the AppArmor module. Each virtual host's workers are setuid/setgid by the server and are then placed in an AppArmor profile. I'm looking to move to Nginx, but I can't find any documentation on setting it up so that worker processes are per virtual host (and thus can be setuid/setgid) rather than shared between all virtual hosts. Is there any way to do this under Nginx?
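
    Nginx has no per-virtual-host worker model built in, so a common workaround (sketched below with hypothetical paths and ports) is one nginx instance per site, each running under its own user, behind a front-end proxy:

        # One config per site, each with its own "user" directive,
        # listening on a distinct loopback port or unix socket
        sudo nginx -c /etc/nginx/site-a.conf    # user site-a; listen 127.0.0.1:8081
        sudo nginx -c /etc/nginx/site-b.conf    # user site-b; listen 127.0.0.1:8082

        # A front-end instance on port 80 then proxy_pass-es by Host header,
        # and each backend instance can carry its own AppArmor profile.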

    Read the article

  • Does pointing *.[int].mydomain.com to 192.168.1.[int] constitute a security threat

    - by Dave
    For testing purposes, I've found it's really useful to point whatever.machineIP.mydomain.com to 192.168.1.machineIP: that way we can test each other's code without fiddling with hosts files. I'm aware that this exposes our local IP addresses to the outside world, but if someone could access the network, it'd be trivial to sniff out which of the local IP addresses respond on port 80 anyway. Is there anything I'm not seeing? Credit for the idea: http://news.ycombinator.com/item?id=1168896
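
    For reference, a sketch of the BIND zone records this scheme implies (names and numbers are hypothetical; dnsmasq or any other DNS server works equally well):

        cat >> /etc/bind/db.mydomain.com <<'EOF'
        ; anything.42.mydomain.com resolves to 192.168.1.42, and so on
        *.42    IN A 192.168.1.42
        *.43    IN A 192.168.1.43
        EOF
        rndc reload mydomain.com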

    Read the article

  • Linux ssh -X graphical applications will not start when system load is high

    - by Chrisv
    So I am using ssh -X to access a server: I am at a Xubuntu desktop, accessing an Ubuntu server that is in the next room. Usually everything works fine, but when the system load gets high, any graphical applications I have freeze and fail to restart. This happens even if the process causing the high load has been niced to a low priority with "nice -n 19". And even though the system load is high, the command line works fine with no delay, and other applications I have running on the server (e.g. virtual machines) run fine. But any graphical application running through X dies. When the graphical applications fail they usually give an error message that suggests a time-out. It seems that something connected to X has a low priority and times out. But what is it, and how does one fix it?
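
    One hedged guess: "nice" only lowers CPU priority, and X clients are very sensitive to disk stalls, so throttling the heavy job's I/O as well may help (big_job is a placeholder; ionice needs the CFQ scheduler):

        # Deprioritise both CPU and disk access for the heavy job
        ionice -c3 nice -n 19 big_job     # -c3 = "idle" I/O class

        # And keep the forwarded X connection itself from timing out
        ssh -X -o ServerAliveInterval=30 user@server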

    Read the article

  • 64bit or 32bit Linux system?

    - by Milan Babuškov
    I have a server that has 4GB of RAM. On it, I have an installation of 32bit Slackware Linux 12.1. Of course, it is not using all 4GB of RAM. I'd like to increase the RAM to 8GB soon, and am looking for a way for the system to use it. The system is used as a database server and is under high load during the day. AFAICT, I have two options: stay with 32bit, rebuild the kernel and lose some performance; or go with 64bit and reinstall everything. Looking at 64bit versions of Slackware, I could run -current or Slamd64. Now, on to the questions: Should I stay with 32bit or go with 64bit? If I go 64bit, should I use -current or Slamd64? P.S. I hope to get answers from someone actually using any of these configurations in production, not just a copy/paste of something I could find myself via Google.
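
    For the 32bit route, a sketch of what "rebuild the kernel" amounts to in practice (assuming a PAE-capable CPU):

        # Check that the CPU supports PAE (Physical Address Extension)
        grep -q pae /proc/cpuinfo && echo "PAE supported"

        # Then rebuild the 32bit kernel with high-memory support:
        #   Processor type and features  ->
        #     High Memory Support (64GB)      [CONFIG_HIGHMEM64G=y]
        # The kernel can then address 8GB, but each single process is
        # still confined to a 32bit address space.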

    Read the article

  • RemoteApp Security Warning

    - by nairware
    I have a Windows 2012 Standard x64 RemoteApps RDWeb portal where I can launch apps. We have one remote app in particular which is RDP (mstsc.exe). Whenever a user launches it, they receive three different prompts, the second of which is this security warning. How can I get rid of this alert? I have other RemoteApps launching as well, and they do not throw errors or alerts like this one, and they are applications with the .exe extension, so I do not understand what is so unique about the RDP RemoteApp that causes this alert. One thing perhaps worth mentioning: this particular RDP remote app points directly at the mstsc.exe executable residing on a particular session host/terminal server (this is the "From" value shown in the warning). As such, a gateway server is not being used to load-balance and pick the RDP client from a session host at random; this RDP RemoteApp is explicitly associated with one particular terminal server.

    Read the article

  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    - by Paul J. Warner
    I am having an issue with a program where after 6 minutes, give or take 5 seconds, we get the above exception. Some more info from the exception stack traces is below. This all happens pretty religiously: 6 minutes go by, and bam, the following 3 exceptions. We have the application installed in 2 other environments and it is working fine there. I am hoping to find some server settings, either in IIS 6 or Server 2003, that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers; I am hoping that the information I have provided may help a little bit.

        208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive.
            at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
            at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
            at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result)
            at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
            at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
            at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
            at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
            at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
            at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
            at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
            at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
            at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
            at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
            at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
            at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host
            at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1

    Read the article

  • How do I recover from a Linux CentOS 4.6 Operating System Crash

    - by Greg Omebije
    Our x86 Linux server running CentOS 4.6 has crashed. The machine boots only to the GRUB prompt. We have tried using "rescue mode" to recover the system, but it hasn't worked. How can we fix this problem so that the machine boots normally? Failing that, how can we fix it to the point where we can recover our files from the server? Our Linux server configuration: Dell PowerEdge 1950, Intel Xeon, 2 HDD (146GB each), 4GB RAM, hardware and software RAID setup, CentOS 4.6. We used SystemRescueCd to boot the computer; the following is the output of fdisk -l:

        Disk /dev/sda: 293.3 GB, 292326211584 bytes
        255 heads, 63 sectors/track, 35539 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00000080

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1          13      104391   83  Linux
        /dev/sda2              14       17769   142625070   8e  Linux LVM
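
    Since /dev/sda2 is an LVM physical volume, here is a sketch of getting at the files from the rescue environment (the volume-group and LV names below are the usual CentOS defaults, so treat them as assumptions until lvscan reports the real ones):

        # Activate whatever LVM volumes the rescue kernel can see
        lvm vgscan
        lvm vgchange -ay
        lvm lvscan                       # lists e.g. /dev/VolGroup00/LogVol00

        # Mount the root LV read-only and copy the data off
        mkdir -p /mnt/sysroot
        mount -o ro /dev/VolGroup00/LogVol00 /mnt/sysroot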

    Read the article

  • Log application changes made to the system

    - by Maxim Veksler
    Hello, Windows 7, 64bit here. I have an application which I don't trust but still need to run. I would like to run the installer of this application, and later the installed executable, under some kind of "strace" for Windows which will record what the application did to the system. Mainly: What files have been created/edited? What registry changes have been made? To what network hosts did the application try to communicate? Ideally I would also be able to generate an "UNDO" action to undo all the changes. Please don't suggest full virtualization solutions such as VirtualBox, VMware and co., because the application should run on the host system (a "sandbox" approach will OTOH be accepted, IMHO). Do you know of any such utility I can use? Thank you, Maxim.

    Read the article

  • Linux's best filesystem for working with tens of thousands of files without overloading the system I/O

    - by mhambra
    Hi all. It is known that certain AMD64 Linuxes are subject to becoming unresponsive under heavy disk I/O (see Gentoo forums: AMD64 system slow/unresponsive during disk access (Part 2)); unfortunately I have such a one. I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but which FS should I choose for it? Requirements:

        * for journaling, performance is preferred over safe data read/write operations
        * optimized for reading/writing tens of thousands of small files

    Candidates:

        * ext2 without any journaling
        * BtrFS

    In Phoronix tests, BtrFS demonstrated good random-access performance (far better than XFS, so it may also be less CPU-aggressive). However, the unpacking operation seems to be faster with XFS there, yet in my own testing unpacking a kernel tree to XFS makes my system react 51% slower, regardless of any renice'd processes and/or schedulers. Why no ReiserFS? Google'd this (q: reiserfs ext2 cpu): "1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ..." Is it the same now?
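
    A sketch of creating a small-file-tuned ext2 for the portage trees (the device name is hypothetical; the "small" usage type ships in /etc/mke2fs.conf and raises the inode density):

        # ext2, no journal, tuned for many small files
        mkfs.ext2 -T small /dev/sdb1

        # Mount without atime updates to cut write traffic during emerges
        mount -o noatime /dev/sdb1 /usr/portage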

    Read the article

  • Proxification rule for the System process

    - by kseen
    I'm trying to configure Microsoft Visual Studio 2010 remote debugging and ran into an issue: while connecting to the remote computer running MSVSMON, the client computer sends a SYN request for the connection, but it does so under the System process (as I see it in TCPView). As every network app should be configured to use the proxy on our network, I tried adding devenv.exe to the proxification rules to make its traffic go through the LAN's proxy server. It doesn't help. So my question is: how can I make that low-level system traffic go through the local area network proxy server?

    Read the article

  • Cannot find my hard disk while installing Linux - "No root file system defined" error

    - by Syam Kumar S
    I am trying to install Linux on my computer (tried Ubuntu 10.04 and Linux Mint 9). I started the installation wizard, and on the hard disk selection page no hard disk is displayed. I have a 500GB disk with 5 partitions and Windows 7 Ultimate in one partition. If I click the Forward button, it shows the error "No root file system defined". I have tried to install by booting from CD and from a pendrive, but both show the same error. When I load Linux as a live CD it doesn't show the hard disk either. My hard disk works fine in Windows 7. System config: Intel i3 2100, 500GB HDD, 2GB RAM.
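
    One frequent cause of exactly this symptom is leftover fake-RAID metadata on the disk, which makes the installer hide it. A hedged sketch of checking for it from the live CD (the erase step touches only the RAID metadata, but double-check before running it):

        # Does the live CD kernel see the disk at all?
        sudo fdisk -l

        # Look for stale BIOS/fake-RAID metadata from a previous setup
        sudo dmraid -r

        # If dmraid reports metadata you know is obsolete, erasing it
        # usually makes the disk reappear in the partitioner
        sudo dmraid -rE /dev/sda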

    Read the article

  • File system loop detected in /var/named/chroot/var/named/

    - by Iko
    The problem started with a "No space left on device" message. After investigating a little (with Google's help) I found:

        find: File system loop detected; `/var/named/chroot/var/named' is part of the same file system loop as `/var/named'.

    What I don't know is what to do next. I found this on centos.org: "... and see if the inode numbers are the same (they shouldn't be). If they are then you need to remove the /var/named/chroot/var/named/ hard link and recreate it as a directory." The inode numbers are the same, but I don't know exactly which folder to delete and what to do next. Thank you for any help.

        Linux xxxxx.onlinehome-server.info 2.6.32-220.13.1.el6.x86_64 #1 SMP Tue Apr 17 23:56:34 BST 2012 x86_64 x86_64 x86_64 GNU/Linux
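
    Before deleting anything, it is worth telling the two cases apart; on EL6 the bind-chroot package normally makes /var/named/chroot/var/named a bind mount rather than a hard link, and a mount must be unmounted, not removed. A sketch:

        # Same inode on both paths confirms the loop
        ls -di /var/named /var/named/chroot/var/named

        # If it is really a bind mount (the usual bind-chroot setup),
        # it shows up here; unmount it instead of deleting anything
        mount | grep /var/named/chroot
        umount /var/named/chroot/var/named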

    Read the article

  • System-install-packages missing in RedHat Enterprise Linux 5

    - by Kumar P
    I am using RedHat Enterprise Linux 5.1. When I used Add/Remove Software in the Applications menu, I think I wrongly uninstalled something, so after a reboot that menu item is missing. I also can't use system-install-packages in a terminal. When I double-click an RPM package it opens as an archive, and when I specifically open it as a software installer (via Open With Other Application) it gives an error that /usr/bin/system-install-packages is missing. Help me to solve this problem...
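
    On RHEL 5, /usr/bin/system-install-packages is shipped by the pirut package, so a sketch of the repair (assuming the machine is registered with RHN or has the install media available):

        # Confirm which package owns the missing file
        rpm -qf /usr/bin/system-install-packages
        rpm -q pirut

        # Reinstall it from the registered channel...
        yum install pirut

        # ...or straight from the install DVD
        rpm -ivh /media/cdrom/Server/pirut-*.noarch.rpm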

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to vagrant; would someone who knows more about it (and puppet) be able to explain how vagrant deals with the SSL certs needed when making vagrant testing machines that process the same node definition as the real production machines? I run puppet in master/client mode, and I wish to spin up a vagrant version of my puppet production nodes, primarily to test new puppet code against. If my production machine is, say, sql.domain.com, I spin up a vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner, and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same puppet node definition. On the puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the vagrant machine and the real one get that node entry on the puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in puppet's autosign.conf, so the vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say, unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old puppet ssl cert off of it and then re-running puppet with a given node name of sql.domain.com? The new ssl cert would be autosigned by puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the sql machine?! One solution I can think of is to avoid autosigning, and put the known puppet ssl cert for the real production machine into the vagrant shared directory, and then have a vagrant ssh job move it into place. The downside of this is that I end up with all my ssl certs for each production machine sitting in one git repo (my vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: How do other people deal with vagrant & puppet ssl certificates for development or testing clones of production machines?
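
    A sketch of the pre-seeded-cert variant described above, but with a per-clone certificate so no production key ever leaves the puppet master (paths are Puppet 2.x defaults; all names are hypothetical):

        # On the puppet master: generate and sign a cert for the clone's
        # own name instead of enabling autosign for *.vagrant.domain.com
        puppet cert generate sql.vagrant.domain.com

        # Ship only that keypair via the Vagrant shared folder, then move
        # it into place from a shell provisioner inside the VM
        sudo cp /vagrant/ssl/sql.vagrant.domain.com.cert.pem \
            /var/lib/puppet/ssl/certs/sql.vagrant.domain.com.pem
        sudo cp /vagrant/ssl/sql.vagrant.domain.com.key.pem \
            /var/lib/puppet/ssl/private_keys/sql.vagrant.domain.com.pem

    With autosign off, a rooted box cannot obtain a signed cert for another node's name in the first place.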

    Read the article

  • Understanding Security Certificates (and their pricing)

    - by John Robertson
    I work at a very small company, so certificate costs need to be absolutely minimal. However, for some applications we do need our customers to get that warm fuzzy not-using-a-self-signed-certificate feeling. Since creating a "certificate authority" with makecert really just means creating a public/private key pair, it seems pretty clear that creating a public/private key pair FROM such a "certificate authority" really just means generating a second public/private key pair and signing both with the private key that belongs to the "certificate authority". Since the keys are signed, anyone can verify they came from the certificate authority I created; or, if Verisign gave me the pair, they sign it with one of their own private keys, and anyone can use Verisign's corresponding public key to confirm Verisign as the source of the keys. Given this, I don't understand why, when I go to Verisign or GoDaddy, they have rates only for yearly plans, when all I really want from them is a single public/private key pair signed with one of their private keys (so that anyone else can use their public keys to confirm that, yes, they gave me that public/private key pair and they confirmed I was who I said I was, so you can trust my public/private key pair as belonging to a legitimate third party). Clearly I am misunderstanding something; what is it? Does Verisign retire their public/private key pairs periodically, so that my Verisign-signed key pair "expires" and I need new ones? Edit: I learned that the certificate has an internal expiration date, and that it also maintains an internal value stating whether it can be used to sign other certificates (i.e. sign other private/public key pairs stored as certificates). Can't I get a few (even one) non-signing certificates signed by someone like Verisign that I can use for authentication/encryption without a yearly subscription?
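
    For reference, the signing relationship described above can be reproduced end to end with openssl (a self-made CA; all filenames are hypothetical):

        # Create a "certificate authority": a key pair plus a self-signed cert
        openssl req -new -x509 -days 3650 -newkey rsa:2048 -nodes \
            -keyout ca.key -out ca.crt -subj "/CN=My Tiny CA"

        # Create a server key pair and a certificate signing request for it
        openssl req -new -newkey rsa:2048 -nodes \
            -keyout server.key -out server.csr -subj "/CN=www.example.com"

        # The CA signs the server cert; note the expiry baked in right here,
        # which is the knob the commercial CAs charge for by the year
        openssl x509 -req -days 365 -in server.csr \
            -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt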

    Read the article

  • CentOS 6 - Make system aware of custom lib paths and missing base links

    - by Mike Purcell
    I am trying to compile libmemcached (1.0.7) on CentOS 6, and keep getting the following warning:

        ... checking for event.h... no
        configure: WARNING: Unable to find libevent
        ...

    I manually compiled libevent (2.0.19) and built it using the following configure line:

        OPTIONS="--prefix=/usr/local/_custom/app/libevent"

    Everything compiled and installed fine, but I couldn't figure out how to make the system aware that the lib files are in the custom /usr/local/_custom/app/libevent/lib dir. I stumbled upon an article and read that I can make the system aware of custom lib paths by adding a file to the /etc/ld.so.conf.d/ directory:

        # /etc/ld.so.conf.d/customApp.conf
        /usr/local/_custom/app/libevent/lib

    Then I issued the ldconfig command and was able to confirm that libevent was included, by issuing this command:

        ldconfig -p | ack -i libevent

    Seeing that libevent was now included in the ldconfig output, I figured I would be able to compile libmemcached and satisfy the aforementioned warning. Unfortunately it did not work. So I took another look at the ldconfig output and noticed this:

        libevent_pthreads-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_pthreads-2.0.so.5
        libevent_openssl-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_openssl-2.0.so.5
        libevent_extra-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_extra-2.0.so.5
        libevent_core-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent_core-2.0.so.5
        libevent-2.0.so.5 (libc6,x86-64) => /usr/local/_custom/app/libevent/lib/libevent-2.0.so.5

    There are no references to the base links. For example, I would expect to see links to these (ls -la /usr/local/_custom/app/libevent/lib):

        libevent.so -> libevent-2.0.so.5.1.7
        libevent_openssl.so -> libevent_openssl-2.0.so.5.1.7
        libevent_core.so -> libevent_core-2.0.so.5.1.7

    So either I am doing something wrong, or the system still does not know where to look to find libevent.so.

    -- Update #1 --

    I wasn't able to get libmemcached to compile without the warning notice, even after trying to compile using the following configure command:

        ./configure --prefix=/usr/local/_custom/app/libmemcached CFLAGS="-I/usr/local/_custom/app/libevent/include" LDFLAGS="-L/usr/local/_custom/app/libevent/lib"

    I thought for sure this would work, because I am directly passing the include and lib directories to the configure command. But it did not.
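
    A hedged guess at the missing piece: the unversioned *.so development links are what configure's link test looks for, and ldconfig never creates those (it only maintains the soname links). If make install didn't create them either, they can be added by hand, using the real file versions listed above:

        cd /usr/local/_custom/app/libevent/lib
        ln -s libevent-2.0.so.5.1.7 libevent.so
        ln -s libevent_core-2.0.so.5.1.7 libevent_core.so
        ln -s libevent_openssl-2.0.so.5.1.7 libevent_openssl.so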

    Read the article

  • SOHO Netflix and network security

    - by TW
    I want to use Wi-Fi for hi-def video, but I don't trust it for my office PCs. I've heard of VLANs, but I have no idea how to set one up or what (SOHO) hardware to buy. Other than getting 2 different DSL lines, how can I be absolutely sure that the PC side doesn't get hacked? What if I want to use MS Home Server as a backup device for both sides? Can I make it "read only" for the PC side, and physically change the cable if I need to restore? TW

    Read the article

  • Ubuntu issues when moving hard disk to new system

    - by Tim
    I'm working on a legacy project with a small single-board computer running Ubuntu 10.04 on a compact flash card. I need to be able to save away a working image (via dd) and copy said image to other compact flash cards for use in other single-board computers (with identical hardware). I'm able to copy the image to other flash cards and boot up on other systems, no problem. But I'm seeing strange behavior. For instance, I can't use sudo on the new system ("sudo: must be setuid root"). I've gone down the path of trying to fix this, but have run into a slew of other issues. The general question is: what do I need to be aware of when moving a hard disk containing Ubuntu (in my case a compact flash card) to another computer? I was hoping it would be seamless to Ubuntu, since it's moving to a system with identical hardware. Is there something that needs to be done to make it "portable"?
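
    A raw dd image preserves ownership and setuid bits, so when sudo breaks on the copy the usual suspects are the copy method (e.g. a file-level copy somewhere in between) or a mangled filesystem. A sketch of restoring the bit from the recovery-mode root shell, which needs no sudo (stock Ubuntu paths):

        mount -o remount,rw /
        chown root:root /usr/bin/sudo
        chmod 4755 /usr/bin/sudo        # restore the setuid-root bit
        ls -l /usr/bin/sudo             # should now show -rwsr-xr-x root root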

    Read the article

  • RTF File Opens as Read Only from Document Management System (Does not happen for all users)

    - by Dave
    We have a third party system in place that, as one part of its duties, hosts RTF files that a user can open, make changes to, and save back into the document management system. Recently we have begun upgrading users to Office 2007 from 2003. We are now hearing that when some users open these documents, they open as Read Only (even though there is no document protection in place and the files are set for Unrestricted Access). Other users, though, who also have Word 2007, report no problems. There were no problems for anyone when Word 2003 was being used. I'm sure it's a setting in Word, but I'm having a lot of difficulty identifying where the issue could be. Looking for any assistance on why these RTF files open as Read Only for some and not for others when using Word 2007. Thanks! Dave

    Read the article

  • Restoring the owners on Debian system files

    - by Vlad
    Due to my inattention, tiredness (and probably stupidity) I've run chown -R someuser:someuser / and now all your base are belongs to us: every file on the server belongs to one user (lol). After a system restart apache, bind9, mysql, and a dozen other applications don't start and fill their log files with permission errors. I haven't made any backups of system files, only of the DB and the website files... Please suggest some ways to revive my web server. I have only 2 months' experience with Linux, so please keep it simple...
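
    There is no clean undo without backups, but a first-aid pass along these lines is a common suggestion; it hand-fixes the worst offenders and then leans on package reinstalls for the rest (the service users shown are the usual Debian ones, so verify them on your box):

        # The classics that break outright under wrong ownership
        chown root:root /usr/bin/sudo && chmod 4755 /usr/bin/sudo
        chown -R root:root /etc /bin /sbin /usr /var/lib/dpkg
        chown -R mysql:mysql /var/lib/mysql
        chown -R bind:bind /var/cache/bind

        # Reinstalling every package restores packaged ownership for most files
        dpkg --get-selections | awk '$2 == "install" {print $1}' \
            | xargs apt-get install --reinstall -y

    A few paths (e.g. /etc/ssl/private) may still need per-package fixes afterwards.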

    Read the article
