Search Results

Search found 56909 results on 2277 pages for 'error messages for'.

Page 636 of 2277

  • Anti-spam measures for websites

    - by acidzombie24
    What anti-spam measures should I consider before launching my user-content website? Some things I have considered:
      - Silent JavaScript-based CAPTCHA on the register page (I do not have an implementation)
      - Validate emails by forcing a confirmation link/number
      - Allow X comments per 10 minutes and Y per 2 hours (I am considering excited first-time users who want to experience the site)
      - Disallow links until the user is trusted (I am not sure how a user will become trusted)
      - Run all comments, messages, etc. through a spam filter
      - Check whether messages are duplicates or similar (I may not bother with this; I'd like the system to be strong without it)
    I also timestamp everything, which I can then retrieve as a long on my administrator page. What other measures can I take or consider?

    Read the article

  • Install VLC 2.0.7 in CentOS 6.4?

    - by raaz
    I keep failing in the installation process. I started the process as follows:

        yum install gcc dbus-glib-devel* lua-devel* libcddb
        wget http://download.videolan.org/pub/videolan/vlc/2.0.7/vlc-2.0.7.tar.xz
        tar -xf vlc-2.0.7.tar.xz && cd vlc-2.0.7
        ./configure

    In the configure step I am getting errors as follows:

        configure: WARNING: No package 'libcddb' found: CDDB access disabled. checking for Linux DVB version 5... yes checking for DVBPSI... no checking gme/gme.h usability... no checking gme/gme.h presence... no checking for gme/gme.h... no checking for SID... no configure: WARNING: No package 'libsidplay2' found (required for sid). checking for OGG... no configure: WARNING: Library ogg >= 1.0 needed for ogg was not found checking for MUX_OGG... no configure: WARNING: Library ogg >= 1.0 needed for mux_ogg was not found checking for SHOUT... no configure: WARNING: Library shout >= 2.1 needed for shout was not found checking ebml/EbmlVersion.h usability... no checking ebml/EbmlVersion.h presence... no checking for ebml/EbmlVersion.h... no checking for LIBMODPLUG... no configure: WARNING: No package 'libmodplug' found No package 'libmodplug' found. checking mpc/mpcdec.h usability... no checking mpc/mpcdec.h presence... no checking for mpc/mpcdec.h... no checking mpcdec/mpcdec.h usability... no checking mpcdec/mpcdec.h presence... no checking for mpcdec/mpcdec.h... no checking for libcrystalhd/libcrystalhd_if.h... no checking mad.h usability... no checking mad.h presence... no checking for mad.h... no
        configure: error: Could not find libmad on your system: you may get it from http://www.underbit.com/products/mad/. Alternatively you can use --disable-mad to disable the mad plugin.
        [root@localhost vlc-2.0.7]#

    So I went to the libmad download location and downloaded it, but while doing make it gave me errors. There are no errors at ./configure with libmad, but make does not go through:

        [root@localhost libmad-0.15.0b]# make
        (sed -e '1s|.*|/*|' -e '1b' -e '$s|.*| */|' -e '$b' \ -e 's/^.*/ *&/' ./COPYRIGHT; echo; \ echo "# ifdef __cplusplus"; \ echo 'extern "C" {'; \ echo "# endif"; echo; \ if [ ".-DFPM_INTEL" != "." ]; then \ echo ".-DFPM_INTEL" | sed -e 's|^\.-D|# define |'; echo; \ fi; \ sed -ne 's/^# *define *\(HAVE_.*_ASM\).*/# define \1/p' \ config.h; echo; \ sed -ne 's/^# *define *OPT_\(SPEED\|ACCURACY\).*/# define OPT_\1/p' \ config.h; echo; \ sed -ne 's/^# *define *\(SIZEOF_.*\)/# define \1/p' \ config.h; echo; \ for header in version.h fixed.h bit.h timer.h stream.h frame.h synth.h decoder.h; do \ echo; \ sed -n -f ./mad.h.sed ./$header; \ done; echo; \ echo "# ifdef __cplusplus"; \ echo '}'; \ echo "# endif") >mad.h
        make all-recursive
        make[1]: Entering directory `/home/raja/Downloads/libmad-0.15.0b'
        make[2]: Entering directory `/home/raja/Downloads/libmad-0.15.0b'
        if /bin/sh ./libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DFPM_INTEL -DASO_ZEROCHECK -Wall -march=i486 -g -O -fforce-mem -fforce-addr -fthread-jumps -fcse-follow-jumps -fcse-skip-blocks -fexpensive-optimizations -fregmove -fschedule-insns2 -fstrength-reduce -MT version.lo -MD -MP -MF ".deps/version.Tpo" \ -c -o version.lo `test -f 'version.c' || echo './'`version.c; \ then mv -f ".deps/version.Tpo" ".deps/version.Plo"; \ else rm -f ".deps/version.Tpo"; exit 1; \ fi
        mkdir .libs
        gcc -DHAVE_CONFIG_H -I. -I. -I. -DFPM_INTEL -DASO_ZEROCHECK -Wall -march=i486 -g -O -fforce-mem -fforce-addr -fthread-jumps -fcse-follow-jumps -fcse-skip-blocks -fexpensive-optimizations -fregmove -fschedule-insns2 -fstrength-reduce -MT version.lo -MD -MP -MF .deps/version.Tpo -c version.c -fPIC -DPIC -o .libs/version.lo
        cc1: error: unrecognized command line option "-fforce-mem"
        make[2]: *** [version.lo] Error 1
        make[2]: Leaving directory `/home/raja/Downloads/libmad-0.15.0b'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/home/raja/Downloads/libmad-0.15.0b'
        make: *** [all] Error 2

    How can I resolve this issue and install VLC on my CentOS 6.4 machine? Thank you.
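
    A workaround sometimes suggested for this build failure (not from the original post, so treat it as an untested sketch): -fforce-mem was removed in newer GCC releases, so stripping it from libmad's build flags lets the old tarball compile.

        # Remove the obsolete optimisation flag from libmad's configure script,
        # then reconfigure and rebuild (run inside the libmad source directory).
        sed -i 's/-fforce-mem//g' configure
        ./configure && make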

    Read the article

  • Failed to import SLES11 to SCOM2012

    - by siyang
    I tried to import my SLES11 host into SCOM 2012, but it failed. Below are my steps and the error message:
      1. Import the SLES11 management pack.
      2. DNS-resolve both the SCOM and SLES11 hosts.
      3. Install the SCOM agent and certificate.
      4. Generate the certificate on SLES11, import it into SCOM, use scxcertconfig -sign to re-sign it, and copy it back to SLES11 (I restarted scxadmin on SLES11 and rebooted the SCOM server).
      5. On SCOM, try to discover the SLES11 host, but it failed with the error message below.

        Message: Certificate signing was not successful.
        Detials: Standard Output: Failed to start child process '/etc/init.d/scx-cimd' errno=13
        RETURN CODE: 1
        Standard Error:
        Exception Message:
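
    For what it's worth, errno 13 is EACCES (permission denied); a hedged first check on the SLES11 side, not from the original post, is whether the agent's init script is present and executable:

        # On the SLES11 host: confirm the script exists and is executable,
        # and that the cross-platform agent package is actually installed.
        ls -l /etc/init.d/scx-cimd
        rpm -qa | grep -i scx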

    Read the article

  • 7zip: how to extract to std output?

    - by Jason S
    I have 7z 4.65 and am trying to extract a single file to standard output. The 7z command-line help says -so is the parameter for extracting to standard output, but when I try this:

        >>> 7z e -so dist\dlogpkg.jar META-INF/MANIFEST.MF
        7-Zip 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Error: I won't write data and program's messages to same terminal

    How can I fix this? There doesn't seem to be a command-line parameter to suppress the normal 7z stdout messages. (Edit: the equivalent operation in "unzip" would be unzip -p dist\dlogpkg.jar META-INF/MANIFEST.MF, which works fine. But I'd like to use 7z for various reasons.)
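
    A hedged note, based only on the wording of that error: 7z refuses when the extracted data and its status messages would share the same terminal, so redirecting or piping stdout usually makes -so usable.

        rem Redirect the extracted file to disk...
        7z e -so dist\dlogpkg.jar META-INF/MANIFEST.MF > MANIFEST.MF
        rem ...or pipe it straight into another command.
        7z e -so dist\dlogpkg.jar META-INF/MANIFEST.MF | more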

    Read the article

  • Can't create a registry key under Eventlog and I am in administrators group

    - by Tony_Henrich
    I am troubleshooting an installer problem where it's giving an error writing to a registry key. So when I use the Registry Editor (regedit) to create the same key under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog, I get a "Cannot create key: Error writing to the registry". Looking at the permissions, administrators have full access and I am a member of that group. I can create entries in other areas in the registry. When I try to take ownership, I see my name already listed. But then I get an error "Unable to set new owner on Eventlog. Insufficient system resources exist to complete the requested service". I tried after a new reboot. I turned off my firewall (Comodo). Why can't I create a new key when I am an admin and permissions indicate I have full control?

    Read the article

  • How to make cron run on OSX 10.6.2?

    - by Radek
    Note: this question is not about how to edit a crontab, but how to make cron work. I edited my crontab using env EDITOR=joe crontab -e and entered:

        1 * * * * echo 'test' > /Users/radek/Backup/rationalvmware/test.txt

    It does nothing, although the cron job is set up correctly: checked via Cronnix, and I viewed the crontab in /var/cron/tabs. Editing the crontab using Cronnix gives me the same results. If I run echo 'test' > /Users/radek/Backup/rationalvmware/test.txt manually, it creates the file as expected, so I assume the command I give to cron is correct. Is there anything special I have to do to make cron work on OSX? How can I check whether cron is running? What's the equivalent of /var/log/messages on OSX? On SuSE I can see in messages that cron works.
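
    A few checks, offered as a hedged sketch assuming 10.6 defaults (not from the original post): cron's output goes to the system log rather than /var/log/messages, and cron itself is supervised by launchd.

        # Is the cron daemon actually running?
        ps aux | grep '[c]ron'
        sudo launchctl list | grep -i cron      # should list com.vix.cron
        # The closest equivalent of /var/log/messages on OS X:
        grep cron /var/log/system.log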

    Read the article

  • Problems with vim/locale as non-root user on Solaris

    - by Lyle
    I do some work on a Solaris 10 machine, and my .vimrc is set up to show Unicode characters for tabs and line endings:

        set listchars=tab:?\ ,eol:¬

    This works out of the box on my OS X machine. On Linux as well as Solaris I get the following error when I start vim:

        Error detected while processing /home/lhanson/.vimrc:
        line 17:
        E474: Invalid argument: listchars=tab:?~V?\ ,eol:¬

    I solved this on my Linux box by setting LANG=en_US.utf8 ('locale -a' shows this as being an option). On Solaris, however, 'locale -a' shows only the following: C, POSIX, iso_8859_1. Setting LANG to C or POSIX yields the same error, and even though iso_8859_1 probably wouldn't work, it doesn't successfully change the locale anyway. As a non-root user, is there any way I can have my Unicode characters show up?

    Read the article

  • Exim queue in WHM

    - by Xobb
    Hi fellas, I've got a CentOS server with WHM. The mail server is Exim. I need Exim to put all messages in the queue and not send them directly, so I added the queue_only option to the Exim configuration, and messages are collected in the queue now. Afterwards I found out that someone is calling exim -q to process the queue every once in a while. I found the following cron job, which I believe has been used to process the Exim queue:

        0 6 * * * /scripts/exim_tidydb > /dev/null 2>&1

    I also suspect that script was installed alongside WHM. I commented it out and was expecting everything to work just fine, but that didn't happen: I still get the Exim queue processed once in a while. Am I missing anything? What may cause my Exim queue to be processed? Here is cat /etc/exim.conf | grep queue:

        queue_only
        deliver_queue_load_max = 3

    Thanks
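
    A couple of hedged checks, not from the original post, for finding what is still running the queue:

        # Look for a long-running exim started with a queue-runner interval
        # (e.g. "exim -bd -q1h"), which cPanel/WHM setups commonly use:
        ps aux | grep '[e]xim.*-q'
        # And search every crontab for other exim invocations:
        grep -r exim /etc/cron* /var/spool/cron 2>/dev/null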

    Read the article

  • PostgreSQL won't start anymore

    - by Sander Declerck
    Today my PostgreSQL doesn't start anymore on my Windows machine. I tried to start the service in Windows Services and got the following error:

        Windows could not start the PostgreSQL Database Server 8.3 service on Local Computer.
        Error 1053: The service did not respond to the start or control request in a timely fashion.

    Then I went to the command line to manually start C:/Program Files (x86)/PostgreSQL/8.3/bin/psql.exe, and got this error:

        psql: Could not connect to server: Connection refused (0x0000274D/10061)
        Is the server running on host "???" and accepting TCP/IP connections on port 5432?

    Edit: I found this in the logs:

        2011-04-22 13:13:16 CEST LOG: could not receive data from client: No connection could be made because the target machine actively refused it.
        2011-04-22 13:13:16 CEST LOG: unexpected EOF on client connection

    Read the article

  • What is the harm in giving developers read access to application server application event logs?

    - by Jim Anderson
    I am a developer working on an ASP.NET application. The application writes logging messages to the Windows event log - a custom application log just for this application. However, I do not have any access to testing or staging web/application servers. I thought an admin could just give me read access to this event log to help in debugging problems (currently a service that is working in dev is not working in test environment and I have no idea why) but that is against my client's (I'm a consultant) policy. I feel silly to keep asking an admin to look at the event log for me. What is the harm in giving developers read access to application server application event logs? Is there a different method of application logging that sysadmins prefer programmers use? Surely, admins don't want to be fetching logging messages for developers all the time.

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
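
    For reference, a minimal sketch of inspecting and raising the limit at runtime (hedged: raising it only buys time if something really is leaking handles, it doesn't fix the leak):

        cat /proc/sys/fs/file-nr                             # allocated, free, maximum
        sysctl -w fs.file-max=2500000                        # raise the ceiling immediately
        echo 'fs.file-max = 2500000' >> /etc/sysctl.conf     # persist across reboots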

    Read the article

  • "Request not supported" in IPCONFIG (WinXP SP3)

    - by pablog
    On a customer PC (Windows XP SP3), the network suddenly went down: the network adapter appears with an error mark. I replaced the network card, but the new one does the same thing. When I enter IPCONFIG, XP shows this error (in both standard and safe mode):

        Internal error occurred
        Request not supported
        Unable to query host name

    If I start the system with a boot CD the PC runs fine, so the problem seems to be in the XP installation. I tried:
      - uninstalling and reinstalling the network card in the Device Manager
      - disabling and re-enabling the card
      - netsh int ip reset
      - netsh winsock reset catalog
      - a couple of "reset" programs (WinsockxpFix.exe, etc.)
    with no luck. Is there any way to fix it without reinstalling XP? TIA, Pablo

    Read the article

  • How do I get Bugzilla to authenticate with Active Directory LDAP?

    - by user65712
    After reading this guide and trying a ton of permutations based on that, is there an easy way to get Bugzilla working with an AD server? I keep getting the error: 80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db0 I created an AD "bugzilla" user account with "Account Operators" permission as directed. I'm not sure if the error is saying that my login is incorrect or the system login to access LDAP is incorrect. Maybe I just missed an arcane option somewhere in the settings. You'd think all I'd need to do is specify the server name. As you might have been able to tell, I don't have a lot of LDAP experience. Also, will the Sysinternals LDAP tool help here?
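
    One hedged way to tell the two apart (data 52e in an AD bind failure generally means the bind credentials were rejected): test the service account's bind directly with ldapsearch. The host name, DNs, and user below are placeholders, not values from the original post.

        ldapsearch -x -H ldap://dc.example.com \
          -D "bugzilla@example.com" -W \
          -b "dc=example,dc=com" "(sAMAccountName=testuser)"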

    Read the article

  • Why is Mac OS X 10.6 using /usr/lib to start Apache when I compiled PHP using /opt/local/lib?

    - by Anthony
    PHP 5.3.3 compiled on Mac OS X 10.6 is using /usr/lib when trying to start Apache, rather than the /opt/local/lib specified when PHP was configured. Why is it trying to load from /usr/lib when I told configure not to?

        httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/libphp5.so into server: dlopen(/usr/libexec/apache2/libphp5.so, 10): Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /usr/libexec/apache2/libphp5.so
          Reason: Incompatible library version: libphp5.so requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0

    The error message above refers to /opt/local/lib, but when I run otool -LD /opt/local/lib/libiconv.2.dylib I get:

        /opt/local/lib/libiconv.2.dylib:
        /opt/local/lib/libiconv.2.dylib (compatibility version 8.0.0, current version 8.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.0.0)

    It shows a different version than the one httpd is erroring out with. I have a feeling I need to recompile Apache using newer libraries, but the error message still doesn't make much sense to me.
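
    A quick check in the same spirit as the otool output above (a diagnostic sketch, not a fix): compare the copy of libiconv that lives in /usr/lib, and confirm which libiconv path the freshly built module actually references.

        # The system copy that the dynamic linker may be resolving first:
        otool -L /usr/lib/libiconv.2.dylib
        # What libphp5.so itself was linked against:
        otool -L /usr/libexec/apache2/libphp5.so | grep libiconv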

    Read the article

  • Installing VMware ESXi 4.0 on Dell 1950: Cannot open vmkboot.gz

    - by rlandster
    I am trying to install VMware ESXi 4.0.0 U1 on a Dell 1950 server via a bootable CD-ROM. I keep getting this error right at the start: Cannot open vmkboot.gz I checked that the CD-ROM drive is not to blame by installing Debian Etch using that drive. I tried several different versions of ESXi (3.5, 3.5 Dell edition, 4.0 Dell edition) and they all give me an error at the same place. I also tried installing from a USB "thumb" drive but got the same error. I checked with the VMware HCL (Hardware Compatibility List) and the Dell 1950 is listed as being compatible. Here are some server details: Two 1.6 GHz Xeon 5110 CPUs (ID: 06-0F-B) BIOS version 2.2.6 Any ideas on what might be the issue?

    Read the article

  • Cannot connect puppet agent to puppet master

    - by u123
    I have installed Puppet 3.3.1 on a Debian 7 machine (test-puppet-master) and the Puppet agent on another Debian 7 machine (test-puppet-agent / 192.11.80.246) acting as a client. I start the master with:

        puppet master --verbose --no-daemonize

    And I start the agent with:

        puppet agent --server=test-puppet-master --no-daemonize --verbose
        Notice: Did not receive certificate

    which gives the following output on the master:

        Notice: Starting Puppet master version 3.3.1
        Error: Could not resolve 192.11.80.246: no name for 192.11.80.246
        Info: Inserting default '~ ^/catalog/([^/]+)$' (auth true) ACL
        Info: Inserting default '~ ^/node/([^/]+)$' (auth true) ACL
        Info: Inserting default '/file' (auth ) ACL
        Info: Inserting default '/certificate_revocation_list/ca' (auth true) ACL
        Info: Inserting default '~ ^/report/([^/]+)$' (auth true) ACL
        Info: Inserting default '/certificate/ca' (auth any) ACL
        Info: Inserting default '/certificate/' (auth any) ACL
        Info: Inserting default '/certificate_request' (auth any) ACL
        Info: Inserting default '/status' (auth true) ACL
        Info: Not Found: Could not find certificate test-puppet-agent
        Error: Could not resolve 192.11.80.246: no name for 192.11.80.246
        Info: Not Found: Could not find certificate test-puppet-agent
        Error: Could not resolve 192.11.80.246: no name for 192.11.80.246
        Info: Not Found: Could not find certificate test-puppet-agent

    Any ideas why the agent cannot connect?
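
    A hedged sketch of the usual first step for the "no name for <ip>" error (not from the original post): make sure the master can resolve the agent's address, an /etc/hosts entry is enough for a test, then sign the agent's certificate. Commands assume Puppet 3.x defaults.

        # On test-puppet-master, add a name for the agent's address:
        echo '192.11.80.246   test-puppet-agent' >> /etc/hosts
        # After the agent retries, list and sign its certificate request:
        puppet cert list
        puppet cert sign test-puppet-agent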

    Read the article

  • Installing List Component on SharePoint Server

    - by Tom
    I added the SharePoint site to the 'Document Management' section in CRM with the List Components checked, and it was added with no problem. Also, when I navigate to the 'Documents' section under an account it shows up in the List Components format. However, if I click on 'New' or 'Actions' I get the following error message:

        An error has occurred in the script on this page.
        Error: Access is denied
        URL: https://*serveraddress*/crmgrid/scripts/crmmenu.htc
        Do you want to continue running scripts on this page?

    I have run the PowerShell script which added the MIME type for the .htc extension to IIS. Does anyone know what might be wrong?

    Read the article

  • shibboleth: tomcat failing to start IdP listener

    - by HorusKol
    I have installed a Shibboleth Identity Provider as per http://www.edugate.ie/workshop-guides/shibboleth-2-identity-provider-installation-linux-debian-or-ubuntu. However, testing only gave me a 404 from Tomcat, and when I checked the Tomcat logs I saw that the IdP listener was not starting:

        10/01/2011 11:25:31 AM org.apache.catalina.startup.HostConfig deployDescriptor
        INFO: Deploying configuration descriptor idp.xml
        10/01/2011 11:25:32 AM org.apache.catalina.core.StandardContext start
        SEVERE: Error listenerStart
        10/01/2011 11:25:32 AM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/idp] startup failed due to previous errors

    The IdP descriptor file has the following context:

        <Context docBase="/opt/shibboleth-idp/war/idp.war"
                 privileged="true"
                 antiResourceLocking="false"
                 antiJARLocking="false"
                 unpackWAR="true" />

    I have confirmed that the WAR file is located where the Context above specifies, as I have found similar issues from other people where the WAR file was not found. However, the logs posted by those people indicate that the descriptor file was correctly read by Tomcat and their problem was with the WAR file itself. I'm assuming this is some kind of syntax error in the idp.xml, but cannot determine what it might be. Also, setting the Tomcat logging level to FINEST does not provide any additional information in the logs for this error.
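
    A hedged place to look, since "Error listenerStart" hides the underlying exception: the IdP's own log and Tomcat's catalina.out. The paths below assume the /opt/shibboleth-idp install location from the guide and a Debian-style Tomcat 6 package; adjust as needed.

        tail -n 100 /opt/shibboleth-idp/logs/idp-process.log
        tail -n 100 /var/log/tomcat6/catalina.out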

    Read the article

  • How do I give MacPorts privileges?

    - by cojadate
    I tried to install the PostgreSQL server development libraries using MacPorts and got the following:

        Warning: MacPorts running without privileges. You may be unable to complete certain actions (e.g. install).
        --->  Computing dependencies for postgresql-server-devel
        --->  Dependencies to be installed: postgresql-devel
        --->  Building postgresql-devel
        Error: Target org.macports.build returned: shell command failed
        Error: The following dependencies failed to build: postgresql-devel
        Error: Status 1 encountered during processing.
        To report a bug, see <http://guide.macports.org/#project.tickets>

    So I guess that means I need to run MacPorts with privileges and try again. Unfortunately I've no idea how to give MacPorts privileges. I'm running OS X 10.6.3.
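
    For context, a minimal sketch of the usual fix (not from the original post): MacPorts installs into /opt/local, so port generally needs to run as root via sudo.

        # Clean up the earlier failed build, then retry with privileges.
        sudo port clean postgresql-devel
        sudo port install postgresql-server-devel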

    Read the article

  • iptables rule to submit packets matching a specific negative rule

    - by Aditya Sehgal
    I am using netfilter_queue to pick up certain packets from the kernel and do some processing on them. To the netfilter queue I need to send all packets from a particular source except UDP packets with src port 2152 & dst port 2152. I try to add the iptables rule as:

        iptables -A OUTPUT ! s 192.168.0.3 ! -p udp ! --sport 2905 ! --dport 2905 -j NFQUEUE --queue-num 0

    iptables throws up an "Invalid argument" error. Querying dmesg, I see the following error printed:

        ip_tables: udp match: only valid for protocol 17

    I have tried the following variation, with the same error thrown:

        iptables -A OUTPUT ! s 192.168.0.3 ! -p udp --sport 2905 --dport 2905 -j NFQUEUE --queue-num 0

    Can you please advise on the correct usage of the iptables command for my case?
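
    One hedged way to express "everything from this source except that UDP flow", since --sport/--dport cannot be combined with a negated protocol match: let the excluded traffic fall through first, then queue the rest. Ports follow the attempts above; adjust to 2152 if that is the real port.

        # The excluded flow matches the first rule and is left alone;
        # everything else from the source goes to the NFQUEUE.
        iptables -A OUTPUT -s 192.168.0.3 -p udp --sport 2905 --dport 2905 -j ACCEPT
        iptables -A OUTPUT -s 192.168.0.3 -j NFQUEUE --queue-num 0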

    Read the article

  • Installing GNU scientific library and linking to programme

    - by jack
    I am trying to install a statistical program which requires the GNU Scientific Library (GSL). I have successfully installed GSL through the yum command, but my statistical program gives an error when I try to run make install. I think there is a linking problem. How can I solve it?

        $ sudo yum install gsl.x86_64
        Installed: gsl.x86_64 0:1.15-3.fc16
        Dependency Installed: atlas.x86_64 0:3.8.4-1.fc16
        $ tar -xvzf prog.tgz
        $ cd prog
        $ make
        $ gcc -O3 -Wall -Wshadow -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVER32 -I/opt/local/include/ -L/opt/local/lib/ -c -o prog.o prog.c
        In file included from prog.c:16:0:
        prog.h:7:30: fatal error: gsl/gsl_sf_gamma.h: No such file or directory
        compilation terminated.
        make: *** [prog.o] Error 1
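
    A hedged sketch of the likely missing piece (not from the original post): on Fedora the GSL headers ship in the gsl-devel package, and gsl-config reports the compile/link flags actually valid on the system, which may not be /opt/local at all.

        sudo yum install gsl-devel
        gsl-config --cflags --libs      # e.g. -I/usr/include  -lgsl -lgslcblas -lm
        # then point the program's Makefile at those flags instead of /opt/local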

    Read the article

  • I cannot install Flash Player, I am getting a 1603 exit code

    - by Naz
    I am trying to install Flash Player silently, using a PowerShell script, but I do not think it is being installed. When I look under "Control Panel - Uninstall Programs" I don't see Flash Player listed there. Also, I am printing the exit code for the process, and it prints exit code 1603, which is "fatal error during installation". As an experiment, I double-clicked on the Flash Player .msi file, and it gave me error 1722: "Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor."

    Read the article

  • Windows Malicious Software Removal Tool log says it can't do all required actions. Should I be concerned?

    - by Tom
    Here's what the log file c:/Windows/debug/mrt.log of my Windows 7 install says:

        WARNING: Security policy doesn't allow for all actions MSRT may require.
        ->Scan ERROR: resource process://pid:6080 (code 0x00000005 (5))
        ->Scan ERROR: resource process://pid:5300 (code 0x00000057 (87))
        ->Scan ERROR: resource process://pid:3512 (code 0x00000057 (87))

    I use the default setup and didn't change anything. This is the first time I have checked the log file, and this warning has been there from the start. Can I do something about it? Or should I not be concerned, because it can do everything that's necessary anyway? Do you have this warning in your log file?

    Read the article

  • Disable public Tomcat6 stack trace

    - by The NinjaSysadmin
    Can anyone advise me how to stop Tomcat 6 from displaying stack-trace output in the browser? Tomcat version: 6.0.29. I have made the following changes to /opt/apache-tomcat-6.0.29/conf/web.xml:

        <error-page>
            <exception-type>java.lang.Throwable</exception-type>
            <location>/error.jsp</location>
        </error-page>

    I'm told that putting this in place will give a white screen if the file doesn't exist; however, I'm still getting stack traces on the screen.

    Read the article

  • How do I remove Slony from a restored PostgreSQL database?

    - by Scott Herbert
    I've restored a database which came from a server on which Slony was running. The server on which the database has been restored does not have Slony installed. When the database was restored, a lot of errors were reported, with Slony-related objects not getting created because the Slony-related logins were missing. I thought this was not a problem, as losing the Slony objects didn't seem to matter and in fact seemed desirable. However, now I've got an annoying, if not critical, problem. Whenever one clicks on a table in the newly restored DB in PGAdmin, a Slony-related error popup appears. The first one reads: "An error has occured: ERROR: function _rmscl.getlocalnodeid(unknown) does not exist". I notice that under the Replication node in PGAdmin there is a Slony replication cluster. Trying to drop this cluster results in more missing-object errors. Does anyone have any ideas how we can remove the last vestiges of Slony from this database?
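
    A commonly suggested cleanup, offered only as a hedged sketch (take a backup first): the leftover Slony objects live in their own schema, named after the replication cluster, so dropping that schema removes them. The schema name below comes from the error message; the database name is a placeholder.

        psql -d restored_db -c 'DROP SCHEMA "_rmscl" CASCADE;'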

    Read the article
