Search Results

Search found 21348 results on 854 pages for 'active directory lds'.

  • Crontab -e gives me error messages

    - by DNA
    I get a bunch of error messages when I run crontab -e. Here are the error messages, and here is my crontab file (under `/usr/bin/`):

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        30 * * * * root rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/
        #

    I notice that the last task (`rsync`) NEVER RUNS! Why is this happening? What did I do wrong? Running Ubuntu 11.10/Bash. I have read this... Am I missing a shebang? And I don't know if my anacron jobs run.

    Edit 1: In light of Masi's comment, I commented out lines 17 thru 25 of my crontab file with `#`. Now when I run sudo crontab -e, all I get is:

        /usr/bin/crontab: 11: 17: not found
        /usr/bin/crontab: 12: 25: not found
        (gedit:4301): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Failed to create file '/root/.local/share/recently-used.xbel.GOHVBW': No such file or directory
        (gedit:4301): Gtk-WARNING **: Attempting to set the permissions of `/root/.local/share/recently-used.xbel', but failed: No such file or directory

    What in the world?
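
    The format mismatch is the likely culprit here: the file shown is in system-crontab format (with a user field), but crontab -e installs a per-user crontab, which has no user field - so the user column ends up being parsed as part of the command. A minimal sketch of the difference, using the rsync job from the question:

        # system crontab format (/etc/crontab, /etc/cron.d/*): sixth field is the user
        # m h dom mon dow user command
        # 30 * * * * root rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/

        # per-user crontab (crontab -e): NO user field; append the job like this
        ( crontab -l; echo '30 * * * * rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/' ) | crontab -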

  • Install proprietary drivers 14.04 NVIDIA (steam segmentation issue)

    - by allthosemiles
    Recently, I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray). However, I started looking into installing Steam, and I'm getting segmentation errors when I try to run the software. I tried installing 32-bit libs, and it seemed like they weren't available or they were already installed. Upon further investigation, I found that a solution is to install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight here? (I installed the official 64-bit driver from the NVIDIA site for my 560 Ti just for reference, and the Ubuntu version installed is 64-bit as well.)

    Update: This is the error text I get when trying to run Steam after installing it via the Ubuntu store.

        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755: 3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        mv: cannot stat '/home/dbrewer/.steam/registry.vdf': No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
        Restarting Steam by request...
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755: 4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"

    What I get when I run steam --reset:

        mv: cannot stat '/home/dbrewer/.steam/registry.vdf': No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
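
    For context: "proprietary drivers" here just means NVIDIA's closed-source driver as packaged by Ubuntu, as opposed to the open-source nouveau driver or the .run installer from nvidia.com. A hedged sketch of switching to the packaged driver (the nvidia-331 package name is an assumption based on what 14.04 shipped at the time; let ubuntu-drivers confirm it):

        # list the drivers Ubuntu recommends for the detected GPU
        sudo ubuntu-drivers devices

        # install the packaged proprietary driver instead of the site installer
        sudo apt-get install nvidia-331

        # confirm which kernel driver is actually in use afterwards
        lspci -k | grep -A 3 VGA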

  • Remove IP address from the URL of website using apache

    - by sapatos
    I'm on an EC2 instance and have a domain domain.com linked to the EC2 nameservers, and it happily serves my pages if I type domain.com in the URL. However, when a page is served, the URL resolves to 1.1.1.10/directory/page.php. Using Apache, I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html:

        Listen 80
        NameVirtualHost 1.1.1.10:80
        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However, I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times while experimenting with the configuration. The Route 53 entries I have are:

        domain.com A 1.1.1.10
        domain.com NS ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com SOA ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100
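
    ServerName by itself never rewrites what the browser shows; if a link or redirect lands visitors on http://1.1.1.10/..., Apache will happily serve pages under that address. One hedged approach is a second catch-all vhost that permanently redirects the bare IP to the canonical name, preserving the path (a sketch assuming mod_alias is loaded; keep the domain.com vhost first so it stays the default):

        <VirtualHost 1.1.1.10:80>
            ServerName 1.1.1.10
            # Redirect keeps the rest of the URL, so
            # http://1.1.1.10/directory/page.php -> http://domain.com/directory/page.php
            Redirect permanent / http://domain.com/
        </VirtualHost>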

  • Copying files to a zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and TV shows currently. I thought ZFS sounded good, so after installing the PPA I created a raidz zpool:

        sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd

    and set the mountpoint to /mnt/store:

        sudo zfs set mountpoint=/mnt/store /store

    I checked it was successful - I think it was:

        sudo zfs list
        NAME  USED  AVAIL  REFER  MOUNTPOINT
        store 266K  7.16T  170K   /mnt/store

    Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered:

        sudo cp -R * /mnt/store
        cp: cannot create directory `/mnt/store/media': No space left on device

    It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. Will retry with normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
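
    The "No space left on device" message suggests the copy is landing on the small root disk rather than on the pool: if the dataset isn't actually mounted, /mnt/store is just an ordinary directory on /. A hedged sketch of how to check, using standard zfs and coreutils commands:

        # is the dataset actually mounted, and where?
        zfs get mounted,mountpoint store

        # does /mnt/store live on the pool or on the root disk?
        df -h /mnt/store

        # if it isn't mounted, mount it (the mountpoint directory must be empty)
        sudo zfs mount store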

  • Configuring Weblogic Server 10.3.6 from 32-bit mode to 64-bit mode

    - by Ekta Malik
    This post pertains to the configuration of WebLogic Server from 32-bit mode to 64-bit mode on Solaris OS - just in case you have WLS 10.3.6 running in 32-bit mode and the JDK being used is installed for 64-bit mode. [On Solaris OS, a JDK 64-bit installation comprises installing the 32-bit JDK followed by a patch for the 64-bit JDK.]

    Verification of the mode being used

    One can verify the mode of WebLogic Server in the following ways:

    - Check the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, where $MIDDLEWARE_HOME refers to the install directory of Middleware. Look for the patterns SUN_ARCH_DATA_MODEL and JAVA_USE_64BIT in the file. For 32-bit mode, the parameters would appear as shown below:

          SUN_ARCH_DATA_MODEL="32"
          JAVA_USE_64BIT=false

    - Check the server console logs for which JDK is being used during start-up.
    - Check which JDK is used by the running process of WebLogic Server.

    Configuration Steps

    1. Take a backup of the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, where $MIDDLEWARE_HOME refers to the install directory of Middleware.
    2. Modify the commonEnv.sh script so that the values are 64 and true respectively for 64-bit mode:

           SUN_ARCH_DATA_MODEL="64"
           JAVA_USE_64BIT=true

    3. Restart the WebLogic server. One can confirm that the JDK being used is 64-bit by looking at the WebLogic console logs during server start-up or by looking at the running process.
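
    A quick hedged sketch of the same verification from a shell (paths as given in the post; on Solaris, java -d64 -version only succeeds when the 64-bit patch is actually installed, and pflags reports the data model of a running process):

        # current settings in commonEnv.sh
        grep -E 'SUN_ARCH_DATA_MODEL|JAVA_USE_64BIT' \
            "$MIDDLEWARE_HOME/wlserver_10.3/common/bin/commonEnv.sh"

        # does the installed JDK support 64-bit mode at all?
        java -d64 -version

        # which data model is the running WebLogic JVM using? (Solaris)
        pflags $(pgrep -f weblogic.Server) | head -1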

  • Understanding the Linux Root

    - by Zac
    I've been using Linux (Ubuntu) for about 2 weeks now and am still struggling with some basic concepts surrounding the root user:

    (1) Some terminal operations (such as making subdirectories inside an FHS directory such as /opt) require me to prefix the command with sudo - why? I guess what I'm choking on is: if I'm already logged in as a valid system user, why do I have to be a superuser/root in order to modify things that the sysadmin has already deemed me worthy of accessing?

    (2) Is there a GUI (GNOME, KDE) equivalent to sudo? Is there a way to assume a superuser role through a graphical context, rather than from inside a new shell?

    (3) I can't access the /root directory logged in as myself... but I installed the system to begin with and was never asked to create a root account! How do I log in as root and gain access to /root?!?

    Thanks for all feedback & input!
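
    For (2) and (3), a hedged sketch of the usual Ubuntu-era answers - Ubuntu ships with the root account locked on purpose, and sudo is the intended path to root privileges:

        # a root shell without ever setting a root password
        sudo -i

        # GUI equivalents of sudo on GNOME / KDE of that era
        gksudo nautilus
        kdesudo dolphin

        # peek inside /root without logging in as root
        sudo ls -la /root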

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the -c switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all, and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Thanks, Chris
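
    A plausible explanation (hedged): with -c, e2fsck runs a badblocks scan and then writes the result into the filesystem's bad-block inode, and that write alone is reported as a modification - even when the list it writes is empty. A sketch of separating the two concerns so a clean check returns 0:

        # scan for bad blocks without touching the filesystem
        badblocks -sv /dev/sdx1

        # plain forced consistency check; exits 0 on a clean, unmodified filesystem
        fsck.ext3 -f /dev/sdx1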

  • 403 Forbidden when trying to download file that was uploaded using SSH

    - by Simon Hartcher
    I have FTP access to an Apache server on Linux to upload files so that they can be downloadable from the web. I was recently granted SSH access for extra permissions, and figured it would be quicker to download files directly to the server instead of downloading them to my machine and then FTPing them to the server. When I downloaded a file on the server over SSH and placed it in the public_html directory, it was not visible from the web. The permissions (from SSH and the FTP client) were the same as all the other files that are visible, but it did not appear in the directory listing, and if I typed the filename into my browser I got a 403 error. Obviously, when I FTP a file to the server, something else happens that makes it web-visible, which I am not currently privy to. What am I missing that is causing the file to be invisible from the web?
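
    Permission bits on the file itself aren't the whole story: ownership, the search (x) bit on every parent directory, and on some distros the SELinux context all factor into whether Apache may serve a file. A hedged checklist sketch (the filenames are illustrative):

        # compare owner/group against a file that does work
        ls -l public_html/working.html public_html/new-file.html

        # check every path component for the search (x) bit
        namei -l /home/simon/public_html/new-file.html

        # on SELinux systems, compare the security context too
        ls -Z public_html/new-file.html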

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on the time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after running the background change, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab, so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?
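
    gsettings talks to dconf over the session D-Bus, and cron jobs don't inherit DBUS_SESSION_BUS_ADDRESS, so the writes silently go nowhere. A hedged sketch of borrowing the address from the running session before calling gsettings (the gnome-session process name is an assumption that matches 12.04):

        #!/bin/bash
        # find the logged-in GNOME session and reuse its session bus address
        PID=$(pgrep -u "$USER" -n gnome-session)
        eval "export $(grep -z DBUS_SESSION_BUS_ADDRESS "/proc/$PID/environ" | tr '\0' '\n')"

        gsettings set org.gnome.desktop.background picture-uri \
            file:///home/zak/Pictures/Wallpaper/DOU.xml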

  • A specific user is unable to log in to vsftpd

    - by HackToHell
    I am setting up a new user - let his name be ftpguy. He has access to only one directory, /var/www/xxx. I have already chowned the directory so that he has write and read privileges. The user is also unable to log in via SSH, as I have disabled that by changing his shell to /sbin/nologin. Also, in the vsftpd config, I have enabled chroot_local_user. Now whenever I log in from FTP, I get an auth error:

        Connect socket #1008 to xxxxxxxx, port 21...
        220 Welcome to blah FTP service.
        USER ftpguy
        331 Please specify the password.
        PASS **********
        530 Login incorrect.

    I changed the password to something different several times, using the passwd command - nothing happens, I still get the above error. However, I am able to log in to my FTP server with my own SSH credentials without any problems. (I do not use a key.)
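
    A common cause (hedged): vsftpd authenticates through PAM, and the default PAM stack rejects any user whose shell is not listed in /etc/shells - so switching the shell to /sbin/nologin breaks FTP logins too. A sketch of checking and fixing that:

        # is the nologin shell considered a valid shell?
        grep nologin /etc/shells

        # if not, add it (keeps SSH disabled but satisfies PAM's shell check)
        echo /sbin/nologin | sudo tee -a /etc/shells

        # then watch the PAM verdict while retrying the login
        sudo tail -f /var/log/secure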

  • Permission denied install Joomla CiviCRM

    - by Tim
    Dear All, I am trying to install CiviCRM on a Joomla 1.5.17 web server running Ubuntu 9.10. Uploading the package to the tmp directory in /var/www/[site name]/tmp and installing creates this error:

        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: include_once(/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php) [function.include-once]: failed to open stream: Permission denied in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: include_once() [function.include]: Failed opening '/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php' for inclusion (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: require_once(DB.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140
        Fatal error: require_once() [function.require]: Failed opening required 'DB.php' (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140

    Initially I got a permission denied error and thought that Joomla did not have permissions to all its directories, but looking at Help > System information, all the necessary directories are writable. I then decided to chmod 777 all the directories and try again, but it still fails. Looking at the directories afterwards, it seems that the new directories being created are not being created 777. By changing them I can get at least one step further before the error appears again. My question is: does anyone know how to get round this? I am thinking that the new directories being created will require sudo privileges to have mv and create actions carried out, hence the permission denied errors. Can this be configured in Joomla? Or is there a way to specify that new directories created in /var/www/[site name] take 777 by default? Any help greatly appreciated!

    EDIT: P.S. if anyone could give me a clue as to how the insert code feature works as well, that would be great! Might make this post a bit more readable!

    EDIT2: Well, I have had a bash at changing the permissions and ownership:

        sudo chown -R www-data:www-data /var/www/trbcp

    I then tried changing the whole /var directory (insecure, I know, but this is a test and dev server for me to find my feet on) to 777 and am still getting permission errors. It seems to be an error opening a stream? Not a PHP guy, so not sure what that is, but could it be that permissions to run PHP scripts need to change? Any thoughts greatly appreciated.
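
    One hedged way to avoid blanket 777s: give ownership to the web server user (www-data is Apache's user on Ubuntu) and use setgid directories plus sane modes, so files the installer creates stay accessible. A sketch, with paths taken from the question:

        sudo chown -R www-data:www-data /var/www/trbcp
        # 2755: setgid on directories so new entries inherit the group
        sudo find /var/www/trbcp -type d -exec chmod 2755 {} \;
        sudo find /var/www/trbcp -type f -exec chmod 644 {} \;

        # CiviCRM's installer must be able to write its settings file and templates
        sudo chmod -R g+w /var/www/trbcp/administrator/components/com_civicrm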

  • How to install SIP+PyQt with apt-get + pip + virtualenv?

    - by kjo
    [I originally posted this question, under a different title, on StackOverflow (here), but later I realized that my problem is very specific to apt-get, hence I am re-posting it here. Sorry for the duplication.]

    I'm trying to install PyQt on Ubuntu (and within a virtualenv). The list of obstacles I'm dealing with is far too long to include here, but the one I'm currently trying to get past is this:

        % workon myvenv
        (myvenv)% cd ~/.virtualenvs/myvenv/build/pyqt
        (myvenv)% python ./configure.py
        Traceback (most recent call last):
          File "./configure.py", line 32, in <module>
            import sipconfig

    OK, so let's install sipconfig...

        (myvenv)% pip install SIP
        Downloading/unpacking SIP
          Downloading sip-4.14.8-snapshot-02bdf6cc32c1.zip (848Kb): 848Kb downloaded
          Running setup.py egg_info for package SIP
            Traceback (most recent call last):
              File "<string>", line 14, in <module>
            IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'
        Complete output from command python setup.py egg_info:
        Traceback (most recent call last):
          File "<string>", line 14, in <module>
        IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'
        ----------------------------------------
        Command python setup.py egg_info failed with error code 1 in /home/yt/.virtualenvs/myvenv/build/SIP
        Storing complete log in /home/yt/.pip/pip.log

    The only recipe I've found so far for installing SIP is this:

        % python configure.py
        % make
        % sudo make install

    ...but this recipe goes against my policy of doing all my Ubuntu installations either through apt-get (or through pip in the case of Python modules). Is there some way that I can install SIP with apt-get (and possibly pip)?
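
    SIP ships no setup.py (it builds with configure.py/make), which is why pip's egg_info step falls over. A hedged workaround: install the distro packages and let the virtualenv see them (package names are what Ubuntu carried in that era; --system-site-packages is the relevant virtualenv switch):

        sudo apt-get install python-sip python-sip-dev python-qt4

        # a virtualenv only sees distro packages if created with system site-packages
        mkvirtualenv --system-site-packages myvenv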

  • How to create a folder in SharePoint2010 root folder and set permission to it

    - by ybbest
    If you need to create a folder in the SharePoint 2010 root folder and set permissions on it, here is a piece of code that does it. In the script, I have created a folder called Temp in the Logs folder under the SharePoint 2010 root, and then I grant read/write access to the Windows group WSS_WPG and full access to the group WSS_ADMIN_WPG for that folder.

        $Folder = New-Item "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\temp" -Type Directory -force
        $acl = Get-Acl $Folder
        ## The following line has been commented out; if you would like to break the
        ## permission inheritance from the parent folder, uncomment the code.
        #$acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("WSS_ADMIN_WPG","FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("WSS_WPG","Modify", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl $Folder $acl

    References:
    http://technet.microsoft.com/en-us/library/ff730951.aspx
    http://msdn.microsoft.com/en-us/library/tbsb79h3.aspx
    http://blogs.technet.com/b/josebda/archive/2010/11/12/how-to-handle-ntfs-folder-permissions-security-descriptors-and-acls-in-powershell.aspx
    http://chrisfederico.wordpress.com/2008/02/01/setting-acl-on-a-file-or-directory-in-powershell/

  • Subversion error: (405 Method Not Allowed) in response to MKCOL

    - by Sergio del Amo
    I am getting the following error while trying to commit a new directory addition:

        svn: Commit failed (details follow):
        svn: Server sent unexpected return value (405 Method Not Allowed) in response to MKCOL request for '....

    I have never seen this error before. Can someone help me?

    Solution: I managed to solve the problem. A folder with the same name as the new one already existed in the repository.

    1. Delete the parent directory of the folder giving the problem.
    2. Do an SVN Update.
    3. Delete the pre-existing folder.
    4. SVN Commit.
    5. Copy the new folder back, Schedule for addition, and SVN Commit.
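
    The same recovery expressed with the command-line client, as a hedged sketch ('newdir' stands in for the conflicting folder name):

        svn update
        # the name already exists in the repository, so remove the stale copy first
        svn delete newdir
        svn commit -m "remove stale copy of newdir"

        # now add the new folder cleanly
        cp -r /path/to/newdir .
        svn add newdir
        svn commit -m "add newdir"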

  • Overcrowded Windows XP Folders

    - by BlairHippo
    I know that, technically, an individual Windows XP directory can hold an immense number of files (over 4.29 billion, according to a quick Google search). However, is there a practical ceiling where too many files in one directory start having an impact on reads to those files? If so, what factors would exacerbate or help the issue? I ask because my employer has several hundred XP machines in the field at client sites, and the performance on some of the older ones is getting "sludgy." The machines download and display client-defined images, and my supervisor and I suspect that our slacktastic approach to cache management could be to blame. (Some of the directories have tens of thousands of images in them.) I'm trying to gather evidence to support or contest the theory before spending time on a coding fix.

  • Packages not showing up in created APT repository

    - by David
    I created an APT repository using dpkg-scanpackages, and it seemed to go well. When I did an apt-get update on another server, the Packages.gz file was retrieved, and all seemed well - until I went to search for the packages contained in that repository (all packages are created locally). Several recommendations suggested reprepro; I tried that. Same result - except I had to rebuild the packages with the Priority and Section lines in the control file (nothing says this anywhere). The reprepro utility also generates a complicated directory structure, which required rewriting the repository entry on the requesting server. I then found that the arch directory referenced i386 and not amd64 (which was requested by the requesting server).

    Is it possible that the AMD64 system isn't seeing packages compiled for i386? Searching the *Packages files in /var/lib/apt/lists shows that the only packages for i386 are those I added (the other files are for the server - Ubuntu 10.04.2 LTS). The server the packages were built on is Ubuntu 10.04.2 LTS i686; the requesting server is x86_64. I found some discussion at the Debian AMD64FAQ, but it claims to be obsolete. It makes mention of an extended syntax for repository listings for APT, and a command dpkg-subarchitecture - neither of which works on the local AMD64 server. Do I have to build two different sets of packages?
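
    Most likely yes: pre-multiarch apt on an amd64 host only reads the binary-amd64 package list, so an i386-only repository looks empty to it. A hedged sketch of the two usual outs - mark genuinely architecture-independent packages as such, or publish per-architecture lists (directory layout here is illustrative):

        # in the package's debian/control, for architecture-independent content:
        #   Architecture: all

        # generate a per-architecture package list when building the repo
        dpkg-scanpackages --arch amd64 pool /dev/null \
            | gzip -9 > dists/stable/main/binary-amd64/Packages.gz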

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini file for each virtual host? I'm running Apache/2.2.14, PHP 5.3.2-1. For example, I have several vhosts pointing to domains in my /var/www/ directory:

        /var/www/website1.com
        /var/www/website2.com

    What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults if a value isn't specified:

        /var/www/website1.com/htdocs/
        /var/www/website1.com/php.ini

    EDIT: I found more info on the topic here for those interested: http://serverfault.com/questions/34078/how-do-i-set-up-per-site-php-ini-files-on-a-lamp-server-using-namevirtualhosts
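
    With mod_php there is only one php.ini per Apache instance, but individual directives can be overridden per vhost, which gives the "defaults unless specified" behaviour asked for. A hedged sketch (php_value/php_admin_value are standard mod_php directives; the values are illustrative):

        <VirtualHost *:80>
            ServerName website1.com
            DocumentRoot /var/www/website1.com/htdocs
            # per-vhost overrides; unlisted settings keep the php.ini defaults
            php_admin_value error_log /var/www/website1.com/php_error.log
            php_value upload_max_filesize 16M
        </VirtualHost>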

  • Install VLC 2.0.7 in CentOS 6.4?

    - by raaz
    I keep failing in the installation process I have tried. I started the process as follows:

        yum install gcc dbus-glib-devel* lua-devel* libcddb
        wget http://download.videolan.org/pub/videolan/vlc/2.0.7/vlc-2.0.7.tar.xz
        tar -xf vlc-2.0.7.tar.xz && cd vlc-2.0.7
        ./configure

    In the configure step I am getting errors as follows:

        configure: WARNING: No package 'libcddb' found: CDDB access disabled.
        checking for Linux DVB version 5... yes
        checking for DVBPSI... no
        checking gme/gme.h usability... no
        checking gme/gme.h presence... no
        checking for gme/gme.h... no
        checking for SID... no
        configure: WARNING: No package 'libsidplay2' found (required for sid).
        checking for OGG... no
        configure: WARNING: Library ogg >= 1.0 needed for ogg was not found
        checking for MUX_OGG... no
        configure: WARNING: Library ogg >= 1.0 needed for mux_ogg was not found
        checking for SHOUT... no
        configure: WARNING: Library shout >= 2.1 needed for shout was not found
        checking ebml/EbmlVersion.h usability... no
        checking ebml/EbmlVersion.h presence... no
        checking for ebml/EbmlVersion.h... no
        checking for LIBMODPLUG... no
        configure: WARNING: No package 'libmodplug' found
        No package 'libmodplug' found.
        checking mpc/mpcdec.h usability... no
        checking mpc/mpcdec.h presence... no
        checking for mpc/mpcdec.h... no
        checking mpcdec/mpcdec.h usability... no
        checking mpcdec/mpcdec.h presence... no
        checking for mpcdec/mpcdec.h... no
        checking for libcrystalhd/libcrystalhd_if.h... no
        checking mad.h usability... no
        checking mad.h presence... no
        checking for mad.h... no
        configure: error: Could not find libmad on your system: you may get it from http://www.underbit.com/products/mad/. Alternatively you can use --disable-mad to disable the mad plugin.
        [root@localhost vlc-2.0.7]#

    So I went to the libmad download location and downloaded it. There are no errors at ./configure with libmad, but make does not go through - it gives me these errors:

        [root@localhost libmad-0.15.0b]# make
        (sed -e '1s|.*|/*|' -e '1b' -e '$s|.*| */|' -e '$b' \
        -e 's/^.*/ *&/' ./COPYRIGHT; echo; \
        echo "# ifdef __cplusplus"; \
        echo 'extern "C" {'; \
        echo "# endif"; echo; \
        if [ ".-DFPM_INTEL" != "." ]; then \
        echo ".-DFPM_INTEL" | sed -e 's|^\.-D|# define |'; echo; \
        fi; \
        sed -ne 's/^# *define *\(HAVE_.*_ASM\).*/# define \1/p' \
        config.h; echo; \
        sed -ne 's/^# *define *OPT_\(SPEED\|ACCURACY\).*/# define OPT_\1/p' \
        config.h; echo; \
        sed -ne 's/^# *define *\(SIZEOF_.*\)/# define \1/p' \
        config.h; echo; \
        for header in version.h fixed.h bit.h timer.h stream.h frame.h synth.h decoder.h; do \
        echo; \
        sed -n -f ./mad.h.sed ./$header; \
        done; echo; \
        echo "# ifdef __cplusplus"; \
        echo '}'; \
        echo "# endif") >mad.h
        make all-recursive
        make[1]: Entering directory `/home/raja/Downloads/libmad-0.15.0b'
        make[2]: Entering directory `/home/raja/Downloads/libmad-0.15.0b'
        if /bin/sh ./libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DFPM_INTEL -DASO_ZEROCHECK -Wall -march=i486 -g -O -fforce-mem -fforce-addr -fthread-jumps -fcse-follow-jumps -fcse-skip-blocks -fexpensive-optimizations -fregmove -fschedule-insns2 -fstrength-reduce -MT version.lo -MD -MP -MF ".deps/version.Tpo" \
        -c -o version.lo `test -f 'version.c' || echo './'`version.c; \
        then mv -f ".deps/version.Tpo" ".deps/version.Plo"; \
        else rm -f ".deps/version.Tpo"; exit 1; \
        fi
        mkdir .libs
        gcc -DHAVE_CONFIG_H -I. -I. -I. -DFPM_INTEL -DASO_ZEROCHECK -Wall -march=i486 -g -O -fforce-mem -fforce-addr -fthread-jumps -fcse-follow-jumps -fcse-skip-blocks -fexpensive-optimizations -fregmove -fschedule-insns2 -fstrength-reduce -MT version.lo -MD -MP -MF .deps/version.Tpo -c version.c -fPIC -DPIC -o .libs/version.lo
        cc1: error: unrecognized command line option "-fforce-mem"
        make[2]: *** [version.lo] Error 1
        make[2]: Leaving directory `/home/raja/Downloads/libmad-0.15.0b'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/home/raja/Downloads/libmad-0.15.0b'
        make: *** [all] Error 2

    How can I resolve the issue and install VLC on my CentOS? I am using CentOS 6.4. Thank you.
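
    The immediate blocker is libmad's ancient build flags: gcc in CentOS 6 no longer accepts -fforce-mem, so it has to be stripped from the generated Makefiles before building. A hedged sketch of the usual workaround (alternatively, pass --disable-mad to VLC's configure and skip libmad entirely):

        cd libmad-0.15.0b
        ./configure --prefix=/usr/local
        # remove the flag modern gcc rejects, then build
        sed -i 's/-fforce-mem//g' Makefile
        make && sudo make install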

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes:

        Virtual mail users are those that do not exist as Unix system users. They thus
        don't use the standard Unix methods of authentication or mail delivery and don't
        have home directories. That is how we are managing things here: mail users are
        defined in the database created by Postfix Admin rather than existing as system
        users. Mail will be kept in subfolders per domain and account under /var/vmail -
        e.g. me@example.com will have a mail directory of /var/vmail/example.com/me.

    But when he gives instructions for configuring Postfix Admin, he suggests that this be contained in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/user@domain.tld
        //   NO:  /usr/local/virtual/user@domain.tld
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?
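
    Yes - as quoted, the two parts disagree: a per-domain layout like /var/vmail/example.com/me requires domain_path to be enabled. A hedged sketch of the config.inc.php settings that would match the tutorial's stated layout (both option names exist in Postfix Admin's config.inc.php):

        $CONF['domain_path'] = 'YES';       // store mailboxes in a per-domain folder
        $CONF['domain_in_mailbox'] = 'NO';  // folder name 'me', not 'me@example.com'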

  • Missing APR on apache2 ./configure

    - by arby
    I want to build the latest stable version of apache2. I downloaded the source, put APR & APR-util in the srclib folder, then changed directories to ./srclib/apr and ran:

        ./configure --prefix=/usr/local/apr
        sudo make
        sudo make install

    This seemed to install APR OK, but when I run ./configure from the apr-util directory, I receive the error:

        configure: error: APR could not be located. Please use the --with-apr option.

    Using ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr, the error becomes:

        checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.

    Why can't it find APR?
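
    apr-util's configure wants either APR's build directory or its apr-1-config script, and an install prefix only qualifies if the install actually put apr-1-config under it. A hedged sketch of checking and pointing at the script directly:

        # confirm where apr-1-config ended up
        ls /usr/local/apr/bin/apr-1-config

        # then point apr-util straight at it
        cd srclib/apr-util
        ./configure --prefix=/usr/local/apr-util \
            --with-apr=/usr/local/apr/bin/apr-1-config
        make && sudo make install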

  • Issue with www to non www redirect

    - by bob
    Hello, I am on Slicehost and I followed the articles they provide for DNS redirection, and the www to non-www URL redirection does work. However, what if I want www.domain.com to be the default domain? Would I put www.domain.com. as my DNS record name, or would I keep domain.com. as my DNS record and then do something else? Basically, what happens is this: if someone goes to the URL www.domain.com/directory/something.html, they are redirected to domain.com and not to domain.com/directory/something.html. I would like the second thing to happen, not just go to domain.com and call it a day. I am running nginx and am confounded on how to solve this issue. I'm not sure whether it's an nginx issue or a DNS issue. Any help would be greatly appreciated!
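
    Losing the path points at the nginx rewrite rather than DNS: a redirect that hard-codes the destination drops the original URI, while DNS only needs both names to resolve. A hedged sketch of a path-preserving redirect server block ($request_uri carries the path and query string):

        server {
            listen 80;
            server_name www.domain.com;
            # keep the path and query string when canonicalizing;
            # the trailing ? stops nginx appending the query string twice
            rewrite ^ http://domain.com$request_uri? permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            # ...normal site config...
        }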

  • OSX Server 3, Mac clients binding to OD and Profile Manager failing

    - by dbf
    I've made a setup containing a Mac Mini with OS X Server 3 (Mavericks 10.9.2) using Open Directory and Profile Manager (Mail, etc. all set up and working). Now the thing is, internally on the local network everything works great. Clients can bind to the OD and the users are able to log in. I can install trust and settings profiles (either custom or group profiles), and all services in the profiles mentioned are being configured correctly. I can log in and out, hump around and do it a 100 times on different Macs with different users - it works.

    My goal is to make this service public. The domain is an FQDN which I own; for simplicity let's say server.domain.com. Now the only way for me to bind the clients to the OD is using LDAP mapping RFC2307 (without SSL) and a DN suffix of dc=server,dc=domain,dc=com using Directory Utility. The "from server" or "open directory" options will throw several errors like:

        Connection failed to node '/LDAPv3/server.domain.com (2100)

    First of all, I don't really understand why clients can't bind to the OD like they do locally, with and without SSL (all ports are open - literally all ports, not just 389, 636 and 1640; I wasn't sure if I was missing any). When the clients bind using LDAP mapping RFC2307 (which only works without SSL), they are able to authenticate, log in and even load the trust profile. But every settings profile will fail, either with a debug message of "Unable to find GUID in user record OD" or with an install failure saying "missing user identification". Is there any way to get this to work without RFC2307? Because there is quite some stuff missing when using RFC2307 rather than pulling the mapping from the server or using Open Directory. Is this setup even possible? Or should I use VPN to authenticate with the OD?

    The network setup is a modem/router (DHCP off) with WAN NATted to an AirPort Extreme (using DHCP+NAT). The AE does notify with a double-NAT message, but I haven't had any problems with it on any other service. So: WAN - 192.168.2.220 (static), AE - 10.0.1.* (DHCP).

    Output of dig from the outside, using dig server.domain.com (from an iPhone):

        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;server.domain.com. IN A
        ;; ANSWER SECTION:
        server.domain.com. 77 IN A 91.50.*.* (valid WAN IP)
        ;; SERVER 172.*.*.1#53(172.*.*.1)

    dig locally from a client and the server (same output):

        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;server.domain.com. IN A
        ;; ANSWER SECTION:
        server.domain.com. 10800 IN A 10.0.1.11
        ;; AUTHORITY SECTION:
        server.domain.com. 10800 IN NS domain.com. (used for email send in relay)
        server.domain.com. 10800 IN NS server.domain.com.
        ;; SERVER 10.0.1.11#53(10.0.1.11)

    Are there any things I should check? I only have OS X.

    - Double-NAT issue: I plugged the server directly into the modem/router with a static IP, and the issue remains. Guess that rules out the double-NAT thing.
    - changeip -checkhostname comes back with "There is nothing to change", e.g. success:

        Primary address = 10.0.1.11
        Current HostName = server.domain.com
        DNS HostName = server.domain.com

    For now, I've made a workaround by using an admin account that forces a permanent VPN connection on boot. That means that before it comes to the login, a connection is already made or underway. I will continue this post when I have more time, also locating all the necessary .log files of each application involved. I have some suspicions, but have to debug a bit more when I have more time on my hands... Unless, of course, I get sidetracked with having a life. Which is arguably not very likely. (krypted.com)

  • Multiple PHP versions running as cgi

    - by Pierre
    I'm trying to install a second version of PHP, to run alongside the current version. I've compiled the latest PHP source from GitHub (5.5-DEV), and I'm trying to run it as CGI. Here is my virtual host config:

        <VirtualHost *:8055>
            DocumentRoot /Library/WebServer/Documents/
            ScriptAlias /cgi-bin/ /usr/local/php55/cgi
            Action php55-cgi /cgi-bin/php-cgi
            AddHandler php55-cgi .php
            <Directory /Library/WebServer/Documents/>
                Options Indexes FollowSymLinks Includes ExecCGI
                AllowOverride All
                Order Allow,Deny
                Allow from all
            </Directory>
            DirectoryIndex index.html index.php
        </VirtualHost>

    But when I go to http://127.0.0.1:8055/info.php, I get the following error:

        Forbidden
        You don't have permission to access /cgi-bin/php-cgi/info.php on this server

    Edit: I'm now switching between

        LoadModule php5_module /usr/local/php54/libphp5.so

    and

        LoadModule php5_module /usr/local/php55/libphp5.so

    It works for now, but is not ideal. I would like to have the different versions of PHP on different virtual hosts.
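
    The 403 usually means Apache was never told it may execute anything under the aliased directory; ScriptAlias targets also conventionally carry a trailing slash to match the alias. A hedged sketch of the missing pieces, with paths taken from the question:

        # note the trailing slash on the target, matching the alias
        ScriptAlias /cgi-bin/ /usr/local/php55/cgi/

        # Apache must be allowed to run CGI binaries from that directory
        <Directory /usr/local/php55/cgi>
            Options +ExecCGI
            Order Allow,Deny
            Allow from all
        </Directory>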

  • What is the best Broadphase Interface for moving spheres?

    - by Molmasepic
    As of now I am working on optimizing the performance of the physics and collision, and I am seeing some slowdowns on my other computers compared to my main one. I have well over 3000 btSphereShape rigid bodies, and 2/3 of them do not move at all, but I am noticing (by the profile below) that collision is taking a bit of time to maneuver.

        Each sample counts as 0.01 seconds.
          %   cumulative  self
         time   seconds  seconds    calls  ms/call ms/call  name
        10.09    0.65     0.65                              SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float)
         7.61    1.14     0.49                              btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*)
         5.59    1.50     0.36                              btConvexTriangleCallback::processTriangle(btVector3*, int, int)
         5.43    1.85     0.35                              btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const
         4.97    2.17     0.32                              btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int)
         4.19    2.44     0.27                              btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         4.04    2.70     0.26                              btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         3.73    2.94     0.24                              Ogre::OctreeSceneManager::walkOctree(Ogre::OctreeCamera*, Ogre::RenderQueue*, Ogre::Octree*, Ogre::VisibleObjectsBoundsInfo*, bool, bool)
         3.42    3.16     0.22                              btTriangleShape::getVertex(int, btVector3&) const
         2.48    3.32     0.16                              Ogre::Frustum::isVisible(Ogre::AxisAlignedBox const&, Ogre::FrustumPlane*) const
         2.33    3.47     0.15  1246357  0.00  0.00         Gorilla::Layer::setVisible(bool)
         2.33    3.62     0.15                              SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool)
         1.86    3.74     0.12                              btCollisionDispatcher::findAlgorithm(btCollisionObject*, btCollisionObject*, btPersistentManifold*)
         1.86    3.86     0.12                              btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&)
         1.71    3.97     0.11                              btTriangleShape::getEdge(int, btVector3&, btVector3&) const
         1.55    4.07     0.10                              _Unwind_SjLj_Register
         1.55    4.17     0.10                              _Unwind_SjLj_Unregister
         1.55    4.27     0.10                              Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*)
         1.40    4.36     0.09                              btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float)
         1.40    4.45     0.09                              btSequentialImpulseConstraintSolver::setupFrictionConstraint(btSolverConstraint&, btVector3 const&, btRigidBody*, btRigidBody*, btManifoldPoint&, btVector3 const&, btVector3 const&, btCollisionObject*, btCollisionObject*, float, float, float)
         1.24    4.53     0.08                              btSequentialImpulseConstraintSolver::convertContact(btPersistentManifold*, btContactSolverInfo const&)
         1.09    4.60     0.07   408760  0.00  0.00         Living::MapHide()
         1.09    4.67     0.07                              btSphereTriangleCollisionAlgorithm::~btSphereTriangleCollisionAlgorithm()
         1.09    4.74     0.07                              inflate_fast

    EDIT: Updated to show the current profile. I have only listed the functions using over 1% of the time, out of the many functions that are being used. Another thing is that each monster has a certain area that they stay in and are only active when a player is in said area. I was wondering if maybe there is a way to deactivate the non-active monsters in Bullet (reactivating them once a player is in the area again), or maybe there's a different broadphase interface that I should use. The current BPI is btDbvtBroadphase.

    EDIT: Here is the profile on the other computer (the one above is my main):

        Each sample counts as 0.01 seconds.
          %   cumulative  self
         time   seconds  seconds    calls  ms/call ms/call  name
        12.18    1.19     1.19                              SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float)
         6.76    1.85     0.66                              btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*)
         5.83    2.42     0.57                              btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const
         5.12    2.92     0.50                              btConvexTriangleCallback::processTriangle(btVector3*, int, int)
         4.61    3.37     0.45                              btTriangleShape::getVertex(int, btVector3&) const
         4.09    3.77     0.40                              _Unwind_SjLj_Register
         3.48    4.11     0.34                              btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int)
         2.46    4.35     0.24                              btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         2.15    4.56     0.21                              _Unwind_SjLj_Unregister
         2.15    4.77     0.21                              SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool)
         1.84    4.95     0.18                              btTriangleShape::getEdge(int, btVector3&, btVector3&) const
         1.64    5.11     0.16                              btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         1.54    5.26     0.15                              btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&)
         1.43    5.40     0.14                              Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*)
         1.33    5.53     0.13                              btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float)
         1.13    5.64     0.11                              btRigidBody::predictIntegratedTransform(float, btTransform&)
         1.13    5.75     0.11                              btTriangleIndexVertexArray::getLockedReadOnlyVertexIndexBase(unsigned char const**, int&, PHY_ScalarType&, int&, unsigned char const**, int&, int&, PHY_ScalarType&, int) const
         1.02    5.85     0.10                              btSphereTriangleCollisionAlgorithm::CreateFunc::CreateCollisionAlgorithm(btCollisionAlgorithmConstructionInfo&, btCollisionObject*, btCollisionObject*)
         1.02    5.95     0.10                              btSphereTriangleCollisionAlgorithm::btSphereTriangleCollisionAlgorithm(btPersistentManifold*, btCollisionAlgorithmConstructionInfo const&, btCollisionObject*, btCollisionObject*, bool)

    Edited the same as the other profile.
