Search Results

Search found 5628 results on 226 pages for 'extension'.

  • Problem installing SQLite3 RubyGem on Ubuntu

    - by misbehavens
    I am having a problem trying to install the SQLite3 RubyGem. Here's what I'm doing:

        $ sudo gem install --remote sqlite3-ruby

    Here's the output:

        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for fdatasync() in -lrt... yes
        checking for sqlite3.h... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.
        Provided configuration options:
            --with-opt-dir --without-opt-dir --with-opt-include
            --without-opt-include=${opt-dir}/include --with-opt-lib
            --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog
            --srcdir=. --curdir --ruby=/usr/bin/ruby1.8
            --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib --with-rtlib --without-rtlib
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out
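
    The "checking for sqlite3.h... no" line is the tell: the build can't find the SQLite development headers. A likely fix, as a sketch (package name per Ubuntu's repositories of that era):

        sudo apt-get install libsqlite3-dev      # provides sqlite3.h for the native build
        sudo gem install --remote sqlite3-ruby   # then retry the gem install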

  • Cross-OS data recovery question, USB drive involved.

    - by Moshe
    Here's the story: a MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and wouldn't boot, presumably due to a virus. There were sensitive files there, and those were successfully copied to a USB drive; then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked and the data is lost from there, unless it can be resoldered. The issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money files (not the latest version) for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

  • Windows Media Center doesn't see my movies

    - by DrJekyll
    I am trying to configure Windows Media Center (Windows 7 Ultimate). I selected the folder with my movies and added it to the library, but when I go to the movies library, it says "There are no items in this library yet - Windows Media Center is searching for media files in the background...". I have all the necessary codecs installed; Windows Media Player opens those movies correctly. When I right-click on a file and choose Open with > Windows Media Center, it also plays them without any problem. Any ideas why they don't appear in the libraries?

    Edit: The movies are encoded with the DivX and Xvid codecs and have the ".avi" extension. Windows has no problems playing them. I told Media Center where the files are. I even pointed Windows Media Center at a folder with only one .avi file, and it still couldn't find anything there. (I gave it quite some time, even though searching a directory with only one file shouldn't take more than a few seconds.) When I add a folder with a lot of movies, I get a dialog box: "You can wait while media is added or select OK to continue using Windows Media Center." At the end it says it added about 90 movies, but when I go to the libraries, it's still empty.

  • Joomla SMTP Configuration Issue

    - by msargenttrue
    I'm having an issue with the SMTP setup of my Joomla website when trying to send mass emails through the CB Mailing (Mass Email) extension. I receive this error:

        SMTP Error! The following recipients failed:
        Number of users to whom e-mail was sent: 0 (Total in list: 1)

    The old version of this website's mass mailer worked fine; however, in order to add Kunena Forum and maintain compatibility I had to make several upgrades to the site. Both the new and old configurations are outlined below.

    Server for website: Mac OS X Server 10.4.11, Apache 1.3.41, PHP 5.2.3, MySQL 4.1.22
    Server for SMTP: Eudora Internet Mail Server 3.3.9 (EIMS Server X)

    New configuration: Joomla 1.5.25, Community Builder 1.7.1, CB Paid Subscriptions (CBSubs) 1.2.2, CB Mailing 2.3.4, Kunena Forum 1.7.0, Legacy 1.0 plugin disabled

    Mail settings (new config):

        Mailer: SMTP Server
        Mail from: [email protected]
        From Name: CASPA
        Sendmail Path: /usr/sbin/sendmail
        SMTP Authentication: Yes
        SMTP Security: None
        SMTP Port: 25
        SMTP Username: [email protected]
        SMTP Password: xxxxxxx
        SMTP Host: 209.48.40.194

    Old configuration (working SMTP configuration): Joomla 1.5.9, Community Builder 1.2, CB Paid Subscriptions (CBSubs) 1.0.3, CB Mailing 2.1, Legacy 1.0 plugin enabled

    Mail settings (old config):

        Mailer: SMTP Server
        Mail from: [email protected]
        From Name: CASPA
        Sendmail Path: /usr/sbin/sendmail
        SMTP Authentication: Yes
        SMTP Username: [email protected]
        SMTP Password: xxxxxxx
        SMTP Host: 209.48.40.194

    (Notice how the older version of Joomla is missing the two fields SMTP Security and SMTP Port.) Thanks in advance!
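
    One way to isolate Joomla from the mail server is to attempt an authenticated relay by hand from the web server. A sketch (the hostname and addresses below are placeholders; AUTH LOGIN expects base64-encoded credentials):

        telnet 209.48.40.194 25
        EHLO test.local                      # any host label works for a manual test
        AUTH LOGIN                           # server answers 334; paste the base64 username, then password
        MAIL FROM:<sender@example.com>       # placeholder; use the configured "Mail from" address
        RCPT TO:<recipient@example.com>      # a rejection here means relay/auth is the problem, not Joomla
        QUIT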

  • SSD seems dead after wakeup from Windows Sleep, BIOS stalls but doesn't find it anymore

    - by Abel
    This morning, the following scary scenario happened:

    1. I woke my Windows system from sleep.
    2. I typed in my username and got an error (something like "could not load security xxx", but I'm unsure of the exact wording).
    3. The system auto-restarted after I clicked OK.
    4. It no longer boots to the SSD holding the Windows 7 OS (I have another disk I can boot to, but that doesn't see the disk either).

    Obviously, this happened right after I had initiated a backup procedure, which hasn't succeeded either. The BIOS can't find the drive when I connect it via SATA, and it can't find the drive when I connect it via SAS. I have a Dell Workstation T7400 with the most recent BIOS (version A06); the SAS host bus adapter (HBA) BIOS is MPTBIOS 6.14.10.00 (2007.09.29) from LSI Logic Corp.

    Other findings:

    - When connecting via SATA, the Dell logo screen stays up really long (5 minutes), and at the end of POST it says that a drive was not found.
    - When connecting via SAS, the SAS HBA initialization phase takes long (2 minutes, against normally 15 seconds).
    - When running Dell Diagnostics, it doesn't finish and gives the error: Exception occurred in module MPCACHE.MDM file "IOAPICSP.ASM" line 1645.

    I contacted Dell. On their advice I tried different slots and different cables, to no avail. The machine runs off an APC battery backup, so spikes in the power are unlikely. My conclusion so far: the disk is dead.

    I need this disk very badly because it contains the last few days of important development, of which not all code was checked in the moment this happened. Are there any ways to recover dead SSD drives? The drive is a new X25-M G2 160 GB, model SSDSA2M160G2GC, 2.5", in an extension bay, and had been running without issues for 3 months on SAS.

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have two servers: one already configured with .NET, which works fine, and a new one that appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Windows Server 2003 SP2, and the website is configured with .NET 1.1.4322. In the ISAPI extensions I have set the .xml extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error
        and then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error
        processing resource 'http://localhost:11119/fails.xml'. Lin... &quo...

    We have the same config on another server, which works fine. So are there other options apart from the ISAPI extensions that I need to look at? If I rename the page with an .aspx suffix, of course, it works fine.
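
    The literal <% reaching the browser means ASP.NET never executed the page: the ISAPI mapping hands .xml requests to ASP.NET, but ASP.NET's own handler table then decides what processes them, and unmapped extensions fall through to the static-file handler. A sketch of a per-site web.config entry that routes .xml through the page handler (worth diffing this file between the working and failing servers):

        <configuration>
          <system.web>
            <httpHandlers>
              <add verb="*" path="*.xml" type="System.Web.UI.PageHandlerFactory" />
            </httpHandlers>
          </system.web>
        </configuration>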

  • How do I correctly SSH port forward using LiveReload on Redhat?

    - by program247365
    Referencing this page: http://feedback.livereload.com/knowledgebase/articles/86280-if-you-edit-files-directly-on-your-server

    It says you can remotely port-forward the LiveReload-specific port of 35729 using this command:

        ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP

    When I run it with the -v option, I get:

        debug1: Local connections to LOCALHOST:35729 forwarded to remote address 127.0.0.1:35729
        debug1: Local forwarding listening on ::1 port 35729.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 35729.
        debug1: channel 1: new [port listener]
        debug1: channel 2: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: client_input_channel_req: channel 2 rtype [email protected] reply 1
        debug1: Connection to port 35729 forwarding to 127.0.0.1 port 35729 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection refused
        debug1: channel 3: free: direct-tcpip: listening port 35729 for 127.0.0.1 port 35729, connect from 127.0.0.1 port 63673, nchannels 4

    I thought editing my /etc/services with this line would work, but it doesn't:

        livereload      35729/tcp       # livereload usage with guard-livereload

    Every time I attempt to connect with the browser extension, I believe it's getting blocked by my server. What am I missing here? Do I need to edit /etc/services for this to work?
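
    For what it's worth, /etc/services is only a name-to-number lookup table; it neither opens nor blocks ports, so editing it won't help. The "Connection refused" comes from the remote end: sshd accepted the forward, but nothing on the server is listening on 127.0.0.1:35729. A sketch of how to verify (assuming guard-livereload is what should be listening):

        # on the remote server: is anything listening on the LiveReload port?
        netstat -tlnp | grep 35729        # or: ss -tlnp | grep 35729
        # if not, start guard/guard-livereload in the project directory first,
        # then reconnect from the workstation:
        ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP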

  • Strange activity in My Pictures folder: Thumbnail doesn't match actual picture.

    - by Sam152
    After finding an amusing picture on a popular imageboard, I decided to save it. A few days passed, and I was browsing my images folder when I realised that the thumbnail generated by Windows XP in the thumbnails view did not match the actual image. Here is a comparison image:

    What's even stranger in this situation is that the parts of the photograph that are different have actually been replaced with what might be the correct background. Furthermore, it is a JPEG (no PNG transparency tricks) that is 343 kilobytes but only 847x847 pixels. What could be going on here? Could there be anything malicious in the works, or hidden data?

    Before anyone asks, I have checked and performed the following:

    - Deleted Thumbs.db to reload the thumbnails.
    - Opened the image in different editors (they appear with the text).
    - Moved the image to a different directory.
    - Changed the extension to .rar.

    All these steps produce the same results.

    Pre-posting update: It seems that opening the image in Paint and changing it entirely (deleting the entire contents and making a red fill) will still generate the original thumbnail, even after deleting Thumbs.db etc. I'm also hesitant to post the original data, in case there is something malicious or hidden in it that could be potentially illegal. (Although it would be very beneficial to see if it works on other computers and not just my own.)
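
    This behaviour is consistent with Explorer displaying the JPEG's embedded EXIF thumbnail rather than rendering the image pixels; the two can legitimately differ. The Paint experiment muddies the theory, but it is quick to test. A sketch (assuming exiftool is available; the filename is a placeholder):

        exiftool -b -ThumbnailImage suspect.jpg > embedded_thumb.jpg   # extract the EXIF preview, if any
        exiftool -ThumbnailImage= suspect.jpg                          # strip it, forcing Windows to regenerate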

  • Not able to run Firefox on a headless Ubuntu Server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb in order to fake one (I understand it this way; correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck with the following situation:

        $ sudo Xvfb -ac :99 &
        [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)

        $ firefox
        [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
        [config/dbus] couldn't register object path
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        Xlib: extension "RANDR" missing on display ":99.0".
        GConf Error: Failed to contact configuration server; some possible causes are
        that you need to enable TCP/IP networking for ORBit, or you have stale NFS
        locks due to a system crash. See http://projects.gnome.org/gconf/ for
        information. (Details - 1: Failed to get connection to session:
        /bin/dbus-launch terminated abnormally without any error message)

    I'm running Firefox without installing it from the repositories, and I'm getting a socket timeout when I try to run the Selenium tests, so I guess the problem is in Firefox and Xvfb. I have already installed the following package, which some forums suggest as a fix, but in my case it doesn't work:

        i gconf-defaults-service - GNOME configuration database system (system defaults service)

    Any explanation of the problem, and ways of solving it without installing a full GUI, would be very helpful.
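
    The GConf error is the clue: Firefox is reaching the display, but there is no D-Bus session bus for GConf to talk to. A sketch of starting one alongside Xvfb (assuming a stock 9.10 with the dbus package installed):

        Xvfb :99 -ac &
        export DISPLAY=:99
        eval "$(dbus-launch --sh-syntax)"   # exports DBUS_SESSION_BUS_ADDRESS for this shell
        firefox &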

  • Hiera data types won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions in http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training "Advanced Puppet" manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition with Hiera 1.2.1 on SLES 11.

    My puppet.conf file at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created in /etc/puppet/hieradata, and within it:

    /etc/puppet/hieradata/common.yaml:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera not to load the requested values? I have even tried restarting the master. Thanks in advance, Cole
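
    A sketch of how to query Hiera directly on the master, bypassing Puppet entirely. Note also that the data keys above are written as Ruby symbols (:nameserver), while hiera('nameserver') looks up the plain string 'nameserver'; the two do not match, which alone would explain the nil results:

        # query Hiera with the same config the master uses
        hiera -c /etc/puppet/hiera.yaml nameserver
        # if that returns nil, try dropping the leading colons in the data files,
        # e.g. in common.yaml:
        #   nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        # then query again; string keys are what hiera() lookups expect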

  • Terrible noises from subwoofer of ACER Aspire 6930 with Realtek sound chip

    - by OneWorld
    After approximately 5-15 minutes of listening to music, my subwoofer begins to make terrible noises; it's just "coughing". This began after I had had this computer for six months. I have now found that I can temporarily fix the problem by "restarting" the audio stream of the application that is playing music, for example by reloading the last.fm page (which reloads the Flash player). Another way to reset audio playback is switching the speaker configuration shown below in the screenshot.

    According to many posts on the internet, like http://www.tomshardware.co.uk/forum/52918-20-acer-aspire-6935g-speaker-problem:

    - ACER support isn't any help
    - Exchanging the hardware doesn't fix the problem
    - Even the later models have this problem

    Turning off the volume of the subwoofer is not an option for me. I still have warranty (I bought a one-year extension). I have already tried about 15 versions of the Realtek driver, with no success. I am not sure, but MAYBE the problem did not occur on the original Windows Vista that was shipped with this computer. However, I removed the original Windows for good reasons.

    What do you suggest? Has anyone fixed this problem? Maybe by writing a script which resets the audio streams every 5 minutes? Should I take the effort of dealing with ACER support until they give me another model? (I would be without a computer for a longer time, and would spend money on telephone hotlines at 1.30 EUR/min...)

    Here is additional info, if it is any help: Windows 7 64-bit (the original OS was Windows Vista Home Premium 32-bit).
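
    On the scripted-workaround idea the poster floats: restarting the Windows audio engine is one blunt way to reset every audio stream at once. A sketch, assuming Windows 7's audio service name (run from an elevated prompt; note it briefly interrupts all sound):

        net stop Audiosrv && net start Audiosrv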

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use in an unencumbered player. The curl aspect is working: I can run the script and a browser side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
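
    The VLC log actually shows the cause: note dst="'#standard{...}'" in the error, with the single quotes inside the value. Quotes embedded in an expanded variable are passed along literally; the shell does not re-parse them. A sketch of the usual fix, building the command as an array instead of a string (the URL variable is a placeholder):

        # each array element survives expansion intact, quotes no longer needed at run time
        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$STREAM_URL")
        exec "${CMD[@]}"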

  • FastCGI Error when installing PHP on IIS7.5

    - by ytoledano
    I'm trying to install MediaWiki on a Windows 2008 R2 server, but can't manage to install PHP. Here's what I did:

    - Grabbed a ZIP archive of PHP and unzipped it into C:\PHP.
    - Created two subdirectories: C:\PHP\sessiondata and C:\PHP\uploadtemp.
    - Granted modify rights to the IUSR account for the subdirectories.
    - Copied php.ini-production as php.ini, then edited php.ini and made the following changes:

        fastcgi.impersonate = 1
        cgi.fix_pathinfo = 1
        cgi.force_redirect = 0
        open_basedir = "c:\inetpub\wwwroot;c:\PHP\uploadtemp;C:\PHP\sessiondata"
        extension = php_mysql.dll
        extension_dir = "./ext"
        upload_tmp_dir = C:\PHP\uploadtemp
        session.save_path = C:\php\sessiondata

    - Installed the Web Server role with the CGI and HTTP Redirection options selected.
    - In Handler Mappings, added a module mapping with the following values: Path = *.php, Module = FastCgiModule, Executable = c:\php\php-cgi.exe, Name = PHP via FastCGI.
    - Created a test page, phpinfo.php, in the wwwroot directory with these contents:

        <?php phpinfo(); ?>

    - Browsed to http://localhost/phpinfo.php

    But then I get:

        HTTP Error 500.0 - Internal Server Error
        An unknown FastCGI error occurred

        Detailed Error Information
        Module: FastCgiModule
        Notification: ExecuteRequestHandler
        Handler: PHP via FastCGI
        Error Code: 0x800736b1
        Requested URL: http://localhost:80/phpinfo.php
        Physical Path: C:\inetpub\wwwroot\phpinfo.php
        Logon Method: Anonymous
        Logon User: Anonymous

    Does anyone know what I'm doing wrong here? Thanks.
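
    Error code 0x800736b1 decodes to the Win32 side-by-side error ("this application has failed to start because the application configuration is incorrect"), which for a ZIP install of PHP usually means the Visual C++ runtime that php-cgi.exe was built against is missing. A sketch of how to confirm from a command prompt:

        C:\>C:\PHP\php-cgi.exe -v
        REM if the side-by-side error appears here too, install the Visual C++
        REM redistributable matching the PHP build (e.g. the VC9 x86 package for
        REM a VC9 build), or switch to the PHP build matching an installed runtime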

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality is to have DVD Shrink decrypt, rip, and decompress them. After that I usually end up with a high-quality (large) set of .vob files in a classic DVD structure. Then I use a Python script I wrote to automate the process of finding the title sequence and combining all of the title sequence's .vob files into one file (similar to the "copy /b" command in Windows), then changing the extension to .mpg (a more widely supported format than .vob). This lets me get a high-quality rip in about 40 minutes.

    The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 minutes to 0 minutes long. That is not a problem in itself (the video will still play all the way through), but if you pause it, when it is unpaused the video starts all the way over (also, fast-forward and rewind don't work).

    I suspect that something is wrong with the way the timeline is encoded in the video file. Various forums on the internet recommended using VirtualDub to fix the errors, but when I try to open the file, VirtualDub says the file is not MPEG-1 and may be MPEG-2. Is there any way to fix this?

    PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and deals more with .wmv files.
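
    The timestamps in concatenated VOBs reset at each file boundary, which is exactly what confuses players' duration and seeking. A sketch of a lossless remux that regenerates the timestamps (assuming ffmpeg is available; the filenames are placeholders):

        # concatenate the title's VOBs, then remux without re-encoding,
        # regenerating presentation timestamps along the way
        cat VTS_01_1.VOB VTS_01_2.VOB > title.vob
        ffmpeg -fflags +genpts -i title.vob -acodec copy -vcodec copy title-fixed.mpg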

  • How can I make Internet Explorer 6 render Web pages like Internet Explorer 11?

    - by gparyani
    Now, I know that this may seem like a bad question, in that I could just upgrade to Internet Explorer 8, but I am sticking with IE6 because IE8 removes valuable features, like the ability to save favorites offline, and the integration where a file path typed into IE turns it into a Windows Explorer window and a Web address typed into Windows Explorer turns it into an IE window. I know that Internet Explorer 6 does a really bad job of rendering some pages. I know of the Google Chrome Frame extension that brings Chrome-style rendering into IE, but that will soon be discontinued. So I tried another thing: I know that C:\Windows\System32\mshtml.dll contains the Trident rendering engine used by IE, so I first backed up the original file on Windows XP by renaming it to mshtml-old.dll, then tried to copy in the DLL from a computer running Windows 7 with Internet Explorer 10. I noticed that, after copying, the system had replaced the new DLL with the old one, but left the one I backed up intact. Is there any way I can stop the system from replacing the DLL like that, so that I can transfer IE11's mshtml.dll onto Windows XP and make IE6 render like IE11? I'm looking for an answer that describes how to tweak my system to make IE6 render like IE11 (or IE10), not one that tells me to upgrade IE or install another browser. I don't care how tedious the method is, as long as it works.

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact. The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user.

    In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow, but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/FastCGI. They work fine this way as long as I'm using the extensions, so the FastCGI aspect is working great. My concern now is just with removing those extensions.

    It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is:

    - /foo/bar should look for /foo/bar.php or /foo/bar/index.php
    - /foo/ should look for /foo/index.php

    Is there a simple way to achieve this with Nginx, or should I stick with proxying to Apache?
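
    A sketch of the usual Nginx pattern for extensionless PHP, using try_files with a named-location fallback; the fastcgi_pass address is an assumption, so adjust it to the existing FastCGI setup:

        location / {
            index index.php index.html;
            try_files $uri $uri/ @extensionless;
        }

        location @extensionless {
            rewrite ^(.*)$ $1.php last;
        }

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;    # assumed PHP FastCGI backend
        }

    The flow: /foo/bar misses both $uri and $uri/, falls to @extensionless, gets rewritten to /foo/bar.php, and is handled by the PHP location; /foo/ matches $uri/ and is served via the index directive.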

  • How to install the MySQL Ruby Gem on Ubuntu 9.10?

    - by misbehavens
    I am having a problem installing the Ruby Gem for MySQL. This is the command that I am running:

        sudo gem install mysql

    and this is the output that I'm getting:

        Building native extensions. This could take a while...
        ERROR: Error installing mysql:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lmygcc... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.
        Provided configuration options:
            --with-opt-dir --without-opt-dir --with-opt-include
            --without-opt-include=${opt-dir}/include --with-opt-lib
            --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog
            --srcdir=. --curdir --ruby=/usr/bin/ruby1.8
            --with-mysql-config --without-mysql-config
            --with-mysql-dir --without-mysql-dir --with-mysql-include
            --without-mysql-include=${mysql-dir}/include --with-mysql-lib
            --without-mysql-lib=${mysql-dir}/lib
            --with-mysqlclientlib --without-mysqlclientlib
            --with-mlib --without-mlib --with-zlib --without-zlib
            --with-socketlib --without-socketlib --with-nsllib --without-nsllib
            --with-mygcclib --without-mygcclib
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out

    What do I need to do in order to get this to install?
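
    Every mysql_query() probe failing suggests the MySQL client development files are missing. A likely fix, as a sketch (package name per Ubuntu's repositories of that era):

        sudo apt-get install libmysqlclient-dev   # headers and client library for the native build
        sudo gem install mysql                    # then retry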

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far:

        sudo apt-get install imagemagick libmagickcore-dev

    This did not throw any errors, so I think that ImageMagick is installed fine. Then I tried installing the gem:

        sudo gem install rmagick

    This resulted in the following error:

        ERROR: Error installing rmagick:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... yes
        checking for ImageMagick version >= 6.4.9... yes
        checking for HDRI disabled version of ImageMagick... yes
        checking for stdint.h... yes
        checking for sys/types.h... yes
        checking for wand/MagickWand.h... no
        Can't install RMagick 2.13.1. Can't find MagickWand.h.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for more
        details. You may need configuration options.
        Provided configuration options:
            --with-opt-dir --without-opt-dir --with-opt-include
            --without-opt-include=${opt-dir}/include --with-opt-lib
            --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog
            --srcdir=. --curdir --ruby=/usr/bin/ruby1.8
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out

    What do I need to do to install rmagick on Ubuntu 10.04?
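
    extconf finds Magick-config but not wand/MagickWand.h, which ships in the MagickWand development package rather than libmagickcore-dev. A likely fix, as a sketch (package name assumed from Ubuntu 10.04's repositories):

        sudo apt-get install libmagickwand-dev   # provides wand/MagickWand.h
        sudo gem install rmagick                 # then retry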

  • Can't seem to stop Postfix backscatter

    - by Ian
    I've just migrated to a Postfix system and can't seem to stop the backscatter messages to unknown addresses on the site. I have a file, validrcpt, that lists all the valid emails on the site - about eight of them. Yet when a message is sent to a non-existent address, instead of just dropping it, Postfix is replying with a "Recipient address rejected: User unknown in virtual mailbox table" email. Do I have something set wrong? I've read http://www.postfix.org/BACKSCATTER_README.html but, unless I'm caffeine-deficient, I don't see what's happening; perhaps I'm just too used to my old qmail setup. Here's postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        content_filter = smtp-amavis:[127.0.0.1]:10024
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = ipv4
        local_recipient_maps = hash:/etc/postfix/validrcpt
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/dovecot.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mydestination = localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        policy-spf_time_limit = 3600s
        readme_directory = no
        recipient_bcc_maps = hash:/etc/postfix/recipient_bcc
        recipient_delimiter = +
        relay_recipient_maps = hash:/etc/postfix/relay_recipients
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated,
            reject_unauth_destination, check_policy_service unix:private/policy-spf,
            reject_rbl_client zen.spamhaus.org, reject_rbl_client bl.spamcop.net,
            reject_rbl_client cbl.abuseat.org, check_policy_service inet:127.0.0.1:10023
        smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = reject_unknown_sender_domain
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/dovecot/dovecot.pem
        smtpd_tls_key_file = /etc/dovecot/private/dovecot.pem
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = digitalhit.com
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000
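
    Worth noting: a "User unknown" reject issued during the SMTP session is not backscatter; the server never accepts the message, so it generates no bounce (any bounce email comes from the sending side). Also, local_recipient_maps only governs domains listed in mydestination (here: localhost), so validrcpt never applies to digitalhit.com addresses; those are checked against virtual_mailbox_maps. A sketch of how to confirm (the address is a placeholder):

        # does the address exist in the virtual mailbox table?
        postmap -q someuser@digitalhit.com hash:/etc/postfix/vmaps
        # compare which domains fall under which recipient table
        postconf -n | grep -E 'virtual_mailbox|local_recipient|mydestination'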

  • wget recursively download from pages with lots of links

    - by Shadow
    When using wget with the recursive option turned on, I get an error message when it tries to download a file: it thinks a link is a downloadable file when in reality it should just follow it to get to the page that actually contains the files (or more links to follow) that I want.

        wget -r -l 16 --accept=jpg website.com

    The error message is:

        .... since it should be rejected.

    This usually occurs when the website link it is trying to fetch ends with an SQL statement. The problem doesn't occur, however, when using the very same wget command on that link alone. I want to know how exactly it is trying to fetch the pages. I guess I could always poke around the source, although I don't know how messy the project is. I might also be misunderstanding exactly what "recursive" means in the context of wget: I thought it would run through and travel each link, getting the files with the extension I requested. I posted this over at Stack Overflow, but they sent me over here :) Hoping you guys can help.
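
    On the recursion question: with --accept, wget still downloads HTML pages so it can harvest their links, then deletes them afterwards if they don't match the accept list; only non-HTML leaves are filtered up front, and URLs carrying query strings can trip the suffix check. A sketch of how to watch each fetch/reject decision wget makes:

        # log every URL decision made while recursing
        wget --debug -r -l 16 --accept=jpg website.com 2> wget-debug.log
        grep -iE 'reject|deciding' wget-debug.log | less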

  • Python module: Trouble Installing Bitarray 0.8.0 on Mac OSX 10.7.4

    - by Gabriele
    I'm new here! I have trouble installing bitarray (version 0.8.0) on my Mac OS X 10.7.4. Thanks! ('gcc' does not seem to be the problem.)

        Last login: Sun Sep 9 22:24:25 on ttys000
        host-001:~ gabriele$ gcc -version
        i686-apple-darwin11-llvm-gcc-4.2: no input files
        host-001:~ gabriele$ cd /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/bitarray-0.8.0/
        host-001:bitarray-0.8.0 gabriele$ python2.7 setup.py install
        running install
        running bdist_egg
        running egg_info
        creating bitarray.egg-info
        writing bitarray.egg-info/PKG-INFO
        writing top-level names to bitarray.egg-info/top_level.txt
        writing dependency_links to bitarray.egg-info/dependency_links.txt
        writing manifest file 'bitarray.egg-info/SOURCES.txt'
        reading manifest file 'bitarray.egg-info/SOURCES.txt'
        writing manifest file 'bitarray.egg-info/SOURCES.txt'
        installing library code to build/bdist.macosx-10.6-intel/egg
        running install_lib
        running build_py
        creating build
        creating build/lib.macosx-10.6-intel-2.7
        creating build/lib.macosx-10.6-intel-2.7/bitarray
        copying bitarray/__init__.py -> build/lib.macosx-10.6-intel-2.7/bitarray
        copying bitarray/test_bitarray.py -> build/lib.macosx-10.6-intel-2.7/bitarray
        running build_ext
        building 'bitarray._bitarray' extension
        creating build/temp.macosx-10.6-intel-2.7
        creating build/temp.macosx-10.6-intel-2.7/bitarray
        gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bitarray/_bitarray.c -o build/temp.macosx-10.6-intel-2.7/bitarray/_bitarray.o
        unable to execute gcc-4.2: No such file or directory
        error: command 'gcc-4.2' failed with exit status 1
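
    Despite the gcc check at the top, the failure is that distutils insists on gcc-4.2, a binary this Xcode no longer ships separately. A sketch of two common workarounds (the symlink path is an assumption):

        # point distutils at the compiler that actually exists...
        CC=gcc python2.7 setup.py install
        # ...or give it the name it is asking for
        sudo ln -s "$(which gcc)" /usr/bin/gcc-4.2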

  • Making libmagic/file detect .docx files

    - by Jonatan Littke
    As seen elsewhere, docx, xlsx and pptx files are ZIPs. When they are uploaded to my web application, file (via libmagic and python-magic) detects them as ZIP. I store the contents of the file as a blob in the database, but naturally I don't want to trust the user about what file type this is, so I would like to trust file and automatically generate a filename during download. I know one can modify /etc/magic, but the format (magic(5)) is way too complicated for me. I found a bug report on the issue at Debian bugs, but since it's from 2008 it doesn't seem likely to be fixed any time soon. I guess my only other alternative is to indeed trust the user (but still store the contents as a blob) and only check the file extension based on the file name. This way I can disallow some extensions and allow others, and when the user re-downloads his file, he can have it the way he uploaded it. But this solution is insecure if the file is shared with others, since you can simply rename the file to allow uploading it. Any ideas? Lastly, I found a list of magic numbers for docx etc., but I'm unable to convert these into the magic(5) format.
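
    Since OOXML containers are just ZIPs with a telltale structure ([Content_Types].xml at the root plus a word/, xl/ or ppt/ tree), one pragmatic check that sidesteps magic(5) entirely is to inspect the archive listing. A sketch (the filename is a placeholder):

        unzip -l upload.bin | grep -q '\[Content_Types\]\.xml' || echo "not OOXML"
        # then distinguish the flavour by its top-level directory:
        unzip -l upload.bin | grep -qE ' word/' && echo docx
        unzip -l upload.bin | grep -qE ' xl/'   && echo xlsx
        unzip -l upload.bin | grep -qE ' ppt/'  && echo pptx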

  • Code to update HyperV Export file

    - by Andy Schneider
    I am using the Hyper-V module from CodePlex to do a "config only" export from a 2008 R2 Hyper-V server. In order to import the configuration on another Hyper-V server, I need to edit the value of CopyVmStorage in the EXP file, which is an XML file. I wrote the following PowerShell code to do the update (the variable $existing is the existing .exp file):

        $xml = [xml](get-content $existing)
        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) {$node.Value = 'TRUE'}
        $xml.Save($existing)

    This code makes the correct changes to the XML. However, when I go to import the file on the Hyper-V server, I get an error saying the file format is incorrect. I am wondering if the encoding of the file is incorrect, or if there is something else going on. If I edit the file manually in WordPad, it imports without an issue. The filename is a GUID with a .exp extension, and it appears the file name is too long for Notepad to open; Notepad throws an error trying to open the file, which is why I went with WordPad. I have noticed that the file updated with PowerShell comes out formatted, whereas the raw file is XML all bunched together with no whitespace. Any ideas on what "file format" means in this Hyper-V error message, and how I might use my code to automate this change in the XML without changing the file format?
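
    Both differences the poster noticed are side effects of $xml.Save(): it re-indents the document and writes its own choice of encoding. A sketch that preserves the original whitespace and pins the encoding explicitly; the UTF-16 choice ([System.Text.Encoding]::Unicode) is an assumption, so match whatever a hex viewer shows the untouched .exp file using:

        $xml = New-Object System.Xml.XmlDocument
        $xml.PreserveWhitespace = $true      # keep the file's bunched-up formatting
        $xml.Load($existing)
        foreach ($node in $xml.SelectNodes('//PROPERTY[@NAME ="CopyVmStorage"]')) {$node.Value = 'TRUE'}
        $writer = New-Object System.Xml.XmlTextWriter($existing, [System.Text.Encoding]::Unicode)
        $xml.Save($writer)
        $writer.Close()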

  • Is Ksplice production-ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia:

        Ksplice is a free and open source extension of the Linux kernel which
        allows system administrators to apply security patches to a running
        kernel without having to reboot the operating system.

    and:

        Ksplice can, without restarting the kernel, apply any source code patch
        that only needs to modify the kernel code. Unlike other hot update
        systems, Ksplice takes as input only a unified diff and the original
        kernel source code, and it updates the running kernel correctly, with
        no further human assistance required. Additionally, taking advantage
        of Ksplice does not require any preparation before the system is
        originally booted (the running kernel does not need to have been
        specially compiled, for example). In order to generate an update,
        Ksplice must determine what code within the kernel has been changed by
        the source code patch.

    So a few questions: How has the stability been? Any odd issues encountered with its "rebootless live patching" of the kernel? Kernel panics or horror stories?

    I have been running it on a few test systems and so far it has worked as advertised, but I am interested in what other sysadmins' experiences with Ksplice have been before going all in and deploying it on our production servers. So, anybody using Ksplice in production?

    Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going...

    - If you are aware of Ksplice, is there a reason you are not using it?
    - Do you feel it's still too bleeding-edge, unproven, or untested?
    - Does Ksplice not fit well within your current patch-management system?
    - Do you hate having systems with long (and secure) uptimes? ;-)

  • Why is my apache2, mod_fcgid, php configuration causing 100% cpu usage?

    - by Scott Lundgren
    A page load makes a quick initial connection, then hangs about 10 seconds before the page renders. When the server load goes up I start watching top, and I see that both CPUs get pegged at times to 100% by between 4 and 8 php-cgi processes. My theory is that since I never see RAM usage go above 50%, Apache is able to handle the requests coming in but is queueing them for PHP to process. What is wrong with my mod_fcgid/PHP configuration?

    RHEL 5.4, 2 Xeon E5420s @ 2.50 GHz, 4 GB RAM

    Apache 2.2.3:

        Timeout 30
        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 5
        <IfModule worker.c>
            StartServers          2
            MaxClients          300
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

    mod_fcgid 2.2.10:

        LoadModule fcgid_module modules/mod_fcgid.so
        <IfModule !mod_fastcgi.c>
            AddHandler fcgid-script fcg fcgi fpl php
        </IfModule>
        SocketPath run/mod_fcgid
        SharememPath run/mod_fcgid/fcgid_shm
        DefaultInitEnv PHPRC "/etc/"
        FCGIWrapper /usr/bin/php-cgi .php
        MaxRequestsPerProcess 1500
        MaxProcessCount 20
        IPCCommTimeout 240
        IdleTimeout 240

    APC 3.0.19:

        extension = apc.so
        apc.enabled=1
        apc.shm_segments=1
        apc.optimization=0
        apc.shm_size=32
        apc.ttl=7200

    The APC cache is 43% used with a 99% hit rate.
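
    One mismatch worth checking in this setup, as a sketch: php-cgi exits on its own after PHP_FCGI_MAX_REQUESTS requests (default 500), while mod_fcgid expects each process to survive MaxRequestsPerProcess (here 1500) requests; keeping PHP's limit at or above mod_fcgid's avoids processes dying mid-stream and constant respawn churn. Placement alongside the existing DefaultInitEnv line is an assumption:

        DefaultInitEnv PHP_FCGI_MAX_REQUESTS 1500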
