Search Results

Search found 11924 results on 477 pages for 'openoffice org'.

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats use different compression methods and come with their own sets of advantages and disadvantages. JPEG, for instance, is suited to real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior for schematic figures. Picking the right format can be a chore when working with a large number of files, which is why I would love to find a way to automate it.

    A little background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of the original while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch-converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to PNG. Converting those would result in insignificant size reductions and sometimes even increases in file size - at least that's what my test runs showed.

    What we can take from this is that file size after compression can serve as an indicator of which format suits a particular image best. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:

        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to \
            --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' |
        while read file; do
            mogrify -format jpg -quality 92 "$file"
        done

    The advanced version of the script would have to be able to:

        - handle spaces in file names and directory names
        - preserve the original file names
        - flatten PNG images if an alpha value is set
        - compare the file size between the temporary converted image and its original
        - determine if the difference is greater than a given percentage
        - act accordingly

    The actual conversion could be done with ImageMagick tools:

        convert -quality 92 -flatten -background white file.png file.jpg

    Unfortunately, my bash skills aren't anywhere near advanced enough to turn the scheme above into an actual script, but I am sure many of you can (see the sketch right below). My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
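
    A minimal sketch of the size-comparison step, assuming ImageMagick and GNU coreutils are available. The 20% threshold, the directory argument, and the temporary-file naming are all placeholders to adjust, not part of the original question:

        #!/bin/bash
        # Sketch only: convert each PNG under the given directory to a temporary
        # JPEG and keep whichever file is meaningfully smaller.
        THRESHOLD=20   # keep the JPEG only if it is at least 20% smaller (assumption)

        find "$1" -type f -name '*.png' -print0 | while IFS= read -r -d '' png; do
            tmp="${png%.png}.jpg"
            convert -quality 92 -flatten -background white "$png" "$tmp" || continue
            png_size=$(stat -c%s "$png")
            jpg_size=$(stat -c%s "$tmp")
            # integer percentage saved by the JPEG version
            saved=$(( (png_size - jpg_size) * 100 / png_size ))
            if [ "$saved" -ge "$THRESHOLD" ]; then
                rm -- "$png"        # JPEG wins: drop the PNG
            else
                rm -- "$tmp"        # PNG wins: drop the temporary JPEG
            fi
        done

    The find -print0 / read -d '' pairing is what handles spaces in file and directory names; the same loop body could be called from the inotifywait pipeline above instead of find.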

    Read the article

  • TrueCrypt drive letter not available

    - by Tono Nam
    With C# or a batch file I mount a TrueCrypt volume located at A:\volumeTrueCrypt.tc. With C# I do:

        static void Main(string[] args)
        {
            var p = Process.Start(
                fileName: @"C:\Program Files\TrueCrypt\TrueCrypt.exe",
                arguments: @"/v a:\volumetruecrypt.tc /lw /a /p truecrypt"
            );
            p.WaitForExit();
        }

    The alternative is to run the command on the command line as:

        C:\Windows\system32>"C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lw /a /p truecrypt

    Either way I get the error. Why do I get that error? I was able to run that command the first time. The moment I dismounted the volume and tried to mount it again, I got the error. I know that drive letter W is available because it shows as an available letter in TrueCrypt if I open it manually. If I then click the mount button and type the password truecrypt (truecrypt is the password), it successfully mounts on drive W. Why am I not able to mount it from the command line!? If I change the drive letter on the command line it works, but I want to use drive W. In other words, executing

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lz /a /p truecrypt

    will successfully mount that volume on drive Z, but I do not want to mount it on drive Z; I want to mount it on drive W. The first time I ran the batch it ran fine. Also, if I restart my computer, I believe it should work again. More info on how to use TrueCrypt through the command line can be found at: http://www.truecrypt.org/docs/?s=command-line-usage

    Edit: I was also investigating when this error occurs. To reproduce it, follow these steps:

    1) Execute the command (note the /q argument at the end for quiet):

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt /q

    where "C...TrueCrypt.exe" is the location of TrueCrypt, /v "path" is the location of the volume, /ln means drive letter N, /p truecrypt means the password is "truecrypt", and /q means execute in quiet mode (do not show a window). Note I am mounting to drive letter N.

    2) Now the volume should be mounted.

    3) Open TrueCrypt and manually dismount that volume (without using the command line).

    4) Attempt to run the same command line (without the /q, so you see the error):

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt

    5) An error should show up.

    So the problem occurs when I manually dismount the volume. If I dismount it from the command line I get no errors. But I think this is a bug in TrueCrypt.
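
    For what it's worth, a hedged workaround sketch: since command-line dismounts don't trigger the error, always release the letter the same way it was claimed. The /d (dismount) switch is documented on the TrueCrypt command-line page linked above; the paths are the ones from the question:

        rem sketch of a paired mount/dismount that avoids the manual-dismount path
        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lw /a /p truecrypt /q
        rem ... work with drive W here ...
        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /d w /q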

    Read the article

  • LaunchDaemon causing Lion to hang on boot

    - by Brett
    I've got a Mac Mini 2011, which I intend to use for a few tasks such as Plex and running a few VM's. I've installed virtualbox, along with XAMPP and phpvirtualbox, which all worked fine. However, getting this to run on startup is proving a real PITA! I'm at the moment trying to get vboxwebsrv running on boot. I've created a launchd plist within /Library/LaunchDaemons to run it and it works fine... well, sort of. Lion when booting will show the spinning wheel and stop, never showing a GUI - however if I remote in via screen sharing or SSH, I can login fine and see that vboxwebsrv has launched successfully. Setting this plist to disabled makes Lion boot up fine again. Initially I thought it was due to it staying open, so tried to add -b which causes it to run in the background; this just caused launchd to constantly spawn new processes and didn't even fix my problem of Lion being stuck at the spinning wheel. Does anyone have any ideas? I'm losing my mind here! PLIST:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Disabled</key>
            <false/>
            <key>KeepAlive</key>
            <false/>
            <key>UserName</key>
            <string>vbox</string>
            <key>RunAtLoad</key>
            <true/>
            <key>OnDemand</key>
            <false/>
            <key>Label</key>
            <string>org.virtualbox.vboxwebsvc</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Applications/VirtualBox.app/Contents/MacOS/vboxwebsrv</string>
            </array>
        </dict>
        </plist>

    Read the article

  • Nameserver not resolving or domain not pingable [closed]

    - by Ricky
    Sorry, if anyone can think of a better title please change it! I want to host my own websites from home. For testing purposes, I have a virtual machine running a trial version of Windows Server 2008 Enterprise. Note I currently run a VPS and host my own websites, but due to a nice speed upgrade on our line I now want to host from home. I have several domains, but I wanted to test with one: rickyoleary.com. Our ISP does not provide static IP addresses unless we have a business account, so I've been looking at no-ip.com. I admit my networking isn't the best, hence this question, but I've been bashing my head all day on this one.

    I created a host name, muffinbubble.no-ip.org, which runs on IP 86.148.124.15. I've set up IIS on the server with a simple test page. I've then forwarded port 80 traffic from the router, and from what I can see, it's working. If I access my website (I was unable to link to this for some reason, so please copy and paste): http://86.148.124.15/ - I see my test page.

    So the next step was to create my nameservers. This domain is with namecheap.com, so I created my nameservers, ns1.rickyoleary.com and ns2.rickyoleary.com. Both point to the same IP (and yes, that will be changed after testing), the same IP as above: 86.148.124.15. On the server itself I have set up DNS entries which I believe to be correct, and added rickyoleary.com and www.rickyoleary.com in the host headers (or bindings) in IIS 7.0. If I look up my domain, rickyoleary.com shows ns1.rickyoleary.com and ns2.rickyoleary.com as the nameservers. I then tried to use just-ping.com on my nameserver ns1.rickyoleary.com. I get 100% packets lost, but the correct IP address is returned (I'm guessing the router does not allow pings, but is still accessible...). I get no response when pinging rickyoleary.com. Here are the problems:

        - I cannot ping ns1.rickyoleary.com or ns2.rickyoleary.com from a command prompt. I'm not sure if this is an issue.
        - When I added the nameservers in Windows Server 2008 and clicked 'resolve', a message box displays stating "No such host is known".
        - I cannot ping rickyoleary.com.
        - rickyoleary.com is not showing my test page on my server.

    Now - please note, I've waited around 6 hours for propagation. From my experience, although you're told to wait 24 - 48 hours, the changes are normally pretty quick, so perhaps I'm being impatient or naive to think it should all be working fine until then. I would really appreciate some help here. Thanks.
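
    When debugging a delegation like this, it can help to query each link in the chain directly rather than waiting on propagation. A sketch using dig from any machine that has it (the IP below is the one from the question):

        # does the registry actually delegate rickyoleary.com to ns1/ns2 yet?
        dig +trace rickyoleary.com NS

        # does the server behind the NAT answer authoritatively when asked directly?
        dig @86.148.124.15 rickyoleary.com A

        # is the nameserver host itself resolvable at all?
        dig ns1.rickyoleary.com A

    If the direct query against 86.148.124.15 times out, the router is likely not forwarding port 53 (UDP and TCP), which would explain every symptom on the list regardless of propagation.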

    Read the article

  • Can't detect wireless networks, running ubuntu 9.10 on eeepc 1000he

    - by David Brown
    Hi, I am running Ubuntu 9.10 on an ASUS Eee PC 1000HE, and my wireless card, a RaLink RT2860, is failing to recognize any wireless networks. It was working fine this morning, until I accidentally hit Fn+F2, disabling the wireless card. I hit Fn+F2 again to re-enable it, but it no longer detects any wireless networks (and other computers in the household do recognize them). Any help at all will be greatly appreciated! I googled quite a bit and talked to a human about this but could not fix it. We tried dhclient ra0, which returned:

        There is already a pid file /var/run/dhclient.pid with pid 2438
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.2
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        Listening on LPF/ra0/00:25:d3:13:f4:20
        Sending on   LPF/ra0/00:25:d3:13:f4:20
        Sending on   Socket/fallback
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 6
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 9
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.

    Other data: iwconfig returns (other stuff here...)

        ra0    RT2860 Wireless  ESSID:""  Nickname:"RT2860STA"
               Mode:Auto  Frequency=2.412 GHz  Bit Rate=1 Mb/s
               RTS thr:off  Fragment thr:off
               Link Quality=10/100  Signal level:0 dBm  Noise level:-87 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    ifconfig returns (other stuff here...)

        ra0    Link encap:Ethernet  HWaddr 00:25:d3:13:f4:20
               inet6 addr: fe80::225:d3ff:fe13:f420/64 Scope:Link
               UP BROADCAST MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:583 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
               Interrupt:19

        ra0:avahi Link encap:Ethernet  HWaddr 00:25:d3:13:f4:20
               inet addr:169.254.7.117  Bcast:169.254.255.255  Mask:255.255.0.0
               UP BROADCAST MULTICAST  MTU:1500  Metric:1
               Interrupt:19
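
    Since the symptoms began with the Fn+F2 rfkill toggle, one hedged avenue is to check for a lingering soft/hard block and then reload the driver. A sketch only; the module name rt2860sta is an assumption based on the chipset and may differ on this kernel:

        rfkill list                    # any "Soft blocked: yes" or "Hard blocked: yes"?
        sudo rfkill unblock all        # clear a lingering soft block from the toggle
        sudo modprobe -r rt2860sta     # unload the RaLink driver (module name assumed)
        sudo modprobe rt2860sta        # reload it
        sudo iwlist ra0 scan           # then see whether networks reappear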

    Read the article

  • Bind9 virtual subdomains

    - by Steffan
    I am trying to set up virtual subdomains using Bind9, following this tutorial: http://groups.drupal.org/node/16862 - which I've completed. Basically this means setting up the zone and modifying the resolv.conf file and the named.conf.local file. I've gotten everything to work: from my server I am able to ping mydomain.com and test.mydomain.com, and when I do a dig I get the following:

        ; <<>> DiG 9.7.0-P1 <<>> test.mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32606
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;test.mydomain.com.        IN    A

        ;; ANSWER SECTION:
        test.mydomain.com.    86400    IN    A    174.###.###.#

        ;; AUTHORITY SECTION:
        mydomain.com.         86400    IN    NS   mydomain.com.

        ;; ADDITIONAL SECTION:
        mydomain.com.         86400    IN    A    174.###.###.#

        ;; Query time: 0 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Wed Jan 19 21:06:01 2011
        ;; MSG SIZE  rcvd: 86

    So it looks like everything is working. However, when I try test.mydomain.com in the browser, expecting it (for now) to default to mydomain.com, it does not work and I get a "server not found" page in Firefox. I did read elsewhere that in your virtual hosts file you also need to set up a *.mydomain.com alias, but that didn't fix anything. Any other information that I could provide to help troubleshoot, or any troubleshooting suggestions? I am using Ubuntu 10.4 with a typical LAMP setup. The only other things installed on the server are Bind9 and an FTP client.
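
    For reference, the usual two halves of a catch-all subdomain setup are a wildcard record in the zone and a matching ServerAlias in Apache. A hedged sketch - mydomain.com and the address are placeholders from the question, and the bare A record for the zone apex is assumed to already exist:

        ; zone file fragment: answer A queries for any subdomain
        *.mydomain.com.   IN   A   174.0.0.1

        # apache vhost fragment: accept those Host headers
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias *.mydomain.com
            DocumentRoot /var/www
        </VirtualHost>

    Note that the dig above was answered by 127.0.0.1 - the browser failure may simply mean the client machine is not using this Bind9 server as its resolver, so checking /etc/resolv.conf (or the registrar's delegation) on the querying machine is worth a look too.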

    Read the article

  • libvirt qemu/kvm migration problem

    - by Panda
    I am using KVM and libvirt on my Dell server. Now I am trying to migrate one virtual machine from one physical server to another. However, it fails every time. In virsh on physicalServer1, I typed:

        virsh # migrate virtualmachine1 qemu+ssh://username@physicalServer2/system
        error: operation failed: migration to 'tcp:physicalServer2:49163' failed: migration failed

    Then I searched the FAQ section on libvirt.org. It says:

        error: operation failed: migration to '...' failed: migration failed
        This is an error often encountered when trying to migrate with QEMU/KVM.
        This typically happens with plain migration, when the source VM cannot
        connect to the destination host. You will want to make sure your hosts
        are properly configured for migration (see the migration section of this FAQ)

    I managed to ssh to physicalServer2 from a shell on virtualmachine1, so the highlighted explanation does not account for my failure. I also opened ports on physicalServer2; iptables -L shows the following:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
        ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
        ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootps
        ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:bootps
        ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
        ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
        ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootps
        ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:bootps
        ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpts:49152:49215

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     all  --  anywhere             192.168.122.0/24    state RELATED,ESTABLISHED
        ACCEPT     all  --  192.168.122.0/24     anywhere
        ACCEPT     all  --  anywhere             anywhere
        REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
        REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
        ACCEPT     all  --  anywhere             192.168.122.0/24    state RELATED,ESTABLISHED
        ACCEPT     all  --  192.168.122.0/24     anywhere
        ACCEPT     all  --  anywhere             anywhere
        REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
        REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer2:

        2011-05-06 13:37:30.708: starting up
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none
        /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 2048 -smp 1,sockets=1,cores=1,threads=1
        -name openjudge-test -uuid a8c704bc-a4f9-90db-3e57-40e60b00aac1 -nodefconfig -nodefaults
        -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/virtualmachine1.monitor,server,nowait
        -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c
        -drive file=/media/nfs/virtualmachine1.img,if=none,id=drive-ide0-0-0,format=raw
        -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
        -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
        -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
        -netdev tap,fd=20,id=hostnet0
        -device rtl8139,netdev=hostnet0,id=net0,mac=00:16:36:8a:22:a0,bus=pci.0,addr=0x3
        -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
        -usb -vnc 127.0.0.1:2 -vga cirrus -incoming tcp:0.0.0.0:49163
        -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/0
        2011-05-06 13:37:30.915: shutting down

    The /var/log/libvirt/qemu/virtualmachine1.log on physicalServer1 is empty. Both physical servers run Ubuntu 11.04. libvirt and KVM were installed via apt-get. The libvirt version is 0.8.8.
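
    A hedged checklist for the "source cannot connect to destination" case the FAQ describes - run from physicalServer1, using the hostname and the port range that appear in the error and in the iptables rule above:

        # can the source host resolve and reach the destination by the exact name libvirt uses?
        ping -c 1 physicalServer2

        # is the migration port range actually reachable from the source host?
        nc -zv physicalServer2 49152-49215

        # then retry the migration
        virsh migrate --live virtualmachine1 qemu+ssh://username@physicalServer2/system

    The destination log is telling: qemu started with -incoming tcp:0.0.0.0:49163 and shut down 200ms later, which usually means the source side never connected - a name-resolution or firewall issue between the two hosts (not from inside the guest) fits that pattern.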

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine: another instance was installed on the second server without problems, and MySQL (running on the same machine as Redmine) was configured as master-master replication. Because they are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice. /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun  5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============

        Online: [ node1 node2 ]

        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms

        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun  5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures

    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd is run with the configuration file below (on node1), /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
         !nic_tag
         no passive-interface lo:0
         network lo:0
         distribute-list private out lo:0
         distribute-list private in lo:0
         !metric_tag
         redistribute connected metric 3
         !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any
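
    One way to see whether the VIP is actually being advertised beyond node1 is to watch for the RIP announcements themselves. A hedged sketch - interface names are the ones from the question, and it assumes tcpdump plus quagga's vtysh are available on the relevant boxes:

        # on node1: are RIPv2 updates for 192.168.6.8/32 actually leaving eth1?
        tcpdump -n -i eth1 udp port 520

        # on the router between the two subnets: was the host route learned?
        vtysh -c 'show ip rip'

    If no RIP packets appear on the wire, or the router between 192.168.3.x and 192.168.6.x is not running RIP at all, the /32 route never propagates and node2 has no path back to the VIP - which matches the observed ping behavior.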

    Read the article

  • Configure Postfix to Port other than 25

    - by bwheeler96
    I've done quite a bit of googling on how to reconfigure Postfix to listen on a different port, but I still can't find the line(s) people keep talking about in my master.cf. I'm using OS X Mountain Lion, and my ISP blocks traffic both ways on port 25. People have said to look for a line that says

        smtp      inet  n       -       n       -       -       smtpd

    but I can't find it. This is (what I believe to be) unmodified:

        # ==== Begin auto-generated section ========================================
        # This section of the master.cf file is auto-generated by the Server Admin
        # Mail backend plugin whenever mails settings are modified.
        smtp        inet  n  -  n  -  1  postscreen
        smtpd       pass  -  -  n  -  -  smtpd
        dnsblog     unix  -  -  n  -  0  dnsblog
        tlsproxy    unix  -  -  n  -  0  tlsproxy
        submission  inet  n  -  n  -  -  smtpd -o smtpd_tls_security_level=encrypt
        smtp        unix  -  -  n  -  -  smtp
        # === End auto-generated section ===========================================
        # Modern SMTP clients communicate securely over port 25 using the STARTTLS command.
        # Some older clients, such as Outlook 2000 and its predecessors, do not properly
        # support this command and instead assume a preconfigured secure connection
        # on port 465. This was sometimes called "smtps", but such usage was never
        # approved by the IANA and therefore conflicts with another, legitimate assignment.
        # For more details about managing secure SMTP connections with postfix, please see:
        # http://www.postfix.org/TLS_README.html
        # To read more about configuring secure connections with Outlook 2000, please read:
        # http://support.microsoft.com/default.aspx?scid=kb;en-us;Q307772
        # Apple does not support the use of port 465 for this purpose.
        # After determining that connecting clients do require this behavior, you may choose
        # to manually enable support for these older clients by uncommenting the following
        # four lines.
        #465        inet  n  -  n  -  -  smtpd
        #  -o smtpd_tls_wrappermode=yes
        #  -o smtpd_sasl_auth_enable=yes
        #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
        #  -o milter_macro_daemon_name=ORIGINATING
        #628        inet  n  -  n  -  -  smtp
        pickup      fifo  n  -  n  60     1  pickup
        cleanup     unix  n  -  n  -      0  cleanup
        qmgr        fifo  n  -  n  300    1  qmgr
        #qmgr       fifo  n  -  n  300    1  oqmgr
        tlsmgr      unix  -  -  n  1000?  1  tlsmgr
        rewrite     unix  -  -  n  -      -  trivial-rewrite
        bounce      unix  -  -  n  -      0  bounce
        defer       unix  -  -  n  -      0  bounce
        trace       unix  -  -  n  -      0  bounce
        verify      unix  -  -  n  -      1  verify
        sacl-cache  unix  -  -  n  -      1  sacl-cache
        flush       unix  n  -  n  1000?  0  flush
        proxymap    unix  -  -  n  -      -  proxymap
        proxywrite  unix  -  -  n  -      1  proxymap
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay       unix  -  -  n  -      -  smtp -o smtp_fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq       unix  n  -  n  -      -  showq
        error       unix  -  -  n  -      -  error
        retry       unix  -  -  n  -      -  error
        discard     unix  -  -  n  -      -  discard
        local       unix  -  n  n  -      -  local
        virtual     unix  -  n  n  -      -  virtual
        lmtp        unix  -  -  n  -      -  lmtp
        anvil       unix  -  -  n  -      1  anvil
        scache      unix  -  -  n  -      1  scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
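
    In this auto-generated layout the port-25 listener is the postscreen line (smtp inet ... postscreen) rather than the classic smtpd one people describe. A hedged sketch of an additional listener on an unblocked port - 2525 is an arbitrary choice, and the column layout follows the master.cf(5) format of the surrounding file:

        # extra SMTP listener on port 2525, bypassing the ISP's port-25 block
        2525        inet  n  -  n  -  -  smtpd

    The first column of a master.cf entry may be a bare port number instead of a service name, so adding a line like this (and reloading Postfix) opens the new port without touching the existing services.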

    Read the article

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have made up the following script for a quick all-in-one install, and it raises an error. Here is the installation script (a basic one with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box):

        # Nginx sources
        cd /tmp
        wget http://nginx.org/download/nginx-0.7.66.tar.gz
        tar xzf nginx-0.7.66.tar.gz
        cd nginx-0.7.66

        # openssl, required for SSL/TLS
        sudo apt-get install openssl
        sudo apt-get install libssl-dev

        # Compilation stuff
        sudo apt-get install zlib1g-dev

        # Ruby interpreter 1.9.1
        sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 \
            rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1

        # Make sure the default ruby uses version 1.9.1
        sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 \
            --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz \
            --slave /usr/bin/ri ri /usr/bin/ri1.9.1 \
            --slave /usr/bin/irb irb /usr/bin/irb1.9.1 \
            --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
        sudo update-alternatives --config ruby

        # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
        sudo gem install passenger

        # Activate Passenger in nginx, select option 2 to use the nginx sources downloaded above
        cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
        sudo ./passenger-install-nginx-module

    And this is the error message I got:

        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c
        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function \
            -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event \
            -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http \
            -I src/http/modules -I src/mail \
            -o objs/addon/nginx/StaticContentHandler.o \
            /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function 'passenger_static_content_handler':
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: 'ngx_http_request_t' has no member named 'zero_in_uri'
        make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1
        make[1]: Leaving directory `/tmp/nginx-0.7.66'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html

    I do not understand the reason for this error. Is this a compatibility problem? Hope you have some clues :) Thanks a lot, Luc
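
    The failing line references zero_in_uri, a field that later nginx releases removed from ngx_http_request_t, so this does look like a Passenger-2.2.x/nginx version mismatch. Two hedged ways out - the --auto-download option is documented by the installer itself, while the 0.7.61 tarball is only an assumption about a version that still had the field, not a tested pairing:

        # let the installer fetch and build an nginx it knows how to patch (option 1 in its menu)
        sudo ./passenger-install-nginx-module --auto-download

        # or retry against an older source tree
        cd /tmp && wget http://nginx.org/download/nginx-0.7.61.tar.gz

    Upgrading to a newer Passenger release that already supports nginx 0.7.66 would be the third route.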

    Read the article

  • fink hangs while compiling Octave on OS X

    - by Mark Bennett
    Disclaimer: I'm totally new to Fink. I'm trying to install Octave (the open-source Matlab clone) on Mountain Lion using Fink, following the instructions at http://wiki.octave.org/Octave_for_MacOS_X - it's a new installation of Fink, and I've also installed X11 per the instructions. I'm using this command (which I believe is correct, since everything's 64-bit now):

        sudo fink install octave-atlas

    It hangs after a while, showing this as its last output:

        ...
        Setting up xft2-dev (2.2.0-2) ...
        Clearing dependency_libs of .la files being installed
        Reading buildlock packages...
        All buildlocks accounted for.
        /sw/bin/dpkg-lockwait -i /sw/fink/dists/stable/main/binary-darwin-x86_64/x11/xinitrc_1.5-1_darwin-x86_64.deb
        (Reading database ... 14871 files and directories currently installed.)
        Preparing to replace xinitrc 1.5-1 (using .../xinitrc_1.5-1_darwin-x86_64.deb) ...
        Unpacking replacement xinitrc ...
        Setting up xinitrc (1.5-1) ...

    I did notice the process name in the terminal's tab was "sort", so the second time I hit this I tried Ctrl-D (end-of-file), and that did seem to unstick it. I'm wondering if there's some malformed command and sort was trying to read from stdin? Questions:

        1. Has anybody else seen this? Google wasn't helpful.
        2. Fink outputs a LOT of warnings and errors... is that normal?
        3. Wondering if anybody's gotten Fink to compile Octave on Mountain Lion specifically? And whether they used just "octave" or "octave-atlas". Or if you got it working with MacPorts or Homebrew?
        4. Later, Fink failed with "Failed: phase compiling: gnuplot-minimal-4.6.1-1 failed". I haven't started googling that yet... but wondering if anybody's seen that?
        5. Generally looking for anybody who's got Octave running on Mountain Lion with any of the package managers. I've been at this for a couple of days ;-)

    I also tried MacPorts but got other errors with Octave, and from reading online it looks like Homebrew has issues with Octave on Mountain Lion as well.

    Read the article

  • apache mod_cache in v2.2 - enable cache based on url

    - by Janning
    We are using apache2.2 as a front-end server with application servers as reverse proxies behind it. We are using mod_cache for some images and enabled it like this:

        <IfModule mod_disk_cache.c>
            CacheEnable disk /
            CacheRoot /var/cache/apache2/mod_disk_cache
            CacheIgnoreCacheControl On
            CacheMaxFileSize 2500000
            CacheIgnoreURLSessionIdentifiers jsessionid
            CacheIgnoreHeaders Set-Cookie
        </IfModule>

    The image URLs vary completely and have no common start pattern, but they all end in ".png". That's why we used the root in CacheEnable /. If not served from the cache, the request is forwarded to an application server via reverse proxy. So far so good, the cache is working fine. But I really only need to cache requests ending in ".png". My above configuration still works, because my application servers send an appropriate Cache-Control: no-cache header on the way back to Apache. So most pages send a no-cache header back and don't get cached at all. My ".png" responses don't send a Cache-Control header, so Apache ends up caching only the ".png" URLs. Fine. But when a new request enters Apache, Apache does not know that only .png requests should be considered, so every request checks a file on disk (recorded with strace -e trace=file -p pid):

        [pid 19063] open("/var/cache/apache2/mod_disk_cache/zK/q8/Kd/g6OIv@woJRC_ba_A.header", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)

    I don't want Apache going to disk on every request, as the majority of requests are not cached at all. And we have up to 10,000 requests/s at peak time. Sometimes our read IO wait spikes. It is not getting really slow, but we are trying to tweak it for better performance. In Apache 2.4 you can say:

        <LocationMatch .png$>
            CacheEnable disk
        </LocationMatch>

    This is not possible in 2.2, and as I see no backports for Debian I am not going to upgrade. So I tried to tweak apache2.2 to follow my rules:

        <IfModule mod_disk_cache.c>
            SetEnvIf Request_URI "\.png$" image
            RequestHeader unset Cache-Control
            RequestHeader append Cache-Control no-cache env=!image
            CacheEnable disk /
            CacheRoot /var/cache/apache2/mod_disk_cache
            #CacheIgnoreCacheControl on
            CacheMaxFileSize 2500000
            CacheIgnoreURLSessionIdentifiers jsessionid
            CacheIgnoreHeaders Set-Cookie
        </IfModule>

    The idea is to let Apache decide whether to serve a request from the cache based on the Cache-Control header (CacheIgnoreCacheControl defaults to off), and beforehand simply set a request header based on the request: if it is not an image request, set a Cache-Control header so it bypasses the cache entirely. This does not work, I guess because of late processing of the RequestHeader directive; see https://httpd.apache.org/docs/2.2/mod/mod_headers.html#early - I can't add early processing, as the "early" keyword can't be combined with a conditional "env=!image". I can't change the URLs requesting the images, and I know there are of course other solutions. But I am only interested in configuring apache2.2 to reach my goal. Does anybody have an idea how to achieve this?

    Read the article

  • Why doesn't Apache start from xampp control panel after changes to vhosts config?

    - by Grafica
    I'm running XAMPP on my local server and want to host multiple sites, so I changed the httpd-vhosts.conf file. Will somebody let me know if there is something wrong with my config? Apache was running while I had only one site in the config, but after I added another site and stopped Apache, I'm not able to restart it.

        #
        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.
        #
        # Use name-based virtual hosting.
        #
        ##NameVirtualHost *:80
        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        #
        ##<VirtualHost *:80>
        ##ServerAdmin webmaster@dummy-host.localhost
        ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost"
        ##ServerName dummy-host.localhost
        ##ServerAlias www.dummy-host.localhost
        ##ErrorLog "logs/dummy-host.localhost-error.log"
        ##CustomLog "logs/dummy-host.localhost-access.log" combined
        ##</VirtualHost>
        ##<VirtualHost *:80>
        ##ServerAdmin webmaster@dummy-host2.localhost
        ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost"
        ##ServerName dummy-host2.localhost
        ##ServerAlias www.dummy-host2.localhost
        ##ErrorLog "logs/dummy-host2.localhost-error.log"
        ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        ##</VirtualHost>

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName evamagnus.com
            <Directory "C:\xampp\htdocs\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs2\"
            ServerName mygrafica.com
            <Directory "C:\xampp\htdocs2\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Here is what it says in the control panel:

        2:17:37 PM [apache] Starting apache service...
        2:17:38 PM [apache] Status change detected: running
        2:17:39 PM [apache] Status change detected: stopped

    Thanks in advance.
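
    When Apache dies silently on start under XAMPP, the config parser's own verdict is usually the fastest clue. A hedged sketch - the paths assume a default XAMPP layout on Windows. One thing worth checking while you're there: a trailing backslash just before a closing quote, as in "C:\xampp\htdocs2\", can escape the quote and break parsing, so "C:/xampp/htdocs2" (forward slashes, no trailing slash) is the safer spelling:

        rem validate syntax and dump the parsed virtual hosts (run in cmd.exe)
        C:\xampp\apache\bin\httpd.exe -t
        C:\xampp\apache\bin\httpd.exe -S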

    Read the article

  • Moving windows-2003 hdd into virtual machine - with HDD shrink

    - by jm666
    Before you vote to close as an exact duplicate, please read the full question. I have already read:

        - Can I make a virtual machine out of a Windows XP physical machine?
        - Disk2vhd, convert my PC to Hyper-V Virtual Machine
        - Creating a Windows Virtual PC image from a Physical machine
        - physical machine to virtual machine and place into VirtualBox
        - BSOD trying to migrate Windows XP from a physical to a virtual machine
        - http://en.wikipedia.org/wiki/Physical-to-Virtual

    and all other similar questions here and on several external sites too. Unfortunately, I didn't find an answer to my problem. ;(

    I have a physical machine with a 500GB HDD, on which is installed an old Windows 2003 server with one server application. The application, like the Windows itself, is too old: there is no support for it today, I don't have the installation media, and so on. Of the HDD, only approx. 100MB is used (maybe less once I delete all unnecessary files). I want to convert the machine into VirtualBox, and VirtualBox should run on the same machine. Is it possible to do this with the following steps?

        1. I can attach another HDD (via USB or internally).
        2. Boot a live Linux from CD and mount the HDDs.
        3. Run "something" on the Linux (the Wikipedia article above has many pointers to the software) for the conversion, and store the image on the USB HDD. Unfortunately, many of the tools rely on features that exist only in Windows XP and above, with no information about Windows 2003 server - so what is a working solution for Windows 2003?
        4. Try booting the virtual image with VirtualBox; when it runs OK, remove the old installation, install Linux on the old 500GB HDD, copy the image over and run it.

    The above should work (I hope), but here are the problems: I currently have only a 320GB external USB HDD (of course, I can remove it from its enclosure and attach it as an internal HDD too). So, for the conversion I am looking for on-the-fly HDD shrinking: while moving the physical 500GB HDD into an image, I need to shrink it onto the smaller HDD - as I said above, only 100MB is used. Does something (free) exist for this? Or is the only way buying a larger 1TB HDD and using that for the conversion?

    Other questions: does anybody have real experience with converting Windows 2003 into VirtualBox? I'm looking for an answer from someone who has really done it and can point out the real pitfalls (googling I can do myself). Does a better approach to the solution exist?
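
    On the shrink-while-imaging point: one hedged approach from a live CD is to shrink the NTFS filesystem first and then image only the used blocks, so a 500GB disk with ~100MB in use fits easily on the 320GB USB drive. Device names and sizes below are assumptions to adapt, and note the caveat in the comments:

        # check how far the filesystem can shrink (Windows assumed on /dev/sda1)
        ntfsresize --info /dev/sda1

        # shrink the filesystem (pick a size comfortably above the used space)
        ntfsresize --size 20G /dev/sda1

        # save a used-blocks-only image to the USB disk mounted at /mnt/usb
        ntfsclone --save-image --output /mnt/usb/w2003.img /dev/sda1

        # later, on the VirtualBox host: restore to a raw file, then convert
        # NOTE: this captures the filesystem only - the partition table and
        # boot sector must be recreated separately inside the new virtual disk
        ntfsclone --restore-image --output w2003.raw /mnt/usb/w2003.img
        VBoxManage convertfromraw w2003.raw w2003.vdi --format VDI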

    Read the article

  • backup of KVM VM's running on Ubuntu 12.4.1 precise edition from a remote machine

    - by Dr. Death
    I am creating a library API which will take backups of all the VMs running on a KVM hypervisor. My VMs can be of any type. I am taking this backup from a remote machine and need to put the backup on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are normal VMs running on KVM. I researched and found an excellent Perl script for taking the backup - http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/ - but since I am developing this library in C++ I cannot use it; however, it has given me a good understanding of how this will work.

    One thing I could not sort out: if my VMs are not created using virt-manager, but with some other tool, then the virsh list command does not show them in the list of running VMs, even though they are running perfectly on my KVM server. Is there a way to list these VMs anyhow?

    Secondly, when I am taking a backup from the remote machine, my SSH session ends as soon as my libvirt command finishes, and for every command I need to ssh again. Is there a way to avoid ssh-ing each and every time? I have already set up an RSA key for SSH, but once my command finishes, control moves back to the remote machine, which then tries to find the source VM location on its own local drives, which in turn fails. This is the main problem I am facing.

    Also, for the LVM-based VMs I am able to take a live backup, but the non-LVM-based machines get suspended, so I am not able to take a live backup of those. Since my library will work on the remote machine only, I might not know the VMs' configuration on the KVM server, so I need to make this consistent for all the VMs. Please share anything related to this issue so that I can take live backups of the non-LVM VMs as well. I'll post updates on my progress and any research findings for all of you as I go. Thanks in advance for your suggestions.
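
    On the "ssh for every command" point: libvirt can be driven remotely over a single SSH-tunnelled connection, so the tool never has to open an interactive shell on the host at all. A hedged sketch - host and user are placeholders. It also touches the first problem: virsh only lists domains that are defined in libvirt, which is likely why hand-started QEMU guests don't appear:

        # one remote connection URI, any number of commands - no interactive ssh session
        virsh -c qemu+ssh://root@kvmhost/system list --all
        virsh -c qemu+ssh://root@kvmhost/system dumpxml virtualmachine1 > virtualmachine1.xml

    The same qemu+ssh:// URI can be passed to virConnectOpen() in the C/C++ libvirt API, which may suit the library better than shelling out to virsh.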

    Read the article

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple Subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the use of database authentication for a simple htpasswd file, though). Since then, I have also started using WebSVN. My issue is that not all users on the system should be able to access all the repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use Subversion's path-based access system, so I looked to create this using the AuthzSVNAccessFile directive. When I do this, though, I keep getting "403 Forbidden" messages. My files look like the following. I have default policy settings in a file:

        <Location /svn/>
            DAV svn
            SVNParentPath /var/lib/svn/repository
            Order deny,allow
            Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
            Include /var/lib/svn/conf/default_auth.conf
            AuthName "Repository for sysadmin"
            require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added it today because I was getting error messages as a result of adding the AuthzSVNAccessFile directive. With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restriction, such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start, and 'joebloggs' then has all the correct access. When I try to go repository-specific, though, with something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    I get a permission-denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to point to the issue being with path-based access control in Subversion not working properly. My assumption is that I have not configured it correctly.
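
    For reference, a minimal path-based rule set in the shape mod_authz_svn expects. The section name before the colon must match the repository directory name under SVNParentPath exactly ("sysadmin" is taken from the question); the log lines above show Apache checking the "websvn:" and "svn:" realms too, which suggests every realm that appears in the log needs a matching rule:

        [groups]
        sysadmins = joebloggs, mickmurphy

        [/]
        joebloggs = rw

        [sysadmin:/]
        @sysadmins = rw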

    Read the article

  • Downloading Python 2.5.4 (from official website) in order to install it

    - by brilliant
    I was quite hesitant about whether I should post this question here on "StackOverflow" or on "SuperUser", but finally decided to post it here, as Python is more a programming language than a piece of software. I've recently been using Python 2.5.4, which is installed on my computer, but at the moment I am not at home (and won't be for about two weeks), so I need to install the same version of Python on another computer. This computer has Windows XP installed - just like the one I have at home. The reason I need Python 2.5.4 is that I am using "Google App Engine", and I was told that it only supports Python 2.5. However, when I went to the official Python page for the download, I discovered that certain things have changed, and I don't quite remember where exactly on that site I downloaded Python 2.5.4 for my computer at home. I found this page: http://www.python.org/download/releases/2.5.4/ - here is how it looks (if you can't see it here, please check it out at this address: http://brad.cwahi.net/some_pictures/python_page.jpg ).

    A few things here are not clear to me. It says:

        For x86 processors: python-2.5.4.msi
        For Win64-Itanium users: python-2.5.4.ia64.msi
        For Win64-AMD64 users: python-2.5.4.amd64.msi

    First of all, I don't know what processor I am using - whether mine is "x86" or not; and also, I don't know whether I am a "Win64-Itanium" or a "Win64-AMD64" user. Are Itanium and AMD64 also processors? Later it says:

        Windows XP and later already have MSI; many older machines will already have MSI installed.

    I guess that is my case, but then I am totally puzzled as to which link I should click, as it now seems that I don't need those three previous links (since MSI is already installed on Windows XP), yet there is no fourth link provided for those who use "Windows XP" or older machines. Of course, there are these words after that:

        Windows users may also be interested in Mark Hammond's win32all package, available from Sourceforge.

    but it seems to me that this is something additional rather than the main file. So, my question is simple: where on the official Python website can I download Python 2.5.4 - precisely, which link should I click?

    Read the article

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps:

        - Configured /etc/fstab with ro,noauto (read-only, no auto-mount)
        - Fully encrypted each partition with a separate encryption key (committed to memory)

    Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed?

    Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues.

    EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post.

    EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list:

        x86 virtualization is about basically placing another nearly full
        kernel, full of new bugs, on top of a nasty x86 architecture which
        barely has correct page protection. Then running your operating
        system on the other side of this brand new pile of shit. You are
        absolutely deluded, if not stupid, if you think that a worldwide
        collection of software engineers who can't write operating systems
        or applications without security holes, can then turn around and
        suddenly write virtualization layers without security holes.

        -http://kerneltrap.org/OpenBSD/Virtualization_Security

    By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

    Read the article

  • Parsing xml files locally from assets folder using XmlPullParser

    - by Randolphg
    I'm trying to parse a local XML file that I place in my assets folder. I've been trying to do this for almost a week now. Here is my test XML file: Test1 Test2 Test3 Test4 Test5. I keep getting the same error:

        W/System.err(22458): org.xmlpull.v1.XmlPullParserException: unexpected type (position:TEXT

    Code:

        public void xmlParser() throws XmlPullParserException, IOException,
                ParserConfigurationException, SAXException {
            Log.d("tag", "xmlParsing....");
            Arithmetic arthm = new Arithmetic();
            XmlPullParserFactory xmlPF = XmlPullParserFactory.newInstance();
            xmlPF.setValidating(false);
            XmlPullParser xml = xmlPF.newPullParser();
            InputStream raw = getApplication().getAssets().open("menu.xml");
            xml.setInput(raw, null);
            xml.nextTag();
            Log.d("tag", "start parsing....");
            String elementText = null;
            String elemName = null;
            int nofTags = 0;
            while (xml.getEventType() != XmlPullParser.END_DOCUMENT) {
                Log.d("tag", "while(xml.next)...");
                switch (xml.getEventType()) {
                case XmlPullParser.START_DOCUMENT:
                    Log.d("tag", "while (xml.getEventType() != XmlPullParser.END_DOCUMENT)");
                    break;
                case XmlPullParser.START_TAG:
                    Log.d("tag", " case XmlPullParser.START_TAG");
                    elementText = xml.getName();
                    Log.d("tag", "elementText = " + elementText);
                    if (xml.getEventType() != XmlPullParser.END_TAG) {
                        xml.nextTag();
                    }
                    break;
                case XmlPullParser.TEXT:
                    Log.d("tag", "case TEXT");
                    if (elementText.equals("menu") && xml.isWhitespace()) {
                        Log.d("tag", "<" + elementText + ">");
                        arthm.menu_name = xml.getText();
                        Log.d("tag", "value " + xml.getText() + " added");
                    } else if (elementText.equals("item")) {
                        arthm.description = xml.getText();
                        Log.d("tag", "value " + xml.getText() + " added");
                    } else if (elementText.equals("SUBCATEGORY NAME")) {
                        arthm.subcategoryDesc.add(xml.getText());
                        Log.d("tag", "value " + xml.getText() + " added");
                    } else if (elementText.equals("SUBCATEGORY DESC")) {
                        arthm.subcategoryName.add(xml.getText());
                        Log.d("tag", "value " + xml.getText() + " added");
                    }
                    break;
                case XmlPullParser.END_TAG:
                    Log.d("tag", "case END_TAG");
                    nofTags += 1;
                    String tags = Integer.toString(nofTags);
                    Log.d("tags", elementText + " number of tags" + tags);
                    if (xml.nextTag() != XmlPullParser.START_TAG) {
                        xml.next();
                    }
                    break;
                case XmlPullParser.END_DOCUMENT:
                    Log.d("tag", "case END_DOCUMENT");
                    break;
                default:
                    break;
                }
            }
            Log.d("tag", "Success!");
        }

    Thanks in advance.

    Read the article

  • Installing ImageMagick on Mac OSX 10.6

    - by Russell C.
    I just got a new Mac and am trying to set up a local Perl development environment. I'm using MAMP, but I also need the ImageMagick Perl module installed in order to do some of the photo processing our scripts require. I tried installing ImageMagick manually but ran into some issues, and after reading online, a lot of people reported problems going this route. The general consensus was to install it using MacPorts instead, so I went ahead and installed MacPorts. Unfortunately, MacPorts can't seem to install it successfully either. Here is the command I'm using to try to install ImageMagick:

        sudo port install p5-perlmagick

    And here are the errors reported during the install:

        --->  Computing dependencies for p5-perlmagick
        --->  Building p5-perlmagick
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-perlmagick/work/PerlMagick-6.32" && /usr/bin/make -j2 all " returned error 2
        Command output: Magick.xs:10918: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10918: error: request for member 'severity' in something not a structure or union
        Magick.xs:10918: error: 'ErrorException' undeclared (first use in this function)
        Magick.xs:10919: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10920: warning: implicit declaration of function 'GetImageException'
        Magick.xs:10922: error: 'struct PackageInfo' has no member named 'image_info'
        Magick.xs:10922: error: 'struct Methods' has no member named 'adjoin'
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: 'UndefinedException' undeclared (first use in this function)
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: passing argument 2 of 'Perl_sv_catpv' from incompatible pointer type
        Magick.xs:10929: warning: unused variable 'message'
        Magick.xs:10856: warning: unused variable 'filename'
        Magick.c:10784: warning: unused variable 'ref'
        Magick.c:10777: warning: unused variable 'ix'
        Magick.xs: In function 'boot_Image__Magick':
        Magick.xs:2122: warning: implicit declaration of function 'InitializeMagick'
        Magick.xs:2123: warning: implicit declaration of function 'SetWarningHandler'
        Magick.xs:2124: warning: implicit declaration of function 'SetErrorHandler'
        make: *** [Magick.o] Error 1
        Error: Status 1 encountered during processing.
        Before reporting a bug, first run the command again with the -d flag to get complete output.

    I have no idea what the problem might be or how to go about successfully installing ImageMagick. I'd appreciate any help and advice from someone out there who has done this successfully. Thanks in advance!
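
    A hedged standard-recovery sequence for failed MacPorts builds, worth trying before digging into the Magick.xs errors themselves (these are stock port(1) subcommands; the 'struct Methods' errors often indicate the port was building against a stale or mismatched ImageMagick, which upgrading may resolve):

        sudo port selfupdate               # refresh MacPorts and the ports tree
        sudo port clean p5-perlmagick      # discard the half-built work directory
        sudo port upgrade outdated         # rebuild stale dependencies, e.g. ImageMagick itself
        sudo port install p5-perlmagick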

    Read the article

  • configure squid3 to set up a web proxy in ubuntu12.04

    - by Gnijuohz
    I am in a LAN and have to use a proxy given to access the web in a very limited way. I can't even use google, github.com or SE sites. However I can use ssh to log into a server, which I have root access so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy so I can visit sites through it. I tested it using ssh -vT [email protected] which gave a proper response. And In my computer I can't do this. Also I tried downloading something from the gun.org using wget, which can't be done in my computer too. And it succeeded on that server. I don't know if that's enough to say that this server have full access to the Internet. But I assumed so and I installed squid3 on it. After trying some while, I failed to get it working. I got this after I run squid3 -k parse 2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0) 2012/07/06 21:45:18| Processing: acl manager proto cache_object 2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1 2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network 2012/07/06 21:45:18| Processing: acl SSL_ports port 443 2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http 2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp 2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https 2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher 2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais 2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports 2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt 2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http 2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker 2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http 2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT 2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport 2012/07/06 21:45:18| Starting Authentication on port [::]:3128 2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Processing: cache_mem 1000 MB 2012/07/06 21:45:18| Processing: cache_swap_low 90 2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3 2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080 2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440 2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|?) 0 0% 0 2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880 2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320 2012/07/06 21:45:18| Processing: ipcache_high 95 2012/07/06 21:45:18| Processing: http_access allow all I deleted some allow and deny rules and added http_access allow all so that all the request would be allowed. After configuring my computer, I got this error: Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. And the log in the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how to configure the squid in the server so that I use it as a proxy to surf the Internet? My computer and the server both run Ubuntu11.04. 
Thanks for any help~
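
    One detail in that parse output that may well be the culprit: http_port 3128 transparent puts squid into interception mode (note the "interception enabled" lines), which is not what a browser configured to use an explicit proxy expects. A minimal forward-proxy sketch, assuming the stock squid3 package on Ubuntu 11.04; the port and LAN range are taken from the question, the hostname is a placeholder:

      # rewrite /etc/squid3/squid.conf as a plain forward proxy (sketch)
      sudo tee /etc/squid3/squid.conf >/dev/null <<'EOF'
      http_port 3128
      # no "transparent" keyword above, so interception stays off
      acl localnet src 10.1.0.0/16
      http_access allow localnet
      http_access deny all
      EOF
      sudo squid3 -k parse && sudo service squid3 restart
      # quick test from the LAN client, sending one request through the proxy:
      http_proxy=http://your.server.example:3128 wget -O- http://www.gnu.org/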

    Read the article

  • Detach the current session and attach to another session with one script, can I?

    - by Jimm Chen
    After reading the rather vague official documentation for GNU screen ( http://www.gnu.org/software/screen/manual/screen.html ) and asking quite a few questions on this site, I still cannot figure out how to accomplish the following task with a shell script. It takes a few words to describe. Assume I'm using PuTTY to telnet into my Linux server.

    STEP 1: Launch two telnet connections. From PuTTY window 1 (PTWIN1), telnet into a Linux Bash shell, execute screen -RR to launch a screen session, and get the session name 21385.pts-4.linux-ic37. From PuTTY window 2 (PTWIN2), do the same as in PTWIN1, but this time I get the session name 22041.pts-9.linux-ic37. Now we have two screen sessions running simultaneously. We can check this:

      $ screen -ls
      There are screens on:
              22041.pts-9.linux-ic37  (Attached)
              21385.pts-4.linux-ic37  (Attached)
      2 Sockets in /var/run/uscreens/S-chj2.

    STEP 2: Assume that for some reason PTWIN1's TCP connection is lost abnormally (but the server doesn't know that), urgent work is pending in session 21385, and I want to quickly regain control of it. Fortunately, we know the 21385 session is still there, so I want PTWIN2 to attach to session 21385. Because I hate having to remember the esoteric screen options all the time, I decided to write a script called sttach. I hope that sttach 21385.pts-4.linux-ic37 can attach me to session 21385 (in PTWIN2). Now, let's say sttach works well and I take control of 21385 in PTWIN2.

    STEP 3: Some minutes later, I want to go back to work in session 22041. Here, please allow PTWIN2 to remain associated with session 21385. What I would like to do is launch another PuTTY window (PTWIN3), telnet into the server, and execute sttach 22041.pts-9.linux-ic37, in the hope that I can resume session 22041 in PTWIN3.

    You can see the benefit of sttach: as long as I know the target session name, I can call it to have my PuTTY window switch to that session, regardless of whether the target session is "(Attached)" or "(Detached)", and regardless of whether the calling context is inside a screen session or not.

    Now the question: how do I write the (Bash) script sttach? I mean, which screen options should sttach run to accomplish this goal? Waiting for your kind answer. Thank you.

    My previous questions regarding GNU screen:
      GNU screen, how to get current sessionname programmatically
      Is it possible to change GNU screen session name after created?
      How do I know I'm running inside a linux "screen" or not?

    My env: openSUSE Linux 11.3, GNU screen 4.00.03 (FAU) 23-Oct-06
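
    For what it's worth, screen's own -d -r flag pair ("reattach a session and if necessary detach it first", per the manual) covers both the (Attached) and (Detached) cases, so sttach can be little more than a wrapper around it. A sketch, assuming GNU screen 4.x; the inside-a-screen-session case is the awkward one, because attaching from there would nest sessions, so this version simply refuses and asks you to detach first:

      #!/bin/bash
      # sttach -- attach this terminal to the named GNU screen session.
      # A sketch, not battle-tested: "-d -r" detaches the session from
      # wherever it is currently attached (if anywhere), then attaches it here.
      sess="$1"
      if [ -z "$sess" ]; then
          echo "usage: sttach <sessionname>" >&2
          exit 1
      fi
      if [ -n "$STY" ]; then
          # $STY is set when we are already inside a screen session;
          # attaching from here would nest sessions, so bail out instead.
          echo "sttach: already inside screen session $STY; detach first (C-a d)" >&2
          exit 1
      fi
      exec screen -d -r "$sess"

    Note that in STEP 2 above, PTWIN2 is still inside session 22041 when it calls sttach, so there you would press C-a d first; switching the same terminal between sessions transparently is exactly the part plain screen makes hard.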

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that explain how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong.

      server {
        listen *:80;
        server_name <%= @fqdn %>;
        #root /nowhere;
        #rewrite ^ https://$server_name$request_uri? permanent;
        #rewrite ^ https://$server_name$request_uri permanent;
        #return 301 https://$server_name$request_uri;
        #return 301 http://$server_name$request_uri;
        #return 301 http://192.168.33.10$request_uri;
        return 301 http://$host$request_uri;
      }

      server {
        listen *:443 ssl default_server;
        server_name <%= @fqdn %>;
        server_tokens off;
        root <%= @git_home %>/gitlab/public;
        ssl on;
        ssl_certificate <%= @gitlab_ssl_cert %>;
        ssl_certificate_key <%= @gitlab_ssl_key %>;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers AES:HIGH:!ADH:!MDF;
        ssl_prefer_server_ciphers on;

        location / {
          # serve static files from defined root folder;.
          # @gitlab is a named location for the upstream fallback, see below
          try_files $uri $uri/index.html $uri.html @gitlab;
        }

        # if a file, which is not found in the root folder is requested,
        # then the proxy pass the request to the upsteam (gitlab puma)
        location @gitlab {
          proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
          proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
          proxy_redirect off;
        etc....

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https?

      tailf /var/log/nginx/access.log
      192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
      192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

      tailf /var/log/nginx/gitlab_error.log
      2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:
      http://wiki.nginx.org/Pitfalls
      How to make nginx redirect
      How to force or redirect to SSL in nginx?
      nginx ssl redirect
      Nginx & Https Redirection
      https://www.tinywp.in/301-redirect-wordpress/
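
    Two things stand out here, for what it's worth. First, the only uncommented redirect in the quoted port-80 block returns 301 http://$host$request_uri, that is, it redirects to http rather than https, so even when that block answers, nothing is sent to the SSL side. Second, a stock 'Welcome to nginx' page on port 80 usually means the distribution's default vhost is still enabled and winning the match. A sketch of the usual fix, assuming the Debian/Ubuntu sites-enabled layout (file names are illustrative):

      # remove the stock default site so it cannot shadow the redirect vhost
      sudo rm -f /etc/nginx/sites-enabled/default
      # write a dedicated port-80 vhost whose only job is the 301
      sudo tee /etc/nginx/conf.d/redirect.conf >/dev/null <<'EOF'
      server {
          listen 80 default_server;
          server_name _;
          return 301 https://$host$request_uri;
      }
      EOF
      sudo nginx -t && sudo service nginx restart
      # verify: the Location header should now point at https
      curl -sI http://192.168.33.10/ | grep -i '^Location'

    The gitlab.socket error in the second log looks unrelated to the redirect itself: the upstream Puma socket simply isn't there, meaning the GitLab application isn't running yet.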

    Read the article

  • Restart MySQL while keeping the data

    - by sitonico
    I'm quite new to MySQL, so let me know if I'm missing something. I took some holidays, and when I got back to work and tried to log in to phpMyAdmin I got ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I had never had this problem before, so I browsed around for a solution. I tried some things, and I'm afraid I touched too much. I couldn't solve the problem, and then I realized I had some pending system updates, which I thought might help with MySQL. I also remembered that when I first ran those updates, they had stopped because I ran out of disk space, so I restarted them. Then, while the system was configuring mysql, it made no progress; I waited a long time, then just stopped it and restarted the computer.

    After that, I tried to uninstall MySQL with sudo apt-get remove mysql-server-5.1 and install it again, but it didn't work. Now I have two questions: What do you think is happening, and should I remove MySQL completely? What should I do? I'm afraid of losing my databases; is there any way to recover the data? Thank you very much in advance.

    -----------EDIT-------

    These are the messages:

      alfonso@alfonso-laptop:/$ tail -F /var/log/syslog | grep
      Feb 15 15:08:01 alfonso-laptop init: mysql post-start process (15192) terminated with status
      Feb 15 15:08:01 alfonso-laptop init: mysql main process (15263) terminated with status
      Feb 15 15:08:01 alfonso-laptop init: mysql main process ended,
      Feb 15 15:08:31 alfonso-laptop init: mysql post-start process (15264) terminated with status
      Feb 15 15:08:31 alfonso-laptop init: mysql main process (15358) terminated with status
      Feb 15 15:08:31 alfonso-laptop init: mysql main process ended,
      Feb 15 15:09:01 alfonso-laptop init: mysql post-start process (15359) terminated with status
      Feb 15 15:09:01 alfonso-laptop init: mysql main process (15447) terminated with status
      Feb 15 15:09:01 alfonso-laptop init: mysql main process ended,
      Feb 15 15:09:32 alfonso-laptop init: mysql post-start process (15448) terminated with status 1

    This is the content of error.log-old:

      110128 13:17:20 [Note] /usr/sbin/mysqld: Normal shutdown
      110128 13:17:20 [Note] Event Scheduler: Purging the queue. 0 events
      110128 13:17:20 InnoDB: Starting shutdown...
      110128 13:17:22 InnoDB: Shutdown completed; log sequence number 0 590872
      110128 13:17:22 [Note] /usr/sbin/mysqld: Shutdown complete
      110214 2:08:18 [Note] Plugin 'FEDERATED' is disabled.
      110214 2:08:19 InnoDB: Started; log sequence number 0 590872
      110214 2:08:19 [Note] Event Scheduler: Loaded 0 events
      110214 2:08:19 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.1.41-3ubuntu12.8' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)

    Some links to similar problems:
      https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/573318
      http://www.linuxquestions.org/questions/linux-newbie-8/lamp-install-on-lucid-mysqld-sock-missing-mysql-terminating-status%3D1-853152/

    It seems it's a permissions problem, but I don't know which permissions I should change...

    SOLVED -- mysql error 2002 "cannot connect to socket"
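
    Before removing anything else, it's worth securing the data: the database files live under /var/lib/mysql independently of the packages, and the last shutdown recorded in error.log-old was clean, which is a good sign. A cautious sketch, assuming the stock Ubuntu paths and that the root cause was the interrupted, out-of-disk package configuration:

      # back up the raw data files first; they survive package removal/reinstall
      sudo cp -a /var/lib/mysql /var/lib/mysql.backup
      # free some space, then let dpkg finish what the out-of-space run interrupted
      sudo apt-get clean
      sudo dpkg --configure -a
      sudo apt-get install -f
      # if mysqld still won't start, a common culprit is datadir ownership
      sudo chown -R mysql:mysql /var/lib/mysql
      sudo service mysql restart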

    Read the article

  • Server freezes at XX:25

    - by Karevan
    We've ordered a 50 euro/month server from hetzner.de; it runs Debian. The problem is that the server freezes at a random time of day and nothing appears in the log. Only a hardware reboot helps. Here is the part of the log file from around a freeze:

      Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] New connection from 95.211.120.220
      Aug 17 22:38:26 Debian-60-squeeze-64-minimal pure-ftpd: ([email protected]) [INFO] Logout.
      Aug 17 22:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22828]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
      Aug 17 23:09:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22835]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
      Aug 17 23:17:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22842]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
      Aug 17 23:39:01 Debian-60-squeeze-64-minimal /USR/SBIN/CRON[22847]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type$
      Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: imklog 4.6.4, log source = /proc/kmsg started.
      Aug 18 09:47:47 Debian-60-squeeze-64-minimal rsyslogd: [origin software="rsyslogd" swVersion="4.6.4" x-pid="1229" x-info="http://www.rsyslog.com"] (re)start
      Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpuset
      Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Initializing cgroup subsys cpu
      Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Linux version 2.6.32-5-amd64 (Debian 2.6.32-45) ([email protected]) (gcc version 4.3.5 (Debian 4.3.5$
      Aug 18 09:47:47 Debian-60-squeeze-64-minimal kernel: [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro

    As you can see, only the reboot shows up; the log is silent between the last cron entry and the next boot. Sadly, there's no way to look at the server's console right when it freezes, and the datacenter's support staff don't really want to help with that.

    The server was installed on 30 July. The times and dates of the freezes:

      6 August, 0:25
      18 August, 2:27
      21 August, 1:25
      26 August, 23:26

    We decided that freezing around ??:25 didn't look like a hardware fault, so we decided to reinstall the OS. On 31 August our admin backed up all files, reinstalled Debian, and restored the backup. But then, on 7 September, the server went down again, at 5:05. We thought it was related to "Anyone else experiencing high rates of Linux server crashes during a leap second day?" and turned ntp off. But then the server went down twice more: 21 September at 17:29 and 24 September at 20:27.

    I called all the Linux admins I knew for help with solving this, and they said the OS configuration is fine and it could only be hardware. But they don't know why it always freezes around XX:25-30. Maybe some of you know something related to that?
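
    Since nothing reaches the on-disk log before the hang, one option is to ship kernel messages off the machine as they happen, so a panic that never gets flushed to disk still lands somewhere. A sketch using the mainline netconsole module; the IP addresses, MAC and interface below are placeholders, and the nc syntax is the OpenBSD netcat that Debian ships. Given the suspicious :25-ish timing, grepping the cron tables is also a cheap check:

      # on the freezing server: stream kernel messages over UDP to another box
      # (192.0.2.10 = this server, 192.0.2.20 = log receiver; both placeholders)
      sudo modprobe netconsole netconsole=6665@192.0.2.10/eth0,6666@192.0.2.20/00:11:22:33:44:55
      # on the receiving box: capture whatever arrives around the next freeze
      nc -l -u 6666 | tee netconsole.log
      # back on the server: list anything cron fires at minutes 25-29
      grep -rE '^2[5-9][[:space:]]' /etc/crontab /etc/cron.d/
      sudo crontab -l | grep -E '^2[5-9][[:space:]]'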

    Read the article

< Previous Page | 424 425 426 427 428 429 430 431 432 433 434 435  | Next Page >