Search Results

Search found 14079 results on 564 pages for 'folding at home'.

  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log: upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "arch" Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project): fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True nginx config: server { listen 80; server_name arch; access_log /var/www/test/log/access.log; error_log /var/www/test/log/error.log; location / { root /var/www/test/public; index index.html index.htm default.aspx Default.aspx; fastcgi_index Default.aspx; fastcgi_pass 127.0.0.1:9000; fastcgi_param PATH_INFO ""; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } Using xsp4 works fine, I can browse the site. I've enabled FastCGI logging, this is the output: [2012-04-15 23:51:18Z] Debug Accepting an incoming connection. [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection. [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8) [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386) [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0) [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = ) [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B) [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8) [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.
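    One detail worth checking, purely as a hedged suggestion: the FastCGI log above shows SCRIPT_FILENAME arriving as /var/www/test/public/Home (no extension), and the mono server then dies on a null string, so it may simply not be receiving the full standard parameter set. A minimal sketch of the location block that pulls in nginx's stock fastcgi_params file before the Mono-specific values (untested here):

        location / {
            root  /var/www/test/public;
            index index.html index.htm default.aspx Default.aspx;
            # pull in the standard parameter set first, then the Mono-specific bits
            include       fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO       "";
            fastcgi_index Default.aspx;
            fastcgi_pass  127.0.0.1:9000;
        }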

  • Can I set up two computers with the same monitors/keyboard/mouse in a modular way?

    - by CodeJunkie
    I have a desktop computer (running Windows 7) and a laptop (running OSX Mountain Lion, and maybe Ubuntu 12 eventually). When the laptop is at home, I want both the desktop and the laptop to use the same (2+) monitors, the keyboard, and the mouse (or mice, if I add a track pad). I know about KVM switches, but I want something more complicated. I like to use Synergy to use both computers with one keyboard and mouse at the same time. Synergy requires that the keyboard and mouse be connected to one computer (the server), which shares them with other computers (clients) over wifi. The issue is that when one computer isn't logged in, Synergy doesn't work on it. Sometimes I want my laptop to be the server (physically connected to the keyboard and mouse), and sometimes I want my desktop to be the server. This means that I need the keyboard/mouse/other USB devices to be able to switch computers without me playing musical plugs. To complicate things further, I don't always want the same desktop set up in terms of monitors. Sometimes I want the desktop to have both monitors. Other times, I want the laptop to control both monitors. Sometimes I want the desktop to control one monitor, and the laptop to control the other. In any case, the keyboard and mouse need to be able to be physically connected to either computer without lots of fussing with plugs. This breaks down to at least this set of possible combinations: Desktop controls both monitors, and has a physical connection to keyboard and mouse. Laptop controls both monitors, and has a physical connection to keyboard and mouse. Desktop and laptop each control a monitor, but the desktop has a physical connection to the keyboard and mouse (which it shares with the laptop via wifi). Desktop and laptop each control a monitor, but the laptop has a physical connection to the keyboard and mouse (which it shares with the desktop via wifi). Some USB devices connected via a USB hub need to be able to switch physical connection between computers, ideally without the keyboard and mouse switching computer connection. There may be other combinations, but these are the main ones at the moment. Basically, I need a KVM switch which allows me to switch individual monitors/keyboard/mouse/USB hub between computers independently of each other, or a better solution. How can I set two computers up with the same monitors/mice/keyboard/USB hub without having to switch everything to one computer or the other all at the same time?

  • Allow anonymous upload for Vsftpd?

    - by user15318
    I need a basic FTP server on Linux (CentOS 5.5) without any security measure, since the server and the clients are located on a test LAN, not connected to the rest of the network, which itself uses non-routable IP's behind a NAT firewall with no incoming access to FTP. Some people recommend Vsftpd over PureFTPd or ProFTPd. No matter what I try, I can't get it to allow an anonymous user (ie. logging as "ftp" or "anonymous" and typing any string as password) to upload a file: # yum install vsftpd # mkdir /var/ftp/pub/upload # cat vsftpd.conf listen=YES anonymous_enable=YES local_enable=YES write_enable=YES xferlog_file=YES #anonymous users are restricted (chrooted) to anon_root #directory was created by root, hence owned by root.root anon_root=/var/ftp/pub/incoming anon_upload_enable=YES anon_mkdir_write_enable=YES #chroot_local_user=NO #chroot_list_enable=YES #chroot_list_file=/etc/vsftpd.chroot_list chown_uploads=YES When I log on from a client, here's what I get: 500 OOPS: cannot change directory:/var/ftp/pub/incoming I also tried "# chmod 777 /var/ftp/incoming/", but get the same error. Does someone know how to configure Vsftpd with minimum security? Thank you. Edit: SELinux is disabled and here are the file permissions: # cat /etc/sysconfig/selinux SELINUX=disabled SELINUXTYPE=targeted SETLOCALDEFS=0 # sestatus SELinux status: disabled # getenforce Disabled # grep ftp /etc/passwd ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin # ll /var/ drwxr-xr-x 4 root root 4096 Mar 14 10:53 ftp # ll /var/ftp/ drwxrwxrwx 2 ftp ftp 4096 Mar 14 10:53 incoming drwxr-xr-x 3 ftp ftp 4096 Mar 14 11:29 pub Edit: latest vsftpd.conf: listen=YES local_enable=YES write_enable=YES xferlog_file=YES #anonymous users are restricted (chrooted) to anon_root anonymous_enable=YES anon_root=/var/ftp/pub/incoming anon_upload_enable=YES anon_mkdir_write_enable=YES #500 OOPS: bad bool value in config file for: chown_uploads chown_uploads=YES chown_username=ftp Edit: with trailing space removed from "chown_uploads", err 500 is solved, but anonymous still doesn't work: client> ./ftp server Connected to server. 220 (vsFTPd 2.0.5) Name (server:root): ftp 331 Please specify the password. Password: 500 OOPS: cannot change directory:/var/ftp/pub/incoming Login failed. ftp> bye With user "ftp" listed in /etc/passwd with home directory set to "/var/ftp" and access rights to /var/ftp set to "drwxr-xr-x" and /var/ftp/incoming to "drwxrwxrwx"...could it be due to PAM maybe? I don't find any FTP log file in /var/log to investigate. Edit: Here's a working configuration to let ftp/anonymous connect and upload files to /var/ftp: listen=YES anonymous_enable=YES write_enable=YES anon_upload_enable=YES anon_mkdir_write_enable=YES
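    A hedged observation on the "500 OOPS: cannot change directory" line: it names the path configured as anon_root, and the listings above show /var/ftp/incoming and /var/ftp/pub, but not /var/ftp/pub/incoming. A quick check along these lines may settle it (paths are taken from the question; the permission values are assumptions for an isolated test LAN):

        # does the directory named in anon_root actually exist?
        grep '^anon_root' vsftpd.conf
        ls -ld /var/ftp /var/ftp/pub /var/ftp/pub/incoming /var/ftp/incoming

        # if it is missing, either create it or point anon_root at the directory
        # that exists; keep the chroot parents non-writable, open only the upload dir
        mkdir -p /var/ftp/pub/incoming
        chown root:ftp /var/ftp /var/ftp/pub
        chown ftp:ftp /var/ftp/pub/incoming
        chmod 775 /var/ftp/pub/incoming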

  • Why can't I play DVDs on Windows 8 Pro with Media Center Pack?

    - by ligos
    I have a laptop with Windows 8 Pro with Media Center (64 bit), but neither Media Player nor Media Center can play DVDs. Have I done something wrong? Did the Feature Pack not install correctly? Should this work? Can I somehow uninstall and reinstall the Media Pack? Details So I upgraded my Windows 7 Home Premium laptop to Windows 8 Pro based on Microsoft's low pricing. I also grabbed my free upgrade to Media Pack and followed the instructions on that page to add my feature pack. Alas! I still cannot play DVDs via either Media Center or Player. Various Context Thinking I might need to re-install the pack, I found that I could no longer add any more feature packs (searching for add features settings only shows Turn Windows Features On and Off). Media Center and Media Player are both enabled in Windows Features. I cannot see any way to remove or downgrade from the Media Pack, nor to add any more feature packs. I installed a codec pack (32bit) from Shark007, which has not allowed me to play DVDs (although it did allow me to play various other media files). Media Player can play DTV recorded on another Windows 7 box, but Media Center cannot. VLC plays DVDs OK, but I'd prefer to figure out what the root cause of this problem is. There were no errors or other indications that the Media Pack failed to install; the installation itself was quite smooth, although I have not checked my event log in detail. Before upgrading from Windows 7, I could play DVDs OK. Screenshots System Information, showing I have Windows 8 Pro with Media Center When playing a DVD, Media Player gives an error: The selected file has an extension that is not recognised by Windows... When you click Yes, it fails saying: Windows Media Player cannot find the file... Media Center says: The file type is not recognised and cannot be played, along with some codec related stuff. I can browse the files OK via My Computer on any video DVD.

  • Windows explorer locks files

    - by John Prince
    I'm using Office 2010 & Windows 7 Home Premium 64-bit. My problem starts when I attempt to save e-mail messages to my PC that I have received via Outlook (my ISP is Comcast). I'm using the default .msg file extension option when I attempt to save these e-mails. The resultant files are locked and do not show the normal "envelope" icon. Instead, it’s a “blank page” icon with the upper right corner folded in. These files refuse to open either by double clicking on them or right clicking and trying to open them with Outlook. And when I return to Outlook, I discover that Outlook is now hung up and I have to close it via the Task Manager. To make matters worse, I’ve also discovered that every e-mail message that I've saved on my PC over the years has also somehow become locked and their original "envelope" icon has been replaced with the "blank page" icon. I found and installed an application called LockHunter. As a result, when I right click on a saved and locked e-mail message, I’m given an option to find out what's locking it. Each time I'm told that the culprit is Windows explorer.exe. When I unlock the file the normal envelope icon is sometimes displayed (but not always) but at least the file can then be opened. But the file is still “squirrely” as it can’t be moved or saved to a folder until it’s unlocked again. On this second attempt, LockHunter says it’s now locked by Outlook.exe. By the way, I don't have this issue when I save Word, Excel & PowerPoint files; only with Outlook. I've exhausted every remedy that I can think of including: making sure that the file and folder options are checked to always show icons and not thumbnails; running the Windows 7 & Office 2010 repair options which find nothing amiss; running a complete system scan with Windows Malicious Software Removal Tool with negative results; verifying that Outlook is the default for opening e-mails; updating all of my applications via Secunia Personal Software Inspector; uninstalling every application that I felt was unnecessary; doing a registry cleanup via CCleaner; having Windows Security Essentials always on (it did find one Java Trojan recently which was quarantined and then deleted); uninstalling a bunch of non-Microsoft shell extensions; and deactivating all of the Outlook Add-ins and then re-activating each one. None of this solves the problem. I’d welcome any advice on how to resolve this.

  • How to transition to Comcast with static IP address

    - by steveha
    I have my own email server in my house, on a static IP address. I have had business DSL for over a decade, but I also now have Comcast business Internet. I want to transition from the DSL to the Comcast, and I have some questions. I have a domain name, my own mail server, and a firewall (a PC with two network interfaces, running Devil-Linux). I need to make sure I understand how to set up the Comcast cable box, and how to set up my firewall. First, do I need to change any settings in the cable box? Currently I have only used the cable box by plugging in a laptop, with the laptop doing DHCP. I think I can leave the box alone but I would like to make sure. Second, I'm not sure I understand the instructions Comcast gave me for setting up the firewall. My DSL provider gave me the following information: static IP address, net mask, gateway, and two DNS servers. Comcast gave me: static IP address, routable static IP address, net mask, and two DNS servers, and told me to put the "static IP address" as the "gateway" on the firewall. Is this just Comcast-speak here? Does "routable static IP address" mean the same thing as "static IP address" in my DSL setup, the end-point address that I should publish in the DNS MX records for my email server? Or should I publish the "static IP address", and Comcast will then route all its traffic over the cable box? My plan is: first, I'm going to configure another firewall, so I have one firewall for the DSL and one for the Comcast (rather than madly editing settings to switch back and forth). Then I will publish the new Comcast static IP address as a backup email server address in the DNS MX records, wait a while to let it propagate, and then switch my home over from the DSL to the Comcast. Then I'll change DNS to make that the primary mail address and the DSL the secondary, let that go a while and make sure it seems reliable. Then I'll remove the DSL from the DNS MX records completely, and finally shut down the DSL service. (I thought about keeping the DSL as a backup, but the reason I'm leaving DSL is that it has become unreliable; and I have heard that Comcast business Internet is reliable.) Final question, any advice for me? Anything you think might be useful, helpful, or educational. Thanks.
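    For the MX cut-over steps in that plan, propagation and reachability can be checked from the shell before and after each DNS change; a small hedged example (the domain and the 203.0.113.10 address are placeholders for your own):

        # current MX records and their preference values, as the world sees them
        dig +short MX example.com
        dig @8.8.8.8 +short MX example.com

        # confirm the new Comcast address answers on port 25 before promoting it
        nc -vz 203.0.113.10 25    # assumes a netcat that supports -z port checks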

  • DSL Modem with Wireless Router

    - by David
    I have a D-Link WBR-1310 wireless router and a TP-Link TD-8616 DSL modem. My old DSL modem died recently and I got the TP-Link as a replacement. With my old DSL modem, I plugged it into the WAN port on my D-Link and I could reach the internet through wireless and through the network. However, when I plugged the new TP-Link into the WAN port, I was not able to get any internet connectivity (either on the network ports or through wireless). So I plugged my laptop directly into the TP-Link DSL modem and I was able to get internet connectivity. I'm trying to figure out why my laptop can see the internet connection, but not the D-Link router. I think that the problem is due to the IP networking. My D-Link was originally set to have IP address 192.168.1.1. According to the documentation for the TP-Link DSL modem, it uses 192.168.1.1 as its IP address. I do not believe that my old DSL modem had an IP address. I logged into my D-Link router and changed its IP address to 192.168.1.2 and restarted it. Unfortunately, I still could not see the internet from my wireless devices. I've read a few forum postings which implied that I needed to set up a "bridge" between the two networks. Does that sound correct? Why didn't my old DSL modem require a bridge? I read pg. 12-13 of my D-Link's manual and they suggest that I need to disable UPnP, DHCP, and then plug the DSL modem into one of the LAN ports on my router. I'm concerned about doing this since I don't think that the firewall will work if I plug my DSL modem into one of the LAN ports. I also have a home NAS on my network and I wouldn't want that to be available over the internet. Does anyone have any advice about how I can get my TP-Link DSL modem to work with my D-Link router? Thanks!

  • Cannot exclude a path from basic auth when using a front controller script

    - by Adam Monsen
    I have a small PHP/Apache2 web application wherein I'd like to do two seemingly incompatible operations: Route all requests through a single PHP script (a "front controller", if you will) Secure everything except API calls with HTTP basic authentication I can satisfy either requirement just fine in isolation, it's when I try to do both at once that I am blocked. For no good reason I'm trying to accomplish these requirements solely with Apache configuration. Here are the requirements stated as an example. A GET request for this URL: http://basic/api/listcars?max=10 should be sent through front.php without requiring basic auth. front.php will get /api/listcars?max=10 and do whatever it needs to with that. Here's what I think should work. In my /etc/hosts I added 127.0.0.1 basic and I am using this Apache config: <Location /> AuthType Basic AuthName "Home Secure" AuthUserFile /etc/apache2/passwords require valid-user </Location> <VirtualHost *:80> ServerName basic DocumentRoot /var/www/basic <Directory /var/www/basic> <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^(.*)$ /front.php/$1 [QSA,L] </IfModule> </Directory> <Location /api> Order deny,allow Allow from all Satisfy any </Location> </VirtualHost> But I still always get a HTTP 401: Authorization Required response. I can make it work by changing <Location /api> into <Location ~ /api> but this allows more than I want to past basic auth. I also tried changing the <Directory /var/www/basic> section into <Location />, but this doesn't work either (and it results in some strange values for PATH_TRANSLATED being passed to the script). I searched around and found many examples of selective exclusion of basic auth, but none that also incorporated a front controller. I could certainly do something like handle basic auth in the front controller, but if I can have Apache do that instead I'll be able to keep all authentication logic out of my PHP code. A friend suggested splitting this into two vhosts, which I know also works. This used to be two separate vhosts, actually. I'm using Apache 2.2.22 / PHP 5.3.10 on Ubuntu 12.04.
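    One hedged reading of why the prefix form never matches: the [L] rewrite triggers an internal redirect to /front.php/api/..., and the Location sections are evaluated again against that rewritten URI, which no longer begins with /api (while the unanchored regex ~ /api matches "/api" anywhere in a URI, hence it lets through more than intended). If that is what is happening, an anchored regex covering both forms of the URI might be enough; untested sketch in Apache 2.2 syntax:

        <Location ~ "^/(front\.php/)?api(/|$)">
            Order deny,allow
            Allow from all
            Satisfy any
        </Location>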

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own set of advantages and disadvantages. JPG, for instance, is suited for real life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures: Picking the right format can be a chore when working with a large number of files. That's why I would love to find a way to automate it. A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce their size to less than 20% of their original file size, while maintaining the same quality. This is important as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to be formatted as PNGs. Converting these would result in insignificant size reductions and sometimes even increases in filesize - that's at least what my test runs showed. What we can take from this is that file size after compression can serve as an indicator of what format is suited best for a particular image. It's not a particularly accurate predictor, but works well enough. So why not use it in the form of a script: I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks: #!/bin/bash inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do mogrify -format jpg -quality 92 "$file" done The advanced version of the script would have to be able to handle spaces in file names and directory names preserve the original file names flatten PNG images if an alpha value is set compare the file size between the temporary converted image and its original determine if the difference is greater than a given percentage act accordingly The actual conversion could be done with imagemagick tools: convert -quality 92 -flatten -background white file.png file.jpg Unfortunately, my bash skills aren't even close to advanced enough to convert the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
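    As a starting point, here is a rough bash sketch of the compare-and-decide step described above; the watch-directory argument, the temporary-file naming and the 60% threshold are all assumptions to adjust, and it has not been battle-tested:

        #!/bin/bash
        # Watch a directory, convert each new PNG to a temporary JPEG, and keep
        # whichever file wins by more than the chosen margin.
        watch_dir=$1
        threshold=60   # keep the JPEG only if it is <= 60% of the PNG's size

        inotifywait -m --format "%w%f" -r -e create -e moved_to "$watch_dir" |
        while IFS= read -r file; do
            case "$file" in *.png) ;; *) continue ;; esac
            jpg="${file%.png}.jpg"
            convert -quality 92 -flatten -background white "$file" "$jpg" || continue
            png_size=$(stat -c %s "$file")
            jpg_size=$(stat -c %s "$jpg")
            if (( jpg_size * 100 / png_size <= threshold )); then
                rm -- "$file"     # JPEG is clearly smaller: drop the PNG
            else
                rm -- "$jpg"      # saving is insignificant: keep the lossless PNG
            fi
        done

    Quoting "$file" throughout keeps spaces in file and directory names from breaking the loop, which was one of the stated requirements.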

  • Local, Multiple-Blog (ie Dashboard) Blogging Software as Alternative to Blogger [closed]

    - by Synetech inc.
    FOR RE-OPENING: I don’t see how it is “too localized”. Plenty of people like to run their own web-apps instead of relying on third-party services. If that were not true, then WordPress, phpBB, Apache, PHP, etc. would not be available for general use. As for “Internet audience at large”, I must have missed the part where it was a rule that you are only allowed to ask for help for things that applies to everyone else too; I thought you were allowed to ask for help. Besides, if someone knows of software that fulfills the question, then it is relevant to whomever would download it, and so is not only applicable to an “extraordinarily narrow situation”. (Besides, the reason that I was asking was because Google had announced that it was discontinuing FTP support for Blogger and so many people were affected—read NOT TOO LOCALIZED—and were trying to find alternatives.) Hi, I am trying to find software (for Windows, PHP, MySQL/SQLite/flat, free, open-source) to localize all of my software and service so that I can keep my files and host when needed from my own system instead of some remote computer. I’ve already selected things like web, FTP, and db servers. I’ve chosen forum and wiki software, as well as an RCS system. At this point, all I’m still looking for—actually, I still need to choose bug-tracking software, but besides that—is blogging software. I still use Blogger and am trying to find something that I can use to import my Blogger stuff and store on (and publish to) my home system. I have read of various blogging software including WordPress, MovableType, and TextPattern. The problem is that I am trying to find something that is like Blogger (which from what I can tell is not available on Google Code as open-source). What I specifically need is multiple-blog support. That is, multiple blogs ala the Blogger Dashboard, not multiple user accounts (although that is important as well). The closest thing that I have been able to find is using Wordpress categories to simulate multiple blogs, but that’s not really what I want. I want software that I can run locally that has a multi-blog dashboard like Blogger. Any ideas? Thanks a lot!

  • Install Ubuntu 12.04 in UEFI mode on a HP Pavilion dv6-6c40ca

    - by Marlen T. B.
    I have recently (as of July 2012) bought an HP Pavilion dv6-6c40ca laptop. It came pre-installed with Windows 7 on an MBR. I installed Ubuntu 12.04 on it on a GPT partition in what I think is BIOS emulation mode. I made a BIOS-Grub partition so the install didn't fail. That is what it is for .. right? Now I want to upgrade to UEFI mode. How would I install Ubuntu 12.04 in UEFI mode on an HP Pavilion dv6-6c40ca? Or is it impossible? My laptop, despite its new age, may not be UEFI 2.0+ capable. If it isn't, how can I install a software UEFI (i.e. a DUET such as the one by tianocore)? Or is this impossible too? A link to my laptop's specs is: http://h10025.www1.hp.com/ewfrf/wc/document?docname=c03137924&tmp_task=prodinfoCategory&cc=ca&dlc=en&lang=en&lc=en&product=5218530 My laptop should have UEFI, given this link from HP http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c01442956#N218. And from the link I draw a quote: That means most notebooks distributed with Windows Vista, and all notebooks distributed with Windows 7, have the UEFI environment. My laptop had Windows 7 Home Premium pre-installed. OK. Following the comments so far -- NOTE: I am trying to do this on an external drive so I can see if it works. I have partitioned the drive using GParted as a GPT drive. Created a 200MB partition at the beginning of the drive with a FAT32 file system. Given the 200MB partition a label of "EFI". Set the boot flag on the 200MB partition. What should I do next to install Ubuntu 12.04? Given the link: https://help.ubuntu.com/community/UEFIBooting#Selecting_the_.28U.29EFI_Graphic_Protocol In my first read through (just to see if I will understand everything before I start) I get to step 2.3 Install GRUB2 in (U)EFI systems The first line is Boot into Linux (any live ISO) preferably in UEFI mode. Um .. how do you tell what mode your live CD is in?! And how do you change it if the mode is wrong?
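    On the specific question of how to tell which mode a live session booted in, a widely used check (run from a terminal inside the booted live environment) is to look for the EFI variables directory that the kernel only exposes when it was started through UEFI firmware:

        # prints "UEFI" if this session booted via UEFI, "BIOS" otherwise
        [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS

    To change the mode, the firmware's boot menu usually lists the same USB stick or DVD twice, once with a "UEFI:" prefix; picking the prefixed entry starts the installer in UEFI mode.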

  • enabling gzip with htaccess...why is it hit or miss?

    - by adam-asdf
    I have shared hosting through Justhost. I use the HTML5 Boilerplate .htaccess (have tried other methods from here and there without luck) the compression part is as follows: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # Compress all output labeled with one of the following MIME-types <IfModule mod_filter.c> AddOutputFilterByType DEFLATE application/atom+xml \ application/javascript \ application/json \ application/rss+xml \ application/vnd.ms-fontobject \ application/x-font-ttf \ application/xhtml+xml \ application/xml \ font/opentype \ image/svg+xml \ image/x-icon \ text/css \ text/html \ text/plain \ text/x-component \ text/xml </IfModule> </IfModule> However, it isn't working—at least I don't think—My home page (html) isn't compressing, the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS. However, some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/ What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating (and no warnings about compression), yet YSlow and Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this. To reduce cookies I set up a subdomain on my site and I thought maybe that was it so I added an .htaccess there also, but no luck. To reduce http requests I embedded some of webfonts and images in CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed) so I thought maybe that was it and I spent all day separating and asynchronously loading those portions (via Modernizr.load) but that hasn't helped either...if anything it made it worse due to increasing http requests (I realize speed scores of async resources may be misleading). Researching this, it seems to be a fairly common issue but I haven't found an explanation/solution. I don't think it is a MIME-type issue, I have quadruple checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22 and I have looked at everything I can find. What gives?
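    One way to take the browser extensions out of the equation is to ask the server directly whether it compressed a given response; a quick check from a shell (the stylesheet path is a placeholder):

        # fetch while advertising gzip support and show only the response headers
        curl -s -H 'Accept-Encoding: gzip,deflate' -o /dev/null -D - http://adaminfinitum.com/ | grep -i 'content-encoding'
        curl -s -H 'Accept-Encoding: gzip,deflate' -o /dev/null -D - http://adaminfinitum.com/css/style.css | grep -i 'content-encoding'

    If the header is consistently missing, one hedged possibility is that mod_filter is not loaded on the shared host, in which case the whole <IfModule mod_filter.c> block above is silently skipped rather than any particular MIME type being at fault.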

  • Can dnsmasq act as the DHCP server for selected nodes, overriding the existing DHCP server on the same LAN?

    - by user183394
    I am trying to set up a small "lab" at home. Like many modern homes, I have a regular DSL service which comes with a 2Wire 3600HGV router, which also acts as a DHCP server. Since I would like to PXE boot a few computers in my "lab", the 2Wire is too inflexible for the adjustments that I want to make, and I have used dnsmasq at work, I would like to use dnsmasq as the DHCP server for the few nodes in my "lab" if feasible. In the dnsmasq man page, there is the following: [...] -K, --dhcp-authoritative (IPv4 only) Should be set when dnsmasq is definitely the only DHCP server on a network. It changes the behaviour from strict RFC compliance so that DHCP requests on unknown leases from unknown hosts are not ignored. This allows new hosts to get a lease without a tedious timeout under all circumstances. It also allows dnsmasq to rebuild its lease database without each client needing to reacquire a lease, if the database is lost. [...] As far as I know, the ISC DHCP server can use the following to do what I would like to accomplish: authoritative; [...] subnet 192.168.1.0 netmask 255.255.255.0 { host nb0 { # only give DHCP information to this computer: hardware ethernet e8:9a:8f:17:70:42; fixed-address 192.168.1.10; option subnet-mask 255.255.255.0; option routers 192.168.1.254; option domain-name-servers 192.168.1.254; # Non-essential DHCP options filename "/pxelinux.0"; } [...] But I much prefer dnsmasq's "all-in-one-ness". My question: do I have to couple the -K option with something else? As shown in the example above, the ISC DHCP server requires the MAC addresses of managed nodes to be explicitly specified. Does dnsmasq have something similar? FYI, the machine on which I plan to run dnsmasq runs CentOS 6.3 64bit. It has a statically assigned IP address: 192.168.1.3.
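    To the closing question: dnsmasq's rough equivalent of the per-host ISC stanza is dhcp-host=, and hosts without such an entry can be refused outright with dhcp-ignore, so the 2Wire keeps serving everything else; in that split setup you would probably not want --dhcp-authoritative at all, since dnsmasq is deliberately not the only DHCP server on the segment. A hedged sketch of /etc/dnsmasq.conf, reusing the MAC and addresses from the ISC example above (the TFTP root is an assumption):

        # offer leases only to machines explicitly listed with dhcp-host
        dhcp-range=192.168.1.10,192.168.1.50,12h
        dhcp-host=e8:9a:8f:17:70:42,nb0,192.168.1.10
        dhcp-ignore=tag:!known        # ignore anyone without a dhcp-host entry

        # PXE: dnsmasq can hand out the boot file and serve it over TFTP itself
        dhcp-boot=pxelinux.0
        enable-tftp
        tftp-root=/var/lib/tftpboot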

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an openldap server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used): ########## # Basics # ########## include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema pidfile /var/run/slapd/slapd.pid argsfile /var/run/slapd/slapd.args loglevel none modulepath /usr/lib/ldap # modulepath /usr/local/libexec/openldap moduleload back_bdb.la database config #rootdn "cn=admin,cn=config" rootpw secret database bdb suffix "dc=example,dc=com" rootdn "cn=manager,dc=example,dc=com" rootpw secret directory /usr/local/var/openldap-data ######## # ACLs # ######## access to attrs=userPassword by anonymous auth by self write by * none access to * by self write by * none When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2). backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2) slap_startup failed (test would succeed using the -u switch) Using the -u switch it works, of course. But that merely creates the configuration. It doesn't resolve the underlying problem: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u config file testing succeeded Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the dbd file wasn't created: root@server:/etc/ldap# ls -al /usr/local/var/openldap-data total 4328 drwxr-sr-x 2 openldap openldap 4096 Mar 1 15:23 . drwxr-sr-x 4 root staff 4096 Mar 1 13:50 .. -rw-r--r-- 1 openldap openldap 3080 Mar 1 14:35 DB_CONFIG -rw------- 1 openldap openldap 24576 Mar 1 15:23 __db.001 -rw------- 1 openldap openldap 843776 Mar 1 15:23 __db.002 -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003 -rw------- 1 openldap openldap 655360 Mar 1 14:35 __db.004 -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005 -rw------- 1 openldap openldap 32768 Mar 1 15:23 __db.006 -rw-r--r-- 1 openldap openldap 2048 Mar 1 15:23 alock (note that, because I'm doing this as root, I had to also change ownership of some of the files created by slaptest) Finally, I can start the slapd service, but it dies in the attempt (text from syslog): Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config: Mar 1 15:06:23 server slapd[21160]: slapd stopped. Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy. I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye. All my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!
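    The bdb_db_open error is slapd looking for a database that has never been created in that directory, so one sequence that is often tried is sketched below (paths and the openldap user are taken from the question and the Debian packaging; back things up first, this is only a sketch):

        # which directory does the converted slapd.d config actually point at?
        grep -ri olcDbDirectory /etc/ldap/slapd.d/

        # move aside any half-initialised BDB environment files from earlier runs
        mkdir -p /root/ldap-backup
        mv /usr/local/var/openldap-data/__db.* /usr/local/var/openldap-data/alock /root/ldap-backup/ 2>/dev/null

        # let slapd's user (not root) own both config and data, then let slapd
        # create id2entry.bdb itself on first start
        chown -R openldap:openldap /etc/ldap/slapd.d /usr/local/var/openldap-data
        /etc/init.d/slapd start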

  • Why Does Wireless Gear Degrade Over Time?

    - by bahamat
    I saw this originally posted on slashdot, but their comment format is not conducive to actually getting a correct answer. Having directly experienced this phenomenon myself, I'm now asking here where I think I can actually get an educated answer. Here's the original question verbatim: Lately I have replaced several home wireless routers because the signal strength has been found to be degraded. These devices, when new (2+ years ago), would cover an entire house. Over the years, the strength seems to decrease to a point where it might only cover one or two rooms. Of the three that I have replaced for friends, I have not found a common brand, age, etc. It just seems that after time, the signal strength decreases. I know that routers are cheap and easy to replace but I'm curious what actually causes this. I would have assumed that the components would either work or not work; we would either have a full signal or have no signal. I am not an electrical engineer and I can't find the answer online so I'm reaching out to you. Can someone explain how a transmitter can slowly go bad? Common (incorrect, but repeated) answers from slashdot include: Back then your neighbors didn't have wifi, now they do. They're drowning you out. I don't think this is likely because replacing the access point with a new one and using the same frequencies solves the problem. Older devices had low transmit power. Crank that baby. As mentioned by a FreeBSD wireless developer, this violates regulations and can physically damage the equipment. It was also mentioned that higher power in one direction is not necessarily reciprocated. This shows higher bars, but not necessarily a better connection. Manufacturers make cheap crap designed to wear out. This one actually may be legitimate although it is overly broad. What specifically causes damage over time? Heat? Excessive power? So can anyone provide an informed answer on this? Is there any way to fix these older access points?

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server I can create VM's (Virtual Machine) and network interfaces fine but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces it gets stuck on startup at the starting SeaBIOS page with this message: Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer) So far I've verified this with two VM's, a Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VM's doesn't seem to matter. I'm trying to have one bridged interface and three private networks using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces? Edit: Additionally the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging: /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode): 14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed I've also tried disabling app armor but that doesn't seem to make a difference.
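    Not a confirmed fix, but a cheap experiment: every emulated rtl8139 loads its own PXE option ROM into SeaBIOS, so changing the NIC model (or getting below four NICs by trunking VLANs into Vyatta) changes what the BIOS has to load at the point where it hangs. Sketch of the model change (domain name taken from the command line above):

        virsh edit vyatta
        #   inside each <interface> element, try a different emulated model:
        #       <model type='virtio'/>     (or 'e1000')
        #   save, restart the guest, and see whether SeaBIOS still stalls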

  • Setting up a fileserver, some questions?

    - by Tanax
    Recently I've become very interested in setting up a fileserver, mostly for home usage but also because of the fact that I live in 2 places, I need to be able to access my files from both homes. I have already done some research into this but I am unclear about some things. My requirements are the following; Needs to work on both Mac and PC(only using Windows atm on PC but could be good if it supports more OS's to make it futureproof in case I need Linux or something else) Need to be able to set up a folder/drive/network space to act as a link to a certain folder on the fileserver All files should only be stored on the fileserver, e.g. no "shared" folders like in Dropbox where files are stored on the client computer Would prefer it if folders are password protected or that I can somehow specify what users can access the fileserver's shares Fileserver's OS most likely have to be Windows due to other factors outside of being just a fileserver I've already kinda figured out that I will need to set up a VPN so that I can access my fileserver from outside the local network. Probably going to use OpenVPN. Question 1: How would I go about to set up a VPN server so that I can connect to my local network at the fileserver's location? I know that since I'm on a dynamic IP I will have to get some sort of dynamic DNS server - I've already checked into this and I'm fairly sure I know how to fix that. I also know that I will have to forward the port OpenVPN uses in my router. Question 2: How would I actually share the folders on the fileserver so that I can access them on my other computers? I've researched into Samba but I'm uncertain if it needs to be run on a Linux OS. I know that the clients connecting to it can be Windows for example but can the Samba "server" be run on Windows? Also it appears that Samba shares a folder, meaning it works like Dropbox - I don't want that. So how would I share a folder in that case to make it work like I want it to? Sorry for the incredibly long question, I tried to structure it the best I could for easier read. Thanks in advance!

  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should be loading by default but instead I get a 404 error when I try to access http://example.com/ The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved. This has something to do with nginx and unicorn which I am using to power Rails 3 When take unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file: upstream unicorn { server unix:/tmp/.sock fail_timeout=0; } server { server_name example.com; root /www/example.com/current/public; index index.html; keepalive_timeout 5; location / { try_files $uri @unicorn; } location @unicorn { proxy_pass http://unicorn; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } } My config/routes.rb is pretty much empty: Advertise::Application.routes.draw do |map| resources :users end The index.html file is located in public/index.html and it loads fine if I request it directly: http://example.com/index.html To reiterate, when I remove all references to unicorn from the nginx conf, index.html loads without any problems, I have a hard time understanding why this occurs because nginx should be trying to load that file on its own by default. -- Here is the error stack from production.log: Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700 Processing by HomeController#index as HTML Completed in 1ms ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"): /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml' etc... -- nginx error log for this virtualhost comes up empty: 2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection My guess is unicorn is intercepting the request to index.html before nginx gets to process it.
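    A hedged reading of the config above: for "/", try_files never ends up serving the directory index and falls through to @unicorn, which matches the Rails log showing HomeController#index handling the request. Naming the index file explicitly in try_files is a commonly suggested tweak (untested here):

        location / {
            # let an existing file, then the directory index, win before unicorn
            try_files $uri $uri/index.html @unicorn;
        }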

  • iPhone 3.1 Black Screen

    - by churnd
    Since I've updated my 3G iPhone to 3.1, it will become unresponsive after ~4-5 hours. It looks like it's off, but it's not. None of the buttons do anything that I can see. The only way I can get a screen back is to do a hard reset (Power + Home). Has anyone else had this problem? What can I do to fix it? I've had it happen 3 times already. Very annoying. Apparently, the Discussions forum on Apple.com also shows lots of people are having the same problem. One guy said he did a restore and that fixed it for him, which I haven't tried yet because many others also tried that and it didn't help. A few others have said turning off WIFI fixes it for them, so I'm going to try that today. Another thing I've noticed is that now, when I plug in my iPhone to my Macbook Pro, iTunes 9 does not launch. It used to before the update. ** Further update and possible solution ** I left my iPhone plugged into my Macbook Pro yesterday while at work, and didn't use it much throughout the day. I noticed it didn't lock up once. I suspect this was because it was connected to a power source. I continued to use my iPhone with no problem yesterday and today. Still no lockups as of now. I personally think this was Spotlight and/or Genius "doing its thing" from a new install. Kinda like an OS upgrade on your Mac... Spotlight has to rebuild itself and those first couple of hours can be kinda sluggish. This would be even more so on the iPhone where processing power is limited, and I'm sure the processor is clocked down a bit while on battery power, which would further amplify the problem. Again, just my guess. My iPhone is working fine now. ** Final Solution ** Update to 3.1.2. Problem solved.

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are: make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them). make it more intelligent about which file gets retained as the actual inode content and which file(s) become sym links. I would like to be able to choose to retain the file that is either a) the shortest path, b) the longest path or c) has the filename with the most alpha characters (which will probably be the most descriptive name). allow it to read the directories to process either from parameters passed in or from a file. optionally, write a log of all changes and/or all files not processed. Of all of these, #2 is the most important for me right now. I need to process some files with it and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script: #!/bin/bash # clean up known problematic files first. ## find /home -type f -wholename '*Icon* ## *' -exec rm '{}' \; # Configure script environment # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ set -o nounset dir='/SOME/PATH/HERE/' # For each path which has multiple links # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # (except ones containing newline) last_inode= while IFS= read -r path_info do #echo "DEBUG: path_info: '$path_info'" inode=${path_info%%:*} path=${path_info#*:} if [[ $last_inode != $inode ]]; then last_inode=$inode path_to_keep=$path else printf "ln -s\t'$path_to_keep'\t'$path'\n" rm "$path" ln -s "$path_to_keep" "$path" fi done < <( find "$dir" -type f -links +1 ! -wholename '* *' -printf '%i:%p\n' | sort --field-separator=: ) # Warn about any excluded files # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ buf=$( find "$dir" -type f -links +1 -path '* *' ) if [[ $buf != '' ]]; then echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf" fi exit 0
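    For requirement #2 (choosing which path keeps the real data), a small helper like the sketch below could replace the current "first path seen wins" behaviour inside the loop; only policy (b), longest path, is shown, and the other policies just change the comparison:

        # pick the "keeper" among several paths sharing one inode
        #   a) shortest path:        ${#candidate} -lt ${#best}
        #   b) longest path (below): ${#candidate} -gt ${#best}
        #   c) most alpha chars:     compare "$(tr -cd '[:alpha:]' <<<"$candidate" | wc -c)"
        choose_keeper() {
            local best='' candidate
            for candidate in "$@"; do
                if [[ -z $best || ${#candidate} -gt ${#best} ]]; then
                    best=$candidate
                fi
            done
            printf '%s\n' "$best"
        }

    Collecting all paths for an inode into an array first and then calling the helper keeps the selection policy separate from the unlink/relink step.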

  • How to change mouse pointer icon in Xfce Debian 7 Wheezy?

    - by kadaj
    I copied the cursor theme (oxy-neon or Oxygen Neon) to /usr/share/icons and from Applications Menu - Settings - Mouse, I am able to see the new theme. I clicked on it and the pointer doesn't change. However the text typing icon ('I'), busy icon, hand icon, and resize window icons got changed. The pointer icon remains the same, the black Adwaita. I removed the Adwaita folder from the icons folder, and still the mouse pointer doesn't change. Is the pointer theme specified elsewhere? I have no setting under home directory. I tried logging out, restart, restarting xfwm4, but nothing works. I just found that the icon pointer changes when the pointer is inside Firefox, but it's not consistent. It keeps changing when I click menu items. Very weird. Any idea how to fix this? This is the output of running: gsettings list-recursively org.gnome.desktop.interface : ~$ gsettings list-recursively org.gnome.desktop.interface org.gnome.desktop.interface automatic-mnemonics true org.gnome.desktop.interface buttons-have-icons false org.gnome.desktop.interface can-change-accels false org.gnome.desktop.interface clock-format '24h' org.gnome.desktop.interface clock-show-date false org.gnome.desktop.interface clock-show-seconds false org.gnome.desktop.interface cursor-blink true org.gnome.desktop.interface cursor-blink-time 1200 org.gnome.desktop.interface cursor-blink-timeout 10 org.gnome.desktop.interface cursor-size 24 org.gnome.desktop.interface cursor-theme 'Adwaita' org.gnome.desktop.interface document-font-name 'Sans 11' org.gnome.desktop.interface enable-animations true org.gnome.desktop.interface font-name 'Cantarell 11' org.gnome.desktop.interface gtk-color-palette 'black:white:gray50:red:purple:blue:light blue:green:yellow:orange:lavender:brown:goldenrod4:dodger blue:pink:light green:gray10:gray30:gray75:gray90' org.gnome.desktop.interface gtk-color-scheme '' org.gnome.desktop.interface gtk-im-module '' org.gnome.desktop.interface gtk-im-preedit-style 'callback' org.gnome.desktop.interface gtk-im-status-style 'callback' org.gnome.desktop.interface gtk-key-theme 'Default' org.gnome.desktop.interface gtk-theme 'Adwaita' org.gnome.desktop.interface gtk-timeout-initial 200 org.gnome.desktop.interface gtk-timeout-repeat 20 org.gnome.desktop.interface icon-theme 'gnome' org.gnome.desktop.interface menubar-accel 'F10' org.gnome.desktop.interface menubar-detachable false org.gnome.desktop.interface menus-have-icons false org.gnome.desktop.interface menus-have-tearoff false org.gnome.desktop.interface monospace-font-name 'Monospace 11' org.gnome.desktop.interface show-input-method-menu true org.gnome.desktop.interface show-unicode-menu true org.gnome.desktop.interface text-scaling-factor 1.0 org.gnome.desktop.interface toolbar-detachable false org.gnome.desktop.interface toolbar-icons-size 'large' org.gnome.desktop.interface toolbar-style 'both-horiz' org.gnome.desktop.interface toolkit-accessibility false ~$
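    Two places that commonly override what the Xfce mouse dialog sets, shown as a hedged sketch (the theme name comes from the question; the paths are standard but worth double-checking on your system):

        # per-user X cursor default, read at session start
        mkdir -p ~/.icons/default
        printf '[Icon Theme]\nInherits=oxy-neon\n' > ~/.icons/default/index.theme

        # Debian's system-wide default cursor theme
        update-alternatives --config x-cursor-theme

        # the gsettings key shown above still says 'Adwaita'; GTK3/GNOME apps read it
        gsettings set org.gnome.desktop.interface cursor-theme 'oxy-neon'

    A log out and back in is usually needed before every application picks up the change, which might also explain the inconsistent behaviour inside Firefox.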

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my elasticsearch instance 5 days ago and I haven't managed to start it since then. I get no output in the logs under /var/log/elasticsearch/, nor does the elasticsearch binary print any information when running it using elasticsearch -f. I once managed to get this output. [2012-11-15 22:51:18,427][INFO ][node ] [Piper] {0.19.11}[29584]: initializing ... [2012-11-15 22:51:18,433][INFO ][plugins ] [Piper] loaded [], sites [] Running curl http://localhost:9200 resulted in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any difference. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch results in this output. /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64 /usr/local/share/elasticsearch/bin/service/elasticsearch.conf wrapper.syslog.ident=elasticsearch wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status wrapper.script.version=3.5.14 /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Xms1024m -Xmx1024m -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888 -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF My current system: ElasticSearch Version: 0.19.11, JVM: 23.2-b09 Ubuntu 12.04 LTS I've tried re-installing elasticsearch and removing old directories. Why can't I get it to start?
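    With the main log staying empty, the Tanuki service wrapper's own files are the next place to look; a few hedged checks, with the paths taken from the wrapper arguments visible in the ps output above:

        # the wrapper keeps its status (and usually a wrapper.log) next to the service script
        ls -l /usr/local/share/elasticsearch/bin/service/
        cat /usr/local/share/elasticsearch/bin/service/elasticsearch.status

        # make sure nothing else is already sitting on the HTTP/transport ports
        ss -tlnp | grep -E ':9[23]00'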

  • Ubuntu hard drive repartition without uninstalling Ubuntu or Windows 7 and without losing data on the hard drive

    - by user141692
    I have an Asus R500V with a 750 GB GPT disk, a UEFI motherboard, a Core i7 3610QM and an NVIDIA GeForce GT, set up as an Ubuntu and Windows 7 dual boot. I had problems installing Ubuntu because of GRUB, but I fixed them with https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/807801. However, I still get the warning "warning: the partition is misaligned by 3072 bytes. this may result in very poor performance. repartitioning is suggested" on every Linux partition I made, and my 750 GB is not being used at maximum capacity; only 698 GB is usable. I want to make partitions so that the warning doesn't show up and I can use the maximum capacity of the HDD, as I did with another dual-boot laptop (a Compaq Presario CQ40). I have the following partitions: unknown 1.0Mb: partition type: Linux Basic Data partition, device: /dev/sda2, usage: --, partition flags: --, partition label: -- warning: the partition is misaligned by 3072 bytes. this may result in very poor performance. repartitioning is suggested. -system 210 Mb FAT, usage: filesystem, partition type: EFI System Partition, partition flags: --, label: system, device: /dev/sda1, partition label: EFI system partition, capacity: 210MB, available: --, mount point: mounted at /boot/efi -134 Mb NTFS, usage: filesystem, partition type: Linux basic data partition, partition flags: --, device: /dev/sda7, partition label: --, capacity: 134MB, available: --, mount point: not mounted -OS 250 GB NTFS, usage: filesystem, partition type: Linux basic data partition, partition flags: --, type: NTFS, label: OS, device: /dev/sda3, partition label: basic data partition, capacity: 250 GB, available: --, mount point: not mounted -10GB FAT32, usage: filesystem, partition type: EFI system partition, partition flags: --, type: FAT32, label: --, device: /dev/sda4, partition label: --, capacity: 10GB, available: --, mount point: not mounted warning: the partition is misaligned by 3072 bytes. this may result in very poor performance. repartitioning is suggested. -10GB ext4, usage: filesystem, partition type: Linux basic data partition, partition flags: --, type: EXT4 (version 1), label: --, device: /dev/sda9, partition label: --, capacity: 10 GB, available: --, mount point: mounted at / warning: the partition is misaligned by 1536 bytes. this may result in very poor performance. repartitioning is suggested. -478GB ext4, usage: filesystem, partition type: Linux basic data partition, partition flags: --, type: EXT4, label: --, device: /dev/sda5, partition label: --, capacity: 478GB, available: --, mount point: mounted at /home warning: the partition is misaligned by 512 bytes. this may result in very poor performance. repartitioning is suggested. -2.0GB swap, usage: swap space, partition type: Linux swap partition, partition flags: --, device: /dev/sda6, partition label: --, capacity: 2.0GB warning: the partition is misaligned by 512 bytes. this may result in very poor performance. repartitioning is suggested. As you can see, it is not well organized, so please help me organize the partitions without uninstalling W7 and, if possible, without touching GRUB 2.
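    Before moving anything, it may help to confirm exactly which partitions are misaligned and by how much; parted can report this directly (device name from the question, best run from a live session with nothing mounted):

        # show partition starts/ends in sectors so misaligned starts are visible
        parted /dev/sda unit s print

        # ask parted whether a given partition (here number 5) is optimally aligned
        parted /dev/sda align-check optimal 5

    Fixing alignment without losing data generally comes down to moving or resizing the affected partitions from a live GParted session, with a verified backup first, since a move rewrites every sector of the partition.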
