Search Results

Search found 1099 results on 44 pages for 'jan verhoeven'.

Page 34/44 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Samba fails to install

    - by jschoen
    I am running XBMC, which is built around Ubuntu 10.04. It does not come with samba pre-installed, and I need to share some media with a couple other boxes. I followed the Think Geek directions found here. I had it all set up a couple days ago, and thought I was in the clear. I rebooted this evening and when it came back up Samba was not started. I determined this by trying to access the samba shares; it would return an error connecting to the server. I can ssh into it, so I know it is connected. In my infinite wisdom, I figured I just messed something up and would just uninstall and reinstall. So I did: sudo apt-get purge samba and sudo apt-get purge smbfs. Then tried to follow the tutorial above again. What I get after running sudo apt-get install samba smbfs is Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: openbsd-inetd inet-superserver smbldap-tools ldb-tools ufw smbclient The following NEW packages will be installed: samba smbfs 0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded. Need to get 0B/8,131kB of archives. After this operation, 22.6MB of additional disk space will be used. Preconfiguring packages ... Selecting previously deselected package samba. (Reading database ... 57098 files and directories currently installed.) Unpacking samba (from .../samba_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb)... Selecting previously deselected package smbfs. Unpacking smbfs (from .../smbfs_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ... Processing triggers for ureadahead ... Setting up samba (2:3.4.7~dfsg-1ubuntu3.2) ... Generating /etc/default/samba... update-alternatives: using /usr/bin/smbstatus.samba3 to provide /usr/bin/smbstatus (smbstatus) in auto mode. smbd start/running, process 2963 **start: Job failed to start** Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.2) ... The bold is my own emphasis. So I am not sure what I messed up here, or how to get back to where it was. Though I am pretty sure I made it worse than it is. I found where the logs are located, /var/log, and found this line that seems to be the culprit. Jan 29 11:59:34 XBMCLive smbd[2806]: error opening config file So it seems not to create the configuration files. Is there a way to get samba to try to recreate them again?
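
    One route to get the configuration regenerated, sketched below on the assumption of stock Ubuntu 10.04 packaging: /etc/samba/smb.conf actually belongs to the samba-common package, so purging that too and reinstalling should lay the default config back down (the template path is the standard Debian/Ubuntu location):

        sudo apt-get purge samba smbfs samba-common
        sudo apt-get install samba smbfs
        # if /etc/samba/smb.conf is still missing afterwards, restore the shipped template by hand:
        sudo cp /usr/share/samba/smb.conf /etc/samba/smb.conf
        sudo service smbd restart    # smbd is the upstart job name seen in the output above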

    Read the article

  • Ubuntu Server Cannot Route to the Internet

    - by ejes
    I've been having this problem for weeks now, and I can't seem to figure out the problem. My server can route the local network and serves it well, however it cannot access the internet. It can't be the router because everything else on this lan can route through the router. I've even switched the ethernet port. Any help would be appreciated. I've tried all the usual places, anyway, here are the configs: root@uhs:~# uname -a Linux uhs 3.0.0-16-generic-pae #28-Ubuntu SMP Fri Jan 27 19:24:01 UTC 2012 i686 i686 i386 GNU/Linux root@uhs:~# cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface # auto eth1 # iface eth1 inet dhcp auto eth0 iface eth0 inet static address 192.168.0.3 netmask 255.255.255.0 broadcast 192.168.0.255 gateway 192.168.0.1 root@uhs:~# ping -c 4 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. 64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=0.334 ms 64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.339 ms 64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.324 ms 64 bytes from 192.168.0.1: icmp_req=4 ttl=64 time=0.339 ms --- 192.168.0.1 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 2997ms rtt min/avg/max/mdev = 0.324/0.334/0.339/0.006 ms root@uhs:~# ping -c 4 209.85.145.103 PING 209.85.145.103 (209.85.145.103) 56(84) bytes of data. --- 209.85.145.103 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3023ms root@uhs:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:0c:6e:a0:92:6e inet addr:192.168.0.3 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::20c:6eff:fea0:926e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:13131114 errors:0 dropped:0 overruns:0 frame:0 TX packets:10540297 errors:0 dropped:0 overruns:5 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3077922794 (3.0 GB) TX bytes:3827489734 (3.8 GB) Interrupt:10 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7721 errors:0 dropped:0 overruns:0 frame:0 TX packets:7721 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:551950 (551.9 KB) TX bytes:551950 (551.9 KB) root@uhs:~# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 root@uhs:~# # PRETEND Traceroute root@uhs:~# for i in {1..30}; do ping -t $i -c 1 209.85.145.103; done | grep "Time to live exceeded" root@uhs:~#
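
    Since the gateway answers pings but nothing beyond it does, a few checks worth running first (a sketch; the interface and addresses are taken from the question):

        arp -n 192.168.0.1            # does the gateway resolve to a sane MAC address?
        sudo iptables -L -n -v        # rule out a local firewall dropping replies
        sudo tcpdump -n -i eth0 icmp  # then ping 209.85.145.103 from another terminal:
                                      # if echo requests leave eth0 but nothing comes back,
                                      # the router is refusing to forward for this host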

    Read the article

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"? public int test1(String str) {     return str.length(); } public int test2(String str) {     if (str == null) {         System.err.println("Passed null!");         //forgotten return;     }     return str.length(); } The difference, or at least, the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, though they did not provide a null check. In NetBeans IDE 7.3, you see this hint for "test2", but no hint for "test1", since in that case we don't know anything about the developer's intention for the variable, and providing a hint would flood the source code with too many false positives. Annotations are also used to understand how a piece of code is intended to be used. If method return types use @Nullable, @NullAllowed, or @CheckForNull, the value is considered "strongly possible to be null", as it also is when the variable is tested against null, as shown above. When using @NotNull, @NonNull, or @Nonnull, the value is considered to be non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.) Here are examples showing where the hints are displayed for the non-null hints (the "strongly possible to be null" hints are not shown below, though you can see one of them in the screenshot above), together with a comment showing what is shown when you hover over the hint: There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can, instead, go to Source | Inspect and then do a scan throughout a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area. Thanks to Jan Lahoda for help with the info above; he reports that this currently also works for fields in NetBeans IDE 7.3 dev builds, though that may need to be disabled since right now too many false positives are returned. Any misunderstandings are my own fault!

    Read the article

  • How to force a clock update using ntp?

    - by ysap
    I am running Ubuntu on an ARM based embedded system that lacks a battery backed RTC. The wake-up time is somewhere in 1970. Thus, I use the NTP service to update the time to the current time. I added the following line to the /etc/rc.local file: sudo ntpdate -s time.nist.gov However, after startup, it still takes a couple of minutes until the time is updated, during which period I cannot work effectively with tar and make. How can I force a clock update at any given time? UPDATE 1: The following (thanks to Eric and Stephan) works fine from the command line, but fails to update the clock when put in /etc/rc.local: $ date ; sudo service ntp stop ; sudo ntpdate -s time.nist.gov ; sudo service ntp start ; date Thu Jan 1 00:00:58 UTC 1970 * Stopping NTP server ntpd [ OK ] * Starting NTP server [ OK ] Thu Feb 14 18:52:21 UTC 2013 What am I doing wrong? UPDATE 2: I tried following the few suggestions that came in response to the 1st update, but nothing seems to actually do the job as required. Here's what I tried: Replacing the server with us.pool.ntp.org Using explicit paths to the programs Removing the ntp service altogether and leaving just sudo ntpdate ... in rc.local Removing the sudo from the above command in rc.local Using the above, the machine still starts at 1970. However, when doing this from the command line once logged in (via ssh), the clock gets updated as soon as I invoke ntpdate. The last thing I did was to remove that from rc.local and place a call to ntpdate in my .bashrc file. This does update the clock as expected, and I get the true current time once the command prompt is available. However, this means that if the machine is turned on and no user is logged in, then the time never gets updated. I can, of course, reinstall the ntp service so at least the clock is updated within a few minutes of startup, but then we're back at square 1. So, is there a reason why placing the ntpdate command in rc.local does not perform the required task, while doing so in .bashrc works fine?
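
    One explanation consistent with these symptoms: /etc/rc.local runs before the network interface has an address, so ntpdate fails silently at boot but succeeds once you are logged in. A sketch of an rc.local that waits for connectivity first (the server name and the 60-attempt cap are assumptions to adjust):

        #!/bin/sh -e
        # wait up to ~2 minutes for the network, then set the clock once
        for i in $(seq 1 60); do
            if ping -c 1 -W 2 us.pool.ntp.org >/dev/null 2>&1; then
                /usr/sbin/ntpdate -s us.pool.ntp.org
                break
            fi
            sleep 2
        done
        exit 0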

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared hosting to another with Hostgator. GWT notified me about many 403 access denied pages. I've checked with Firebug too: even though the browser displays the full page correctly, the HTTP status returned is 403. I've checked the home page and it's correctly returning a 200 response; the same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS". I've even cleared all the caches but no luck. There is only one change: previously the site was the primary domain but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page. Fetch as Google URL: http://MYSITE.COM/-----------------REMOVED.html Date: Thursday, June 20, 2013 at 10:32:14 PM PDT Googlebot Type: Web Download Time (in milliseconds): 3899 HTTP/1.1 403 Forbidden Date: Fri, 21 Jun 2013 05:32:15 GMT Server: Apache P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" Expires: Mon, 1 Jan 2001 00:00:00 GMT Cache-Control: post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/ Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT Keep-Alive: timeout=5, max=75 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" > <head> <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" /> <meta http-equiv="content-type" content="text/html; charset=utf-8" /> <meta name="robots" content="index, follow" /> <meta name="keywords" content="" /> <<<<<<TRIMMED>>>>>>>>>>>>>>
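
    On shared hosting this pattern (page body renders, status is 403) often points at a mod_security or user-agent rule rather than file permissions; comparing responses for different user agents from a shell narrows it down (a sketch; the URL is a placeholder):

        curl -I http://MYSITE.COM/some-page.html
        curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://MYSITE.COM/some-page.html
        # a 403 for both suggests a server-side rule (.htaccess/mod_security);
        # 200 for the first but 403 for the second suggests user-agent based blocking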

    Read the article

  • How can I diagnose/debug "maximum number of clients reached" X errors?

    - by jmtd
    Hi, I'm hitting a problem whereby X prevents processes from creating windows, uttering something like the following into ~/.xsession-errors: cannot open display: :0.0 Maximum number of clients reached Searching around there are lots of examples of people facing this problem, and sometimes people identify which program they are running is using up all the client slots. See e.g. LP 70872 (Firefox), LP 263211 (gnome-screensaver). For what it's worth, I run gnome-terminal, thunderbird, chromium-browser, empathy, tomboy and virtualbox nearly all the time, on top of the normal stuff you get with the GNOME desktop, and occasionally some other bits and pieces. However my question is not "which of my programs is causing this problem" but rather, how can one go about diagnosing this problem? In the above (and other) bugs, forum reports, etc., a number of tools are suggested: xlsclients - lists the client applications for the given display, but I don't think that corresponds to 'X clients' xrestop - a top-style X resources tool, one row per X client. Lots of '' clients, not shown in xlsclients output xwininfo -root -children lists X window objects From what I can gather, the problem might not be too many clients at all, but rather resources kept around in the X server for clients who have long-since detached. But it would also appear that you cannot (easily?) relate X resources back to their client. Can one effectively diagnose this issue once it has started to occur, or is a tedious divide-and-conquer approach for the apps I run the only approach open to me? Update Jan 2011: I think I have resolved this issue. For the benefit of anyone stumbling across this, nautilus and/or compiz or something in that chain of software was segfaulting due to a wallpaper I had. I had chosen an XML file as my wallpaper, which defined a rotating gallery of images. It was hand-made, but based on /usr/share/backgrounds/contest/background-1.xml or similar. Disabling the wallpaper and I have not had a crash since. I'm not marking this as answered yet, since the actual specific problem was not my question, but how to diagnose it was. Unfortunately this was mostly trial-and-error which sucks.
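
    For what it's worth, a rough way to watch the named-client count over time and catch a leak in the act (a sketch; xrestop remains the tool that also shows the anonymous clients xlsclients misses):

        while true; do
            echo "$(date '+%H:%M:%S') $(xlsclients | wc -l) named clients"
            sleep 60
        done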

    Read the article

  • Exalytics OBI11g Partner Training 3-day hands-on Workshops

    - by Mike.Hallett(at)Oracle-BI&EPM
    These FREE to OPN Partners hands-on workshops highlight both the hardware and software components that are engineered to work together to deliver Oracle Exalytics - an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation (OBI 11g v 11.1.1.6 and Essbase) with enhanced visualization capabilities and performance optimizations. Priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam, and to Partners who have purchased an Exalytics for their own data centres to demonstrate it to their clients. Topics covered will include: Exalytics Architectural Overview Upgrade and Lifecycle Management Times Ten for Exalytics Summary Advisor Utility Essbase and EPM System on Exalytics Dashboard and Analysis Interactions OBIEE 11.1.1.6 Features and Advanced Topics After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. Prerequisites Experience and understanding of OBIEE 11g is required. Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended, and priority will be given to Partner individuals who have passed or are scheduled to take the Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591) exam. Good understanding of data warehousing and data modelling for reporting and analysis purposes. Strong experience with database technologies preferred. Attendees must provide their own laptops, which must meet the following minimum hardware/software requirements: Hardware Minimum 8GB RAM 60 GB free disk space (includes staging) USB 2.0 port (at least one available) It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily. Software One of the following operating systems: 64-bit Windows host/laptop OS 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.) Internet Explorer 7.x/8.x or Firefox 3.5.x WinRAR or 7-Zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/ Oracle VirtualBox 4.0.2 or higher, downloadable from http://www.virtualbox.org/wiki/Downloads CPU virtualization mode needs to be enabled. We will provide guidance on the day of the workshop. Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment. Register Here for 3-day Workshops: 11-Dec-12 Birmingham UK 29-Jan-13 Utrecht NL 12-Feb-13 Frankfurt Germany 12-Mar-13 Moscow Russia

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentation from our WebLogic Community Workspace (WebLogic Community membership required), to train yourself and for your next customer meeting. Thanks for all the tweets at #WebLogicCommunity, the pictures at our facebook page and the nice blog posts from Guido & Lucas & Jan. JavaOne was a super success - JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java Expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th Nürnberg Germany. All developers can use WebLogic free of charge! For developers, there are lots of ADF news on Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans. If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you. You just need to go to the Knowledge Zone section, select the “Specialization” tab and click on “Apply Now” Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in Production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599) In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services via Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications – A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress Oracle WebLogic Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN Account required) To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • OSX - User home directories shared via NFS

    - by Hugh
    Hi, I've run into some problems with how I've got user home directories set up on our system here. Our server is an XServe, using Open Directory to manage the user accounts. The majority of our workstations are OSX, but there are a few running Linux (Centos 5.3), and, as time goes on, we expect the proportion of Linux workstations to increase (at some point, we expect to move the server side over to Linux too, but for now we're running with what we've already got). To ensure that the Linux and OSX workstations both see users' home directories in the same place, I shared the home directories using NFS. On the server end, the home directories are stored in: /Volumes/data/company_users This is mounted on the workstations to: /mount/company_users This works fine on the Linux workstations, but there is some weirdness under OSX. For the user who is logged in through the GUI, it all works just fine. However, if a user tries to SSH into a machine that they are not the primary user on, they often have no access to their own home directory. It looks as though OSX is trying to do something else to the user home directories mount point when you log in through the GUI.... For example, on this machine (nv001), I (hugh) am logged into the GUI. Last login: Mon Mar 8 18:17:52 on ttys011 [nv001:~] hugh% ls -al /mount/company_users total 40 drwxrwxrwx 26 hugh wheel 840 27 Jan 19:09 . drwxr-xr-x 6 admin admin 204 19 Dec 18:36 .. drwx------+ 128 hugh staff 4308 27 Feb 23:36 hugh drwx------+ 26 matt staff 840 4 Dec 14:14 matt [nv001:~] hugh% So Matt's home directory is accessible to him. However, if I try to switch to him: [nv001:~] hugh% su - matt Password: su: no directory [nv001:~] hugh% Or: [nv001:~] hugh% su matt Password: tcsh: Permission denied tcsh: Trying to start from "/mount/company_users/matt" tcsh: Trying to start from "/" [nv001:/] matt% Does anyone have any idea why it might be doing this? It's causing me all sorts of problems at the moment... The only machine on which I can successfully switch users at the moment is the server that the user directories are stored on, where /mount/company_users is actually just a symlink to /Volumes/data/company_users Thanks
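
    One thing worth checking (a sketch, not a confirmed diagnosis): whether Open Directory is recording a home path for the account that only exists once the GUI login has remapped it. dscl shows what the account actually stores:

        dscl . -read /Users/matt NFSHomeDirectory
        # if this prints something other than /mount/company_users/matt,
        # SSH logins are being pointed at a directory that is not there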

    Read the article

  • MySQL SSL: bad other signature confirmation

    - by samJL
    I am trying to enable SSL connections for MySQL-- SSL will show as enabled in MySQL, but I can't make any connections due to this error: ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation I am running the following: Ubuntu Version: 14.04.1 LTS (GNU/Linux 3.13.0-34-generic x86_64) MySQL Version: 5.5.38-0ubuntu0.14.04.1 OpenSSL Version: OpenSSL 1.0.1f 6 Jan 2014 I used these commands to generate my certificates (all generated in /etc/mysql): openssl genrsa -out ca-key.pem 2048 openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem -subj "/C=US/ST=NY/O=MyCompany/CN=ca" openssl req -newkey rsa:2048 -nodes -days 3650 -keyout server-key.pem -out server-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=server" openssl rsa -in server-key.pem -out server-key.pem openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem openssl req -newkey rsa:2048 -nodes -days 3650 -keyout client-key.pem -out client-req.pem -subj "/C=US/ST=NY/O=MyCompany/CN=client" openssl rsa -in client-key.pem -out client-key.pem openssl x509 -req -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem I put the following in my.cnf: [mysqld] ssl-ca=/etc/mysql/ca-cert.pem ssl-cert=/etc/mysql/server-cert.pem ssl-key=/etc/mysql/server-key.pem When I attempt to connect specifying the client certificates-- I get the following error: mysql -uroot -ppassword --ssl-ca=/etc/mysql/ca-cert.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem ERROR 2026 (HY000): SSL connection error: ASN: bad other signature confirmation If I connect without SSL, I can see that MySQL has correctly loaded the certificates: mysql -uroot -ppassword --ssl=false mysql> SHOW VARIABLES LIKE '%ssl%'; +---------------+----------------------------+ | Variable_name | Value | +---------------+----------------------------+ | have_openssl | YES | | have_ssl | YES | | ssl_ca | /etc/mysql/ca-cert.pem | | ssl_capath | | | ssl_cert | /etc/mysql/server-cert.pem | | ssl_cipher | | | ssl_key | /etc/mysql/server-key.pem | +---------------+----------------------------+ 7 rows in set (0.00 sec) My generated certificates pass OpenSSL verification and modulus: openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem server-cert.pem: OK client-cert.pem: OK What am I missing? I used this same process before on a different server and it worked- however the Ubuntu version was 12.04 LTS and the OpenSSL version was older (don't remember specifically). Has something changed with the latest OpenSSL? Any help would be appreciated!
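
    One known cause that fits these versions: Debian/Ubuntu link MySQL against the bundled yaSSL library (for licensing reasons), and yaSSL cannot parse the SHA-256 signatures that OpenSSL 1.0.1 emits by default - "ASN: bad other signature confirmation" is a yaSSL error string. If that is the culprit, re-signing the certificates with SHA-1 is a common workaround (a sketch reusing the filenames above):

        openssl x509 -req -sha1 -days 3650 -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
        openssl x509 -req -sha1 -days 3650 -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem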

    Read the article

  • I am getting this error on each machine after installing ruby and rails, I created one web site and

    - by Santodsh
    D:\PROJECTS\RubyOnRail\webapp\Welcome>ruby script\server => Booting WEBrick => Rails 2.3.4 application starting on http://0.0.0.0:3000 => Call with -d to detach => Ctrl-C to shutdown server [2010-01-31 21:19:34] INFO WEBrick 1.3.1 [2010-01-31 21:19:34] INFO ruby 1.8.6 (2007-09-24) [i386-mswin32] [2010-01-31 21:19:34] INFO WEBrick::HTTPServer#start: pid=6576 port=3000 /!\ FAILSAFE /!\ Sun Jan 31 21:19:38 +0530 2010 Status: 500 Internal Server Error uninitialized constant Encoding c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:443:in `load_missing_constant' c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:80:in `const_missing' c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:92:in `const_missing' c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/encoding.rb:9:in `find' c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/database.rb:69:in `initialize' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' c:/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:9:in `cache' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:28:in `call' c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call' c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/params_parser.rb:15:in `call' c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/session/cookie_store.rb:93:in `call' c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/failsafe.rb:26:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call' c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:114:in `call' c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/reloader.rb:34:in `run' c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:108:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/static.rb:31:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/log_tailer.rb:17:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:50:in `service' c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:104:in `service' c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:65:in `run' c:/ruby/lib/ruby/1.8/webrick/server.rb:173:in `start_thread' c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start' c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start_thread' c:/ruby/lib/ruby/1.8/webrick/server.rb:95:in `start' c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `each' c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `start' c:/ruby/lib/ruby/1.8/webrick/server.rb:23:in `start' c:/ruby/lib/ruby/1.8/webrick/server.rb:82:in `start' c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run' c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/server.rb:111 c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' script/server:3 /!\ FAILSAFE /!\ Sun Jan 31 21:19:39 +0530 2010 Status: 500 Internal Server Error uninitialized constant Encoding (the identical backtrace follows a second time)
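
    "uninitialized constant Encoding" is a strong hint (though not proof) that the installed sqlite3 gem targets Ruby 1.9, which introduced the Encoding class, while this stack runs 1.8.6 - the sqlite3 0.0.6 gem visible in the backtrace was such a release. A sketch of the usual remedy at the time:

        gem uninstall sqlite3
        gem install sqlite3-ruby --version 1.2.5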

    Read the article

  • IIS6 Wildcard Mapping to ASP.NET - no file extension results in IIS 404

    - by Ian Robinson
    I'm trying to perform what I understand to be a relatively simple task. I'd like to remove the extensions from the URLs on my website. I have the proper set up in my application to handle and rewrite the URLs - the trouble is I can't get past IIS to actually get to my application without the extensions. The details: I'm running IIS6 on Windows Server 2003. I've gone into the web site for my application, gone to the home directory tab, clicked "Configuration" and added a wildcard map to the following file: c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll Which I verified is the same as what is used above in the application extensions portion by .ascx, etc. If I navigate to http://mywebsite.com/Blogs the result is as follows: HTTP/1.1 404 Not Found Content-Length: 1635 Content-Type: text/html Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Thu, 14 Jan 2010 15:04:49 GMT Which seems to be a standard IIS 404 message. If I navigate to http://mywebsite.com/Blogs.aspx I get my ASP.NET app.... How can I troubleshoot this? I feel like I've double checked everything a dozen times but to no avail. I must be missing something obvious. Update: Here are the exact instructions given by the asp.net url rewriter that I'm using: IIS 6.0 - Windows 2003 Server open property page for website / virtual directory. click the 'home directory' tab click the 'configuration' button, select the 'mappings' tab click 'insert' next to the 'Wildcard application maps' section browse to the aspnet_isapi.dll (normally at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll) Ensure that 'check that file exists' is unchecked Click OK, OK, OK to close and apply changes Update 2: I have yet to find a resolution for this. The application does not seem to be receiving the request from IIS, any further ideas?
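
    One frequently missed detail on Server 2003 (an assumption to verify, not a confirmed diagnosis): on an x64 OS the wildcard map must point at the Framework64 copy of aspnet_isapi.dll unless the application pool runs 32-bit. The stock adsutil script shows the 32-bit flag:

        rem 64-bit ISAPI path, if the worker process is 64-bit:
        rem   C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll
        cscript %SystemDrive%\inetpub\AdminScripts\adsutil.vbs get W3SVC/AppPools/Enable32BitAppOnWin64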

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). is fs dead?

    - by ip
    Hi, My company has a server with one big partition with a MySQL database and PHP files. Now this partition seems to be corrupted, as reported in kernel messages when I tried to mount it manually: [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)! [329862.817846] EXT3-fs: group descriptors corrupted! I've tried to recover it by running tools from a PLD livecd. These are the tools I have tested: - e2retrieve - testdisk - photorec - dd_rescue/dd_rhelp - ddrescue - fsck.ext2 - e2salvage without any success. dumpe2fs 1.41.3 (12-Oct-2008) Filesystem volume name: /dev/sda3 Last mounted on: <not available> Filesystem UUID: dd51610b-6de0-4392-a6f3-67160dbc0343 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal filetype sparse_super Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 9502720 Block count: 18987570 Reserved block count: 949378 Free blocks: 11555345 Free inodes: 11858398 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 16384 Inode blocks per group: 512 Last mount time: Wed Mar 24 09:31:03 2010 Last write time: Mon Apr 12 11:46:32 2010 Mount count: 10 Maximum mount count: 30 Last checked: Thu Jan 1 01:00:00 1970 Check interval: 0 (<none>) Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 128 Journal inode: 8 Journal backup: inode blocks dumpe2fs: A block group is missing an inode table while reading journal inode Are there any other tools I should test before considering this disk definitely unrecoverable? Many thanks, ip
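
    Since it is the primary group descriptors that are corrupt, e2fsck can be pointed at a backup superblock; mke2fs in dry-run mode lists where the backups live (a sketch against the device named above - work on a dd image rather than the disk itself if at all possible):

        mke2fs -n /dev/sda3                  # -n is a dry run: it only prints the backup superblock locations
        e2fsck -b 32768 -B 4096 /dev/sda3    # 32768 is the usual first backup for a 4k-block filesystem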

    Read the article

  • ubuntu 9.10 cups server cupsd.conf

    - by aaron
    I have a CUPS server running on Ubuntu 9.10 on my home network. Right now I can access it at 192.168.1.101:631, but when I try to access it at myservername.local:631 I get a 400 Bad Request. Here's the relevant section from my current cupsd.conf: ServerName 192.168.1.101 # Only listen for connections from the local machine. Listen localhost:631 Listen /var/run/cups/cups.sock # any of the 'Listen' directives below yields the same result Listen 192.168.1.101:631 #Listen *:631 #Listen myservername.local:631 # Show shared printers on the local network. Browsing On BrowseOrder allow,deny BrowseAllow all BrowseLocalProtocols CUPS dnssd BrowseAddress 192.168.1.255 # Default authentication type, when authentication is required... DefaultAuthType Basic # Restrict access to the server... <Location /> Order deny,allow Deny from All Allow from 127.0.0.1 Allow from 192.168.1.* </Location> # Restrict access to the admin pages... <Location /admin> Order deny,allow Deny from All #Allow from 127.0.0.1 #Allow from 192.168.1.* </Location> # Restrict access to configuration files... <Location /admin/conf> AuthType Default Require user @SYSTEM Order deny,allow Deny from All #Allow from 127.0.0.1 #Allow from 192.168.1.* </Location> I get the following in /var/log/cups/error_log: E [03/Jan/2010:18:33:41 -0600] Request from "192.168.1.100" using invalid Host: field "myservername.local:631" What do I need to do to be able to access the CUPS server at both 192.168.1.101:631 and myservername.local:631?
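
    The "invalid Host: field" error is CUPS 1.4's host-header check rejecting a name it was not told about; the usual fix is a ServerAlias directive in cupsd.conf rather than more Listen lines (a sketch):

        ServerAlias myservername.local
        # or, to accept any Host: header (less strict):
        ServerAlias *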

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is setup to proxy pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here's a few example vhost blocks. <VirtualHost 192.168.168.101:443> ProxyPreserveHost On SSLProxyEngine On ProxyPass / https://192.168.168.111/ ServerName dev1.site.com </VirtualHost> <VirtualHost 192.168.168.101:80> ProxyPreserveHost On ProxyPass / http://192.168.168.111/ ServerName dev1.site.com </VirtualHost> <VirtualHost 192.168.168.101:443> ProxyPreserveHost On SSLProxyEngine On ProxyPass / https://192.168.168.111/ ServerName dev2.site.com </VirtualHost> <VirtualHost 192.168.168.101:80> ProxyPreserveHost On ProxyPass / http://192.168.168.111/ ServerName dev2.site.com </VirtualHost> I end up seeing the following error in the provisioner's error log. [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri / As well as the following entry in the destination QA machine's access log. 192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
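
    The backend log line beginning "\x16\x03\x01" is a raw TLS ClientHello being read as HTTP, which suggests the provisioner's :443 vhosts never terminate SSL themselves - SSLProxyEngine only covers the outbound leg. A sketch of the missing inbound directives (certificate paths are hypothetical):

        <VirtualHost 192.168.168.101:443>
            ServerName dev1.site.com
            SSLEngine On
            SSLCertificateFile    /etc/apache2/ssl/dev1.site.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/dev1.site.com.key
            SSLProxyEngine On
            ProxyPreserveHost On
            ProxyPass / https://192.168.168.111/
        </VirtualHost>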

    Read the article

  • Uninstall php5 installed from source.

    - by diegomichel
    I have tried to install php5 from source, and it worked... Then for some reason I needed to install the official packages, so I tried a make uninstall and to my surprise there is no such make uninstall... so I tried deleting all the installed files by hand. Then I installed the official Debian packages and it worked fine... till I needed to install the sqlite module, which gives me the following error: php --version PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies So I remember that manual install I did, and I think there is some old lib installed causing that problem; the bad thing is that there is no make uninstall in the php5 source code... php-5.2.13 > make uninstall make: *** No rule to make target `uninstall'. Stop. I have tried reinstalling and purging all PHP-related packages via aptitude with no success. OS: Debian Squeeze. uname -a Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux Any idea how to fix that?
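
    A way to hunt down the leftovers (a sketch): a default PHP source build installs under /usr/local, which the packaged PHP should never read from, and load order matters too, since pdo_sqlite.so needs pdo.so loaded first:

        php --ini                     # which ini files are actually being read?
        php -m | grep -i pdo          # is the pdo module loaded at all?
        ls /usr/local/bin/php* /usr/local/lib/php* 2>/dev/null   # typical source-install leftovers
        sudo aptitude reinstall php5-common php5-cli php5-sqlite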

    Read the article

  • How should I fix problems with file permissions while restoring from Time Machine?

    - by Andrew Grimm
    While restoring files from a Time Machine backup, I got the error message "The operation can’t be completed because you don’t have permission to access some of the items." because of problems with files in one folder. What's the safest way to deal with this? The folder in question has permissions like: Andrew-Grimms-MacBook-Pro:kmer agrimm$ pwd /Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer Andrew-Grimms-MacBook-Pro:kmer agrimm$ ls -ltra total 6156896 drwxrwxrwx@ 19 agrimm staff 680 18 Jan 2008 Saccharomyces_cerevisiae -r--------@ 1 agrimm staff 60221852 4 Aug 2009 hs_ref_GRCh37_chrY.fa -r--------@ 1 agrimm staff 157488804 4 Aug 2009 hs_ref_GRCh37_chrX.fa (snip a few files) -r--------@ 1 agrimm staff 676063 27 Oct 2009 NC_001143.fna -rw-r--r--@ 1 agrimm staff 6148 23 Mar 2010 .DS_Store drwxr-xr-x@ 3 agrimm staff 1530 23 Mar 2010 . drwxr-xr-x@ 30 agrimm staff 1054 20 Nov 14:43 .. Is it ok to do sudo chmod, or is there a safer approach? Background: Files within the original folder on my computer also had weird permissions - I suspect I may have used sudo to copy some files from a thumbdrive onto my computer.
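
    For what it's worth, a sudo-based copy that sidesteps the unreadable modes and then hands ownership back is one pragmatic route (a sketch; the destination path is illustrative):

        sudo cp -Rp "/Volumes/Time Machine Backups/Backups.backupdb/Andrew Grimm’s MacBook Pro/2010-12-09-224309/Macintosh HD/Users/agrimm/ruby/kmer" ~/ruby/
        sudo chown -R agrimm:staff ~/ruby/kmer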

    Read the article

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this linux: Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux And I run the Selenium stand-alone server on my box with this command: java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 & Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else. All I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive server load, so something has changed, but I'm not sure what. My server is a managed VPS so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem. (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.) Any suggestions for what is going on, why it's started doing this and/or, most importantly, how I can make Selenium not kill the server when it starts up... would be GREATLY appreciated!

    Read the article

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done: In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have made the extension_dir = "C:/PHP/ext" I uncommented extension=php_openssl.dll I went to http://windows.php.net/download/ and got the thread safe version with the PHP 5.4 (5.4.8) version of DLLs In C:/PHP/ext I replaced the php_openssl.dll with the one I downloaded In System32 and SysWOW64 I added the following DLLs: ssleay.dll and libeay.dll I restarted the IIS server in the Server Manager under Web Server and stopped and started the World Wide Web Publishing Service That didn't work, so I tried the same thing with the unthreaded versions. I still get: Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5 Here are related things from phpinfo(): System Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586 Compiler MSVC9 (Visual C++ 2008) Architecture x86 Configure Command cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo" Server API CGI/FastCGI Configuration File (php.ini) Path C:\Windows Loaded Configuration File C:\PHP\php-cgi-fcgi.ini Scan this dir for additional .ini files (none) Additional .ini files parsed (none) Registered PHP Streams php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar Registered Stream Socket Transports tcp, udp, ssl, sslv3, sslv2, tls FTP support enabled Protocols dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8t 18 Jan 2012 OpenSSL Header Version OpenSSL 0.9.8x 10 May 2012 What am I missing here?
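
    Note that per the PHP manual, ftp_ssl_connect() only exists when the FTP extension is compiled with OpenSSL support built in statically, so loading php_openssl.dll cannot add it to a build that lacks it. A quick check of what this build offers (a sketch):

        php -r "var_dump(function_exists('ftp_ssl_connect'));"
        php -m | findstr /i "ftp openssl"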

    Read the article

  • EFI pxe network boot error

    - by Lee
    Asking this on both serverfault (http://serverfault.com/questions/100188/efi-pxe-network-boot-error) and superuser (http://superuser.com/questions/92295/efi-pxe-network-boot-error). When attempting to network boot RHEL 5.4 on an old ia64 machine I get the following error: [error screenshot: http://i.imgur.com/Zx1Jy.png] So I've basically followed the tutorial here: http://www-uxsup.csx.cam.ac.uk/pub/doc/suse/sles9/adminguide-sles9/ch04s03.html DHCPD, TFTPD etc. are already set up and working with standard x86 PXE clients. I've unpacked the boot.img file into /tftpboot/ia64/ and passed the path to the elilo.efi file via DHCP with the filename ""; option. Changing this filename generates a PXE file not found error (see below), so I assume that PXE has found the file... [error screenshot: http://i.imgur.com/CEzGf.jpg] The only thing wrong I can find in the logs is: Jan 6 19:49:31 dhcphost in.tftpd[31379]: tftp: client does not accept options Any ideas? I'm sure I hit a problem like this a few years ago but I can't remember the fix :) Thanks in advance!
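
    One thing worth verifying (a sketch of the expected layout, not a confirmed fix): elilo.efi fetches its own config file from the directory it was loaded from, so both files need to sit together under the tftp root, with dhcpd handing out the relative path:

        /tftpboot/ia64/elilo.efi
        /tftpboot/ia64/elilo.conf
        # dhcpd.conf, inside the relevant host/subnet block:
        filename "ia64/elilo.efi";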

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the gigenet cloud); the database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs : Debian Lenny 64bit ~12Ghz (6x2GHz) CPU, 7520gb RAM, 160gb disk. Kernel : 2.6.32-4-amd64 mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)) Other software: vBulletin 3.8.4 memcached 1.2.2 PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27) lighttpd/1.4.28 (ssl) - a light and fast webserver PHP and vBulletin are configured to use memcached. MySQL Settings : [mysqld] key_buffer = 128M max_allowed_packet = 16M thread_cache_size = 8 myisam-recover = BACKUP max_connections = 1024 query_cache_limit = 2M query_cache_size = 128M expire_logs_days = 10 max_binlog_size = 100M key_buffer_size = 128M join_buffer_size = 8M tmp_table_size = 16M max_heap_table_size = 16M table_cache = 96 Other : From the cloud's IO chart, we're averaging 100mb/s read. > vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 9 0 73140 36336 8968 1859160 0 0 42 15 3 2 6 1 89 5 > /etc/init.d/mysql status Threads: 49 Questions: 252139 Slow queries: 164 Opens: 53573 Flush tables: 1 Open tables: 337 Queries per second avg: 61.302. moved from superuser
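
    For what it's worth, nothing in that my.cnf gives InnoDB any memory, and a 128M key buffer is small against a 10 GB dataset - either alone could explain constant disk reads. A sketch of the settings to look at (sizes are assumptions to tune against available RAM; check the storage engine first with SHOW TABLE STATUS):

        [mysqld]
        # if the vBulletin tables are InnoDB:
        innodb_buffer_pool_size = 4G
        innodb_flush_method     = O_DIRECT
        # if they are MyISAM, grow the key buffer instead:
        key_buffer_size = 1G
        table_cache     = 512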

    Read the article

  • Cannot push to GitHub from Amazon EC2 Linux instance

    - by Eli
    Having the worst luck pushing files to a repo from EC2 to GitHub. I have my ssh key set up and added to GitHub. Here are the results of ssh -v [email protected] OpenSSH_5.3p1, OpenSSL 1.0.0g-fips 18 Jan 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to github.com [207.97.227.239] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/identity type -1 debug1: identity file /root/.ssh/id_rsa type 1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2 debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'github.com' is known and matches the RSA host key. debug1: Found key in /root/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /root/.ssh/identity debug1: Offering public key: /root/.ssh/id_rsa debug1: Remote: Forced command: gerve eliperelman 81:5f:8a:b2:42:6d:4e:8c:2d:ba:9a:8a:2b:9e:1a:90 debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Server accepts key: pkalg ssh-rsa blen 277 debug1: Trying private key: /root/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey).
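
    The debug output shows GitHub accepting the RSA key (the forced "gerve" command is GitHub's own dispatcher), so the key is known to the service; double-checking what is actually offered, without the verbose noise, looks like this (a sketch):

        ssh-add -l                                    # which keys does the agent hold, if any?
        ssh -T git@github.com                         # GitHub's auth test; it should greet "Hi username!"
        ssh -i /root/.ssh/id_rsa -T git@github.com    # force the specific key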

    Read the article

  • System hangs while rebooting on Debian...

    - by Usman
    Hi, I have Debian (Kernel 2.6.26-2-686) installed on two computers. On one of them it reboots just fine, but I am having the following problem with rebooting Debian on my second computer. When I type reboot at the Linux prompt, the following messages appear and the system hangs up after saying "Restarting System": Broadcast message from root@myname (tty1) (Sun Jan 17 11:23:26 2010) The system is going down for reboot NOW! INIT: Switching to runlevel: 6 INIT: Sending processes the TERM signal Saving system clock Stopping enhanced syslog: rsyslogd. Asking all remaining processes to terminate...done. Deconfiguring network interfaces...done. Cleaning up ifupdown.... Deactivating swap...done. [ 31.789103] Restarting System. _ Normally when the system is busy the "_" sign blinks, but the "_" at the last line above does not blink, which shows the system hanged up. I tried all keys but the screen is still frozen at the same point. The difference that I noted between my two computers is that I don't have ACPI support in the BIOS of the system which is giving me this error, whereas the BIOS of my first computer does have ACPI support, on which Debian does not give this restart-hanging problem. I have also disabled running the acpid script by running update-rc.d -f acpid remove but the problem still persists on the second computer. Any ideas to solve or get around this problem?
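
    A workaround that often helps when the default reboot method hangs on a given board is forcing a different one with the reboot= kernel parameter (a sketch of a GRUB legacy kernel line; bios, acpi, kbd and triple are all accepted values):

        # /boot/grub/menu.lst, appended to the existing kernel line:
        kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/sda1 ro quiet reboot=bios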

    Read the article

  • General logging won't work in MySQL

    - by leonstr
    I saw on SF that there's an option in MySQL to log all queries. So, in my version (mysql-server-5.0.45-7.el5 on CentOS 5.2) this appears to be a case of enabling the 'log' option, so I edited /etc/my.cnf to add this: [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql old_passwords= log=/var/log/mysql-general.log [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid I then created the file and set permissions: # touch /var/log/mysql-general.log # chown mysql. /var/log/mysql-general.log # ls -l /var/log/mysql-general.log -rw-r--r-- 1 mysql mysql 0 Jan 18 15:22 /var/log/mysql-general.log But when I start mysqld I get: 120118 15:24:18 mysqld started ^G/usr/libexec/mysqld: File '/var/log/mysql-general.log' not found (Errcode: 13) 120118 15:24:18 [ERROR] Could not use /var/log/mysql-general.log for logging (error 13). Turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it. 120118 15:24:18 InnoDB: Started; log sequence number 0 182917764 120118 15:24:18 [Note] /usr/libexec/mysqld: ready for connections. Can anyone suggest why this isn't working?
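
    Errcode 13 is EACCES even though the file modes look right, and on CentOS the usual remaining suspect is SELinux, which confines mysqld to specific file contexts. A sketch of checking and relabelling (mysqld_log_t is the stock policy type for MySQL logs):

        ls -Z /var/log/mysql-general.log                        # inspect the current SELinux context
        sudo chcon -t mysqld_log_t /var/log/mysql-general.log
        # alternatively, point the log somewhere mysqld may already write:
        #   log=/var/lib/mysql/mysql-general.log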

    Read the article
