Search Results

Search found 5245 results on 210 pages for 'confused or too stupid'.

  • Using Upstart to manage Unicorn w/ rbenv + bundler binstubs w/ ruby-local-exec shebang

    - by codykrieger
    Alright, this is melting my brain. It might have something to do with the fact that I don't understand Upstart as well as I should. Sorry in advance for the long question.

    I'm trying to use Upstart to manage a Rails app's Unicorn master process. Here is my current /etc/init/app.conf:

        description "app"

        start on runlevel [2]
        stop on runlevel [016]

        console owner

        # expect daemon

        script
          APP_ROOT=/home/deploy/app
          PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH
          $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production # >> /tmp/upstart.log 2>&1
        end script

        # respawn

    That works just fine - the Unicorns start up great. What's not great is that the PID detected is not that of the Unicorn master; it's that of an sh process. That in and of itself isn't so bad, either - if I weren't using the automagical Unicorn zero-downtime deployment strategy. Shortly after I send -USR2 to my Unicorn master, a new master spawns and the old one dies... and so does the sh process. So Upstart thinks my job has died, and I can no longer restart it with restart or stop it with stop if I want.

    I've played around with the config file, trying to add -D to the Unicorn line (like this: $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production -D) to daemonize Unicorn, and I added the expect daemon line, but that didn't work either. I've tried expect fork as well. Various combinations of all of those things can cause start and stop to hang, and then Upstart gets really confused about the state of the job. Then I have to restart the machine to fix it.

    I think Upstart is having problems detecting when/if Unicorn is forking because I'm using rbenv plus the ruby-local-exec shebang in my $APP_ROOT/bin/unicorn script. Here it is:

        #!/usr/bin/env ruby-local-exec
        #
        # This file was generated by Bundler.
        #
        # The application 'unicorn' is installed as part of a gem, and
        # this file is here to facilitate running it.
        #

        require 'pathname'
        ENV['BUNDLE_GEMFILE'] ||= File.expand_path("../../Gemfile",
          Pathname.new(__FILE__).realpath)

        require 'rubygems'
        require 'bundler/setup'

        load Gem.bin_path('unicorn', 'unicorn')

    Additionally, the ruby-local-exec script looks like this:

        #!/usr/bin/env bash
        #
        # `ruby-local-exec` is a drop-in replacement for the standard Ruby
        # shebang line:
        #
        #   #!/usr/bin/env ruby-local-exec
        #
        # Use it for scripts inside a project with an `.rbenv-version`
        # file. When you run the scripts, they'll use the project-specified
        # Ruby version, regardless of what directory they're run from. Useful
        # for e.g. running project tasks in cron scripts without needing to
        # `cd` into the project first.

        set -e
        export RBENV_DIR="${1%/*}"
        exec ruby "$@"

    So there's an exec in there that I'm worried about. It fires up a Ruby process, which fires up Unicorn, which may or may not daemonize itself, and all of this happens from an sh process in the first place... which makes me seriously doubt the ability of Upstart to track all of this nonsense. Is what I'm trying to do even possible? From what I understand, the expect stanza in Upstart can only be told (via daemon or fork) to expect a maximum of two forks.
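    One approach worth trying (a sketch, untested against this exact rbenv setup): keep Unicorn in the foreground and exec it from the script stanza. Because ruby-local-exec itself ends in exec ruby "$@", the whole chain stays in one process, so the PID Upstart tracks is the Unicorn master and no expect stanza is needed. The trade-off is letting Upstart's own restart handle deploys rather than the -USR2 re-exec, since a re-exec'd master would change the PID again:

        description "app"

        start on runlevel [2]
        stop on runlevel [016]
        respawn

        script
          APP_ROOT=/home/deploy/app
          export PATH=/home/deploy/.rbenv/shims:/home/deploy/.rbenv/bin:$PATH
          # exec replaces this shell, so Upstart's tracked PID is the master itself
          exec $APP_ROOT/bin/unicorn -c $APP_ROOT/config/unicorn.rb -E production
        end script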

    Read the article

  • Where's the Swap File/Partition?

    - by chrisbunney
    I'm investigating the virtual memory configuration of a Debian based Amazon EC2 instance, and as my background isn't in system admin, I'm slightly confused by what I'm seeing. We're using MongoDB, and the monitoring server we have indicates that the Mongo process is using about 20GB of swap space, however I can't figure out where this is located on the server. As far as I can tell from using the various suggested methods from Google, there is either a much smaller amount, or none at all.

    top indicates that there is 1.8GB of swap memory:

        top - 15:35:21 up 6 days, 3:23, 1 user, load average: 1.60, 1.43, 1.37
        Tasks: 47 total, 2 running, 45 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 1.3%sy, 0.0%ni, 14.7%id, 83.8%wa, 0.0%hi, 0.0%si, 0.1%st
        Mem:  3928924k total, 2855572k used, 1073352k free, 640564k buffers
        Swap: 0k total, 0k used, 0k free, 1887788k cached

    swapon -s doesn't seem to think there's any swap space:

        Filename    Type    Size    Used    Priority

    free -m doesn't think there's any swap either:

                     total       used       free     shared    buffers     cached
        Mem:          3836       3663        172          0        626       2701
        -/+ buffers/cache:        336       3500
        Swap:            0          0          0

    And neither does vmstat:

        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  3      0  66224 641372 2874744   0    0    21  5012   21   33  2  2 76 19

    But cat /etc/fstab thinks there is a swap partition:

        /dev/xvda1  /     ext3   defaults  1 1
        /dev/xvda2  /mnt  ext3   defaults  0 0
        /dev/xvda3  swap  swap   defaults  0 0
        none  /proc  proc   defaults  0 0
        none  /sys   sysfs  defaults  0 0

    However df -k gives no indication of the xvda3 partition:

        Filesystem   1K-blocks      Used Available Use% Mounted on
        /dev/xvda1    16513960  15675324         0 100% /
        tmpfs          1964460         8   1964452   1% /lib/init/rw
        udev           1914148        28   1914120   1% /dev
        tmpfs          1964460         4   1964456   1% /dev/shm

    So I really don't know what to make of this, because I appear to have a process using about 10 times more virtual memory than what might be available, and I have no idea where this virtual memory is on the system. I'm probably misinterpreting the output of the tools, so I'd be grateful if someone would be able to set me straight: What have I got wrong, what's the right interpretation, and how do you reach that interpretation?

    EDIT0: We use 10gen's MMS for monitoring the database; the relevant section for memory from the last data point is:

        "mem": {
          "virtual": 20749,
          "bits": 64,
          "supported": true,
          "mappedWithJournal": 20376,
          "mapped": 10188,
          "resident": 1219
        },

    This JSON is specific to the database process (I believe) rather than the system as a whole.

    fdisk -l /dev/xvda outputs... nothing? I tried each of the 3 xvda entries in /etc/fstab as well:

        root@ip:~# fdisk -l /dev/xvda1

        Disk /dev/xvda1: 34.4 GB, 34359738368 bytes
        255 heads, 63 sectors/track, 4177 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/xvda1 doesn't contain a valid partition table
        root@ip:~# fdisk -l /dev/xvda2
        root@ip:~# fdisk -l /dev/xvda3
        root@ip:~#

    Edit1: Output of cat /proc/meminfo for the sake of completeness:

        MemTotal:        3928924 kB
        MemFree:          726600 kB
        Buffers:          648368 kB
        Cached:          2216556 kB
        SwapCached:            0 kB
        Active:          1945100 kB
        Inactive:         994016 kB
        Active(anon):      60476 kB
        Inactive(anon):    12952 kB
        Active(file):    1884624 kB
        Inactive(file):   981064 kB
        Unevictable:           0 kB
        Mlocked:               0 kB
        SwapTotal:             0 kB
        SwapFree:              0 kB
        Dirty:            387180 kB
        Writeback:             0 kB
        AnonPages:         73380 kB
        Mapped:          1188260 kB
        Shmem:                48 kB
        Slab:             149768 kB
        SReclaimable:     146076 kB
        SUnreclaim:         3692 kB
        KernelStack:        1104 kB
        PageTables:        16096 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     1964460 kB
        Committed_AS:     305572 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       16760 kB
        VmallocChunk:   34359721448 kB
        HardwareCorrupted:     0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:     3932160 kB
        DirectMap2M:           0 kB
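    For reference, if the instance really has no swap (as swapon -s and SwapTotal: 0 in /proc/meminfo suggest), a swap file can be added without repartitioning; a minimal sketch (size and path are arbitrary choices, run as root):

        dd if=/dev/zero of=/var/swapfile bs=1M count=2048
        chmod 600 /var/swapfile
        mkswap /var/swapfile
        swapon /var/swapfile
        echo '/var/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots

    As for the 20GB figure: MongoDB's "virtual" size counts its memory-mapped data files, which consume address space rather than swap, so a large value there does not by itself imply any swap usage.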

    Read the article

  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention; hopefully someone out there can shed some light on this. This question focuses exclusively on NFS4 without Kerberos etc.

    1. Exports

    There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5):

        Also, each line may have one or more specifications for default options
        after the path name, in the form of a dash ("-") followed by an option
        list. The option list is used for all subsequent exports on that line
        only.

    What does "subsequent exports on that line only" mean?

    1.2 fsid=0 not required anymore?

    I was searching for fsid when I found a comment on the linux-nfs list stating fsid=0 is not required anymore. Now I'm just confused: do I need it with nfs4 or not?!

    2. Non-exported directory still mountable

    Say I have the following tree:

        /exp
        /exp/users
        /exp/distr
        /exp/distr/archlinux
        /exp/distr/debian

    And I have the following entries in fstab:

        /dev/disk/by-label/users  /mnt/users  ext4  defaults  0 0
        /dev/disk/by-label/distr  /mnt/distr  ext4  defaults  0 0
        /mnt/users  /exp/users  none  bind  0 0
        /mnt/distr  /exp/distr  none  bind  0 0

    And my exports file is exactly this:

        /exp        192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash)
        /exp/distr  192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)

    And exportfs -arv shows:

        exporting 192.168.1.0/24:/exp/distr
        exporting 192.168.1.0/24:/exp

    Then why am I able to do this on a client and get no error?

        mount -t nfs4 server:/exp/users /tmp/test

    Even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write there goes to the underlying directory of /exp/users, which can be seen when I umount /exp/users; ls /exp/users.

    3. The odd case of showmount -d server

    As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients, or stale entries in /var/lib/nfs/rmtab, as can be read:

        The rpc.mountd daemon registers every successful MNT request by adding
        an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request
        from an NFS client, rpc.mountd simply removes the matching entry from
        /var/lib/nfs/rmtab, as long as the access control list for that export
        allows that sender to access the export. (...) Note, however, that
        there is little to guarantee that the contents of /var/lib/nfs/rmtab
        are accurate. A client may continue accessing an export even after
        invoking UMNT. If the client reboots without sending a UMNT request,
        stale entries remain for that client in /var/lib/nfs/rmtab.

    After reading this I surely wonder:

    - Isn't it terribly insecure to just expose this type of client information?
    - Aren't unaware server admins bound to have an rmtab with a lot of stale clients?
    - Is this the reason that clients that mount nfs4 directories with mount -v get to see output like "nothing was mounted" even though something was mounted?

    I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment.. :)
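    On the fsid=0 question, both styles still appear in the wild; a sketch of the explicit style (based on general nfs-utils behavior, so treat version specifics as an assumption): recent nfs-utils can synthesize the v4 pseudo-root itself, making an explicit fsid=0 export optional, while with an explicit root the child exports are only traversable from the root if crossmnt (or nohide on the child) is set:

        # explicit pseudo-root style
        /exp        192.168.1.0/24(fsid=0,rw,crossmnt,no_subtree_check)
        /exp/distr  192.168.1.0/24(rw,no_subtree_check)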

    Read the article

  • Mac OS X 10.8 VPN Server: Bypass VPN for LAN traffic (routing LAN traffic to secondary connection)

    - by Dan Robson
    I have somewhat of an odd setup for a VPN server with OS X Mountain Lion. It's essentially being used as a bridge to bypass my company's firewall to our extranet connection - certain things our team needs to do require unfettered access to the outside, and changing IT policies to allow traffic through the main firewall is just not an option.

    The extranet connection is provided through a Wireless-N router (let's call it Wi-Fi X). My Mac Mini server is configured with the connection to this router as the primary connection, thus unfettered access to the internet via the router. Connections to this device on the immediate subnet are possible through the LAN port, but outside the subnet things are less reliable.

    I was able to configure the VPN server to provide IP addresses to clients in the 192.168.11.150-192.168.11.200 range using both PPTP and L2TP, and I'm able to connect to the extranet through the VPN using the standard Mac OS X VPN client in System Preferences. However, unsurprisingly, a local address (let's call it internal.company.com) returns nothing.

    I tried to bypass the limitation of the VPN server by setting up Routes in the VPN settings. Our company uses 13.x.x.x for all internal traffic, instead of 10.x.x.x, so the routing table looked something like this:

        IP Address     Subnet Mask     Configuration
        0.0.0.0        248.0.0.0       Private
        8.0.0.0        252.0.0.0       Private
        12.0.0.0       255.0.0.0       Private
        13.0.0.0       255.0.0.0       Public
        14.0.0.0       254.0.0.0       Private
        16.0.0.0       240.0.0.0       Private
        32.0.0.0       224.0.0.0       Private
        64.0.0.0       192.0.0.0       Private
        128.0.0.0      128.0.0.0       Private

    I was under the impression that if nothing was entered here, all traffic was routed through the VPN. With something entered, only traffic specifically marked to go through the VPN would go through it, and all other traffic would be up to the client to access using its own default connection. This is why I had to specifically mark every subnet except 13.x.x.x as Private.

    My suspicion is that since I can't reach the VPN server from outside the local subnet, it's not making a connection to the main DNS server and thus can't be reached on the larger network. I'm thinking that hostnames like internal.company.com aren't kicked back to the client to resolve, because the server has no idea that the IP address falls in the public range, since I suspect (I probably should ping test it, but don't have access to it right now) that it can't reach the DNS server to find out anything about that hostname.

    It seems to me that all my options for resolving this boil down to the same type of solution: figure out how to reach the DNS with the secondary connection on the server. I'm thinking that I need to do [something] to get my server to recognize that it should also check my local gateway (let's say Server IP == 13.100.100.50 and Gateway IP == 13.100.100.1). From there, the gateway can tell me to go find the DNS server at 13.1.1.1 and give me information about my internal network. I'm very confused about this path - really not sure if I'm even making sense.

    I thought about trying to do this client side, but that doesn't make sense either, since that would add time to each and every client-side setup. Plus, it just seems more logical to solve it on the server - I could either get rid of my routing table altogether or keep it; I think the only difference would be that internal traffic would also go through the server - probably an unnecessary burden on it.

    Any help out there? Or am I in over my head?
Forward proxy or transparent proxy is also an option for me, although I have no idea how to set either of those up. (I know, Google is my friend.)
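    If the core need is just "reach 13.x.x.x and its DNS from the server's LAN port", a sketch of the OS X side (all addresses below are the placeholders from the question): add a static route for the internal block via the LAN gateway, and use a per-domain resolver so only company lookups go to the internal DNS:

        # route internal 13.x.x.x traffic via the LAN gateway
        sudo route -n add -net 13.0.0.0 -netmask 255.0.0.0 13.100.100.1

        # send lookups for company.com to the internal DNS server only
        sudo mkdir -p /etc/resolver
        echo "nameserver 13.1.1.1" | sudo tee /etc/resolver/company.com

    Note a route added this way does not survive a reboot; persisting it needs a LaunchDaemon or similar.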

    Read the article

  • Why is IIS Anonymous authentication being used with administrative UNC drive access?

    - by Mark Lindell
    My account is local administrator on my machine. If I try to browse to a non-existent drive letter on my own box using a UNC path name:

        \\mymachine\x$

    my account would get locked out. I would also get the following warning (Event ID 100, Type "Warning") 5 times under the "System" group in Event Viewer on my box:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount'
        due to the following error: Logon failure: unknown user name or bad password.

    I would also get the following warning 3 times:

        The server was unable to logon the Windows NT account 'ourdomain\myaccount'
        due to the following error: The referenced account is currently locked out
        and may not be logged on to.

    On the domain controller, Event ID 680 of type "Failure Audit" would appear 4 times under the "Security" group in Event Viewer:

        Logon attempt by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon account: myaccount

    Followed by Event ID 644:

        User Account Locked Out:
        Target Account Name: myaccount
        Target Account ID: OURDOMAIN\myaccount
        Caller Machine Name: MYMACHINE
        Caller User Name: STAN$
        Caller Domain: OURDOMAIN
        Caller Logon ID: (0x0,0x3E7)

    Followed by another 4 errors having Event ID 680.

    Strangely, every time I tried to browse to the UNC path I would be prompted for a user name and password, the above errors would be written to the log, and my account would be locked out. When I hit "Cancel" in response to the user name/password prompt, the following message box would display:

        Windows cannot find \\mymachine\x$. Check the spelling and try again, or
        try searching for the item by clicking the Start button and then clicking
        Search.

    I checked with others in the group using XP, and they only got the above message box when browsing to a "bad" drive letter on their box. No one else was prompted for a user name/password and then locked out. So, every time I tried to browse to the "bad" drive letter, behind the scenes XP was trying to log in 8 times using bad credentials (or, at least a bad password, as the login was correct), causing my account to get locked out on the 4th try. Interestingly, if I tried browsing to a "good" drive such as c$, it would work fine.

    As a test, I tried logging on to my box under a different login and browsing the "bad" UNC path. Strangely, my "ourdomain\myaccount" account was getting locked out - not the one I was logged in as! I was totally confused as to why the credentials for the other login were being passed.

    After much Googling, I found a link referring to some IIS settings I was vaguely familiar with from the past but could not see how they would affect this issue. It was related to the IIS directory security setting "Anonymous access and authentication control" located under:

        Control Panel/Administrative Tools/Computer Management/Services and
        Applications/Internet Information Services/Web Sites/Default Web
        Site/Properties/Directory Security/Anonymous access and authentication
        control/Edit/Password

    I found no indication while scouring the Internet that this property was related to my UNC problem. But I did notice that this property was set to my domain user name and password. And my password did age recently, but I had not reset the password accordingly for this property. Sure enough, keying in the new password corrected the problem. I was no longer prompted for a user name/password when browsing the UNC path, and the account lock-outs ceased.

    Now, a couple of questions:

    1. Why would an IIS setting affect the browsing of a UNC path on a local box?
    2. Why had I not encountered this problem before? My password has aged several times and I've never encountered this problem. And I can't remember the last time I updated the "Anonymous access" IIS password, it's been so long. I've run the script after a password reset before and never had my account locked out due to the UNC problem (the script accesses UNC paths as a normal part of its processing). Windows Update did install "Cumulative Security Update for Internet Explorer 7 for Windows XP (KB972260)" on my box on 7/29/2009. I wonder if this is responsible.
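    For anyone hitting the same thing: on IIS 6 the anonymous-account password can also be updated from the command line, which is easier to script after a password change. A sketch (assumes the stock AdminScripts location, and w3svc/1/root is a guess at the Default Web Site's metabase path - adjust to wherever the anonymous identity is actually set):

        cd /d %systemdrive%\Inetpub\AdminScripts
        cscript adsutil.vbs SET w3svc/1/root/AnonymousUserPass "NewP@ssw0rd"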

    Read the article

  • 2 drives, slow software RAID1 (md)

    - by bart613
    Hello, I've got a server from hetzner.de (EQ4) with 2x SAMSUNG HD753LJ drives (750GB, 32MB cache). The OS is CentOS 5 (x86_64). The drives are combined into two RAID1 partitions:

    - /dev/md0, which is 512MB and holds only the /boot partition
    - /dev/md1, which is over 700GB and is one big LVM hosting the other partitions

    Now, I've been running some benchmarks, and it seems that even though they are exactly the same drives, speed differs a bit on each of them:

        # hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   25612 MB in 1.99 seconds = 12860.70 MB/sec
         Timing buffered disk reads:  352 MB in 3.01 seconds = 116.80 MB/sec

        # hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   25524 MB in 1.99 seconds = 12815.99 MB/sec
         Timing buffered disk reads:  342 MB in 3.01 seconds = 113.64 MB/sec

    Also, when I run e.g. pgbench, which stresses IO quite heavily, I can see the following in the iostat output:

        Device:  rrqm/s  wrqm/s   r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
        sda        0.00  231.40  0.00  298.00     0.00  9683.20    32.49     0.17   0.58   0.34  10.24
        sda1       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        sda2       0.00  231.40  0.00  298.00     0.00  9683.20    32.49     0.17   0.58   0.34  10.24
        sdb        0.00  231.40  0.00  301.80     0.00  9740.80    32.28    14.19  51.17   3.10  93.68
        sdb1       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        sdb2       0.00  231.40  0.00  301.80     0.00  9740.80    32.28    14.19  51.17   3.10  93.68
        md1        0.00    0.00  0.00  529.60     0.00  9692.80    18.30     0.00   0.00   0.00   0.00
        md0        0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        dm-0       0.00    0.00  0.00    0.60     0.00     4.80     8.00     0.00   0.00   0.00   0.00
        dm-1       0.00    0.00  0.00  529.00     0.00  9688.00    18.31    24.51  49.91   1.81  95.92

        Device:  rrqm/s  wrqm/s   r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
        sda        0.00  152.40  0.00  330.60     0.00  5176.00    15.66     0.19   0.57   0.19   6.24
        sda1       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        sda2       0.00  152.40  0.00  330.60     0.00  5176.00    15.66     0.19   0.57   0.19   6.24
        sdb        0.00  152.40  0.00  326.20     0.00  5118.40    15.69    19.96  55.36   3.01  98.16
        sdb1       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        sdb2       0.00  152.40  0.00  326.20     0.00  5118.40    15.69    19.96  55.36   3.01  98.16
        md1        0.00    0.00  0.00  482.80     0.00  5166.40    10.70     0.00   0.00   0.00   0.00
        md0        0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        dm-0       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        dm-1       0.00    0.00  0.00  482.80     0.00  5166.40    10.70    30.19  56.92   2.05  99.04

        Device:  rrqm/s  wrqm/s   r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
        sda        0.00  181.64  0.00  324.55     0.00  5445.11    16.78     0.15   0.45   0.21   6.87
        sda1       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        sda2       0.00  181.64  0.00  324.55     0.00  5445.11    16.78     0.15   0.45   0.21   6.87
        sdb        0.00  181.84  0.00  328.54     0.00  5493.01    16.72    18.34  61.57   3.01  99.00
        sdb1       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        sdb2       0.00  181.84  0.00  328.54     0.00  5493.01    16.72    18.34  61.57   3.01  99.00
        md1        0.00    0.00  0.00  506.39     0.00  5477.05    10.82     0.00   0.00   0.00   0.00
        md0        0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        dm-0       0.00    0.00  0.00    0.00     0.00     0.00     0.00     0.00   0.00   0.00   0.00
        dm-1       0.00    0.00  0.00  506.39     0.00  5477.05    10.82    28.77  62.15   1.96  99.00

    And this is completely confusing me. How come two identically specced drives show such a difference in write speed (see %util)? I haven't really paid attention to those speeds before, so perhaps this is something normal - if someone could confirm, I would be really grateful. Otherwise, if someone has seen such behavior before or knows what is causing it, I would really appreciate an answer.

    I'll also add that both "smartctl -a" and "hdparm -I" outputs are exactly the same and do not indicate any hardware problems. The slower drive has already been replaced twice (with new ones). I also asked for the drives to swap places, and then sda was the slower one and sdb the quicker (so the slow one was the same physical drive). The SATA cables have been changed twice already.
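    Two quick, non-destructive checks worth running on both drives (a sketch; these only read settings): the on-drive write cache state and the NCQ queue depth, either of which could plausibly produce this kind of asymmetric await/%util under heavy small writes:

        hdparm -W /dev/sda /dev/sdb              # write-cache on/off state
        cat /sys/block/sda/device/queue_depth    # NCQ depth; 1 usually means NCQ is off
        cat /sys/block/sdb/device/queue_depth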

    Read the article

  • Routing not working correctly using the laravel framework

    - by samayres1992
    I'm using the book written by one of the guys who created Laravel, so I'd like to think for the most part this code isn't horribly wrong. My server is set up with nginx serving all static files and Apache2 serving PHP. My config for each is the following:

    apache2:

        <VirtualHost *>
            # Host that will serve this project.
            ServerName litl.it

            # The location of our project's public directory.
            DocumentRoot /var/www/litl.it/laravel/public

            # Useful logs for debug.
            CustomLog /var/log/apache.access.log common
            ErrorLog /var/log/apache.error.log

            # Rewrites for pretty URLs, better not to rely on .htaccess.
            <Directory /var/www/litl.it/laravel/public>
                <IfModule mod_rewrite.c>
                    Options -MultiViews
                    RewriteEngine On
                    RewriteCond %{REQUEST_FILENAME} !-f
                    RewriteRule ^ index.php [L]
                </IfModule>
            </Directory>
        </VirtualHost>

    nginx:

        server {
            # Port that the web server will listen on.
            listen 80;

            # Host that will serve this project.
            server_name litl.it *.litl.it;

            # Useful logs for debug.
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            rewrite_log on;

            # The location of our project's public directory.
            root /var/www/litl.it/laravel/public;

            # Point index to the Laravel front controller.
            index index.php;

            location / {
                # URLs to attempt, including pretty ones.
                try_files $uri $uri/ /index.php?$query_string;
            }

            # Remove trailing slash to please routing system.
            if (!-d $request_filename) {
                rewrite ^/(.+)/$ /$1 permanent;
            }

            # PHP FPM configuration.
            location ~* \.php$ {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy_params;
                try_files index index.php $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root/php/$fastcgi_script_name;
            }

            # We don't need .ht files with nginx.
            location ~ /\.ht {
                deny all;
            }

            location @proxy {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/proxy_params;
            }

            error_page 403 /error/403.html;
            error_page 404 /error/404.html;
            error_page 405 /error/405.html;
            error_page 500 501 502 503 504 /error/5xx.html;

            location ^~ /error/ {
                internal;
                root /var/www/litl.it/lavarel/public/error;
            }
        }

    I'm including these server configs, as I feel this may be the issue. Here is my incredibly basic routing file, which should return "routing is working" on domain.com/test but instead just returns the homepage:

        <?php

        Route::get('/', function() {
            return View::make('hello');
        });

        Route::get('/test', function() {
            return "routing is working";
        });

    Any ideas where I'm going wrong? I'm following this tutorial very closely, and I'm confused why there is an issue. Thanks!
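    One thing that stands out (an observation, not a verified fix): the PHP location block mixes proxy_pass with fastcgi_* directives, but since Apache on 127.0.0.1:8080 is what actually executes PHP, the FastCGI directives never apply. A leaner sketch of that block under this assumption:

        # Apache handles PHP, so just proxy .php requests to it
        location ~* \.php$ {
            proxy_pass http://127.0.0.1:8080;
            include /etc/nginx/proxy_params;
        }

    Also note that a request for /test only becomes PHP after try_files rewrites it internally to /index.php?$query_string, so what gets proxied to Apache is /index.php and the /test path is lost - which would explain routes always landing on the homepage. Proxying location / straight to Apache and letting its mod_rewrite rules do the pretty-URL handling would sidestep the double rewrite entirely.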

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.

    The Problem: These last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh, since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied:

    1. Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong).

    2. I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyways; frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain, or 2) is that an unnecessary step?

    The SSL Certificate Strikes Back: When I thought I had the proper SSL certificate assigned, for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain "mail.domain.com". Is this an issue that will be resolved by problem #2, or is it a separate one entirely?

    Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks!

    -Eric

    P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time, the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it... The only mailbox left was the Administrator account, the built-in one I couldn't delete. So I attempted to manually uninstall it following several guides online, only to now be stuck unable to launch the installer, and I had to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force-uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
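    On the certificate-request question, Exchange 2010 can generate its own request from the Exchange Management Shell, which bakes in all the names Exchange services need; a rough sketch (domain names and file paths are placeholders from the question):

        $req = New-ExchangeCertificate -GenerateRequest `
            -SubjectName "CN=mail.domain.com" `
            -DomainName mail.domain.com, autodiscover.domain.com `
            -PrivateKeyExportable $true
        Set-Content -Path "C:\mail_domain_com.req" -Value $req

        # after the CA (GoDaddy) issues the certificate:
        Import-ExchangeCertificate -FileData ([Byte[]]$(Get-Content `
            -Path "C:\mail_domain_com.cer" -Encoding Byte -ReadCount 0)) |
            Enable-ExchangeCertificate -Services "IIS,SMTP"

    The Outlook mismatch error is often about the internal URLs Autodiscover hands out rather than the browser-facing name, which is one reason the Exchange-generated request (with -DomainName covering every name clients will use) is generally preferred over a plain IIS request.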

    Read the article

  • Centos 7. Freeradius fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB), and it seems the FreeRADIUS service can't start properly on boot. But it starts fine as the root user or in debug mode (radiusd -X) and works just fine! Debug mode shows no errors. The systemctl command shows that radiusd.service has failed to start.

    /var/log/messages output:

        Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server...
        Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server...
        Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server....
        Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon.
        Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
        Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent.
        Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*.
        Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1
        Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server..
        Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state.
        Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel
        Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK]
        Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming.
        Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server.
        Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server.

    /var/log/radius/radius.log output:

        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius"
        Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0)
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)'
        Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0)
        Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql"

    After seeing this I tried to replicate the problem: I killed mariadb.service and started debug mode again, and it spits out the same errors as in radius.log. I tried disabling iptables and firewalld and rebooting, but no luck:

        systemctl disable iptables
        systemctl disable firewalld

    So maybe the problem is in the process startup order, or a delay of some kind. Maybe FreeRADIUS's SQL module can't connect to a not-yet-started MariaDB? If so, how can I fix this? In earlier versions of RHEL/CentOS you could easily see the service start order in rc.d and the like; now I don't know. I am new to this fancy "systemd", "systemctl", "firewalld" stuff CentOS 7 introduced, so sorry, I'm a little bit confused. Also this new FreeRADIUS 3 structure...

    PS. MariaDB is enabled on startup, and the credentials in the FreeRADIUS DB configuration are correct.

    A little update - cat /etc/systemd/system/multi-user.target.wants/radiusd.service output:

        [Unit]
        Description=FreeRADIUS high performance RADIUS server.
        After=syslog.target network.target

        [Service]
        Type=forking
        PIDFile=/var/run/radiusd/radiusd.pid
        ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
        ExecStartPre=/usr/sbin/radiusd -C
        ExecStart=/usr/sbin/radiusd -d /etc/raddb
        ExecReload=/usr/sbin/radiusd -C
        ExecReload=/bin/kill -HUP $MAINPID

        [Install]
        WantedBy=multi-user.target
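    The unit file above only orders radiusd after syslog and network, so systemd is free to start it in parallel with MariaDB - which matches the log, where the sql module fails at 15:52:30 and MariaDB finishes starting at 15:52:31. A sketch of a drop-in that adds the ordering (assumes the MariaDB unit is named mariadb.service, as on stock CentOS 7):

        # /etc/systemd/system/radiusd.service.d/mariadb.conf
        [Unit]
        After=mariadb.service
        Wants=mariadb.service

    followed by systemctl daemon-reload. A drop-in is preferable to editing the packaged unit, so future package updates still apply cleanly.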

    Read the article

  • Performance of Cluster Shared Volume file copy from SAN

    - by Sequenzia
    I am hoping someone can help me out with a strange issue. We are running a Microsoft Failover Cluster with Server 2008 R2 and an EqualLogic PS4000 SAN.

    Our main configuration has 2 Dell PowerEdge T710 servers in the cluster. We have CSV and quorum set up. The servers each have 10 Broadcom 1Gb NICs. Right now, 4 of the NICs are on the iSCSI network for accessing the SAN; they use MPIO and the Dell HIT pack. We have 5 VMs running on each node, and everything runs smoothly - no noticeable performance issues or anything. From the SAN I can see the 4 iSCSI connections from each server to each volume (CSV and quorum). Again, it seems to perform great.

    The problem I am running into is with backups. I have tried a few backup programs like BackupChain and Veeam. The problem is both of them are very, very slow to back up the VMs. For instance, I have a 500GB (fixed disk) VHD that's running on the cluster. It takes over 18 hours to back up that VHD, and that's with compression and deduplication turned off, which is supposed to be the fastest. We also have a separate server that is just for backups; it has a lot of direct-attached storage.

    As part of the troubleshooting I decided to bring that server into the cluster as a node. It now has access to the CSV and can read from C:\ClusterStorage\Volume1, which is where our VHDs live. This backup server only has 2 NICs: 1 NIC is going to the iSCSI network and the other is just on the main network. It has Intel NICs in it, without any sort of MPIO or teaming.

    So with the 3rd server now in the cluster, I started doing some benchmarking. I have a test VHD of about 7GB that's stored in the CSV. I have tested copying that VHD from all 3 servers to direct-attached storage in the respective server. The 2 Dell servers that are the main nodes in the cluster (they house the VMs) read that file at about 20MB/sec, which is way too slow for the backups. The other server, which only has 1 NIC to the SAN, reads at around 100MB/sec.

    I spent a few hours on the phone with Dell today about this. We went through all kinds of tests, and the rep was pretty dumbfounded; he really has no idea why the server with only 1 NIC is reading about 5 times as fast as the servers with 4 NICs and MPIO. We looked at the network utilization of the NICs while the file copy was going on. The servers with the 4 NICs had a small increase of activity during the file copy, but they only went up to around 8-10% on all 4 NICs. The other server with the 1 NIC jumped up to over 80% during the file copy.

    I plan on doing some more testing after hours and calling Dell back tomorrow, but I really am confused (and so is Dell's support rep) why I cannot get faster file-copy access to the CSV on those servers. Anyone have any input on this? Any feedback would be greatly appreciated. Thanks in advance.
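    One thing worth ruling out (a hypothesis, not a confirmed diagnosis): on Server 2008 R2, a node that does not currently own a CSV performs its I/O in redirected mode over the cluster network instead of straight down its own iSCSI paths. Checking ownership is quick from PowerShell:

        Import-Module FailoverClusters
        Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State

    If the fast single-NIC node happens to be the CSV owner, moving ownership with Move-ClusterSharedVolume before re-running the copy test would make the comparison fair.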

    Read the article

  • Hardware recommendations / parts list for a modern, quiet ZFS NAS box - 2011-Feb edition [closed]

    - by dandv
    I want to build some really reliable storage for my data, and it seems that ZFS is the only filesystem at the moment that does live checksumming. That rules out DroboPro, so I'm looking at building a quiet ZFS NAS that would start with 4 hard drives of 2TB or larger. I'd like this system to be very reliable and relatively future-proof for 2-3 years, so I'm willing to invest some $$$ and buy higher-end components. I did see questions here and on other forums about low-cost servers, but I'm not looking for those. I'd be super happy to go for an off-the-shelf solution, but I haven't found one that's quiet.

    I started doing the research (summarized on my wiki), but I realized that it just gets too complicated for what I know as a software dude, and I'm entering the analysis-paralysis area. At this point, I'm basically looking for a parts list for a configuration that will work (and is modern), and I know there are folks around here who are way more competent than me. I've built computers and am comfortable assembling one and messing with *nix; I can follow guides; I just want to end the decision process for the hardware and software configuration.

    What I've researched so far (not that these are very modern components):

    - Case: I think I've settled on the Antec Twelve Hundred because it cools well, is quiet, and simply has 12 bays that allow elastic mounting. The SilverStone Raven is its counter-candidate, but I find its construction quite odd.
    - PSU: I'm torn between the Antec CP-850 and the Nexus RX-8500, but I did this research more than a year ago. The Nexus has a very uniform power profile, and I'd rather not have the Antec spin up and down based on load. On the other hand, I'm not sure how often my file server will draw more than 400W under use.
    - Hard drives: I've read that WD Black drives are actually WD RE3s with a software setting changed. I'd also like to buy different drive types, not just 4 WDs. Recommendations? Right now I have a 2TB Hitachi Deskstar 7K300.
    - Motherboard, CPU and RAM: I have no idea, other than the RAM must be ECC. I already asked a question here about ECC RAM, but I was misguided and was looking for a motherboard that would support USB 3.0 as well. I've learned to go with eSATA, or worry about USB later.
    - Then there's the (liquid) cooling, the Wi-Fi card, and FreeBSD vs. OpenSolaris Express.

    Lastly, I'm wondering if I can make this PC into a media server by adding a Blu-ray drive and a good sound card. But support for Blu-ray is spotty on Linux, and I don't know if Windows 7 on VirtualBox would get sufficient hardware access to output HDMI or S/PDIF signals. (Running OpenSolaris virtualized is not an option because of the reliability risk.) Then there are HDCP concerns. Suggestions on that would be appreciated as well, but I don't want us to get sidetracked.

    A specific shopping list for the core components would be great, so I can start ordering and in the meantime educate myself on the other issues. Finally, I think this could become a great FAQ for those technically inclined to build their own ZFS server but confused by the dizzying array of options out there, and I promise to compile the results and share my experience building and benchmarking the server.

    Read the article

  • Tomcat and ASP site under IIS6 with SSL

    - by Rafe
    I've been working on migrating our company's website from its original server to a new one and am having two different but possibly related problems. The box this is sitting on is a Windows 2003 Server x64 running IIS 6. The Tomcat version is 5.5.x, as it was the version the original deployment was built on. There are two other sites on the server, one in plain HTML and another in PHP, and the one I am trying to migrate is a combination of Java and ASP (the introductory/sign-in pages and many client reports being Java, and the administration pages being ASP).

    First of all, I can only access the site if I enter the IP followed by :8080 (xxx.xxx.xxx.xxx:8080). The original setup had an index.html file in the root of the site with a bit of JavaScript in the header that pointed the site to 'www.mysite.com/app/public', but if I try going directly to the site without the 8080, I get a 'page not found' error, and the JavaScript redirector causes the same problem because it doesn't add the 8080 into the URL - even though on the original site the 8080 wasn't present, so I don't understand why it would need it now. The JS redirect looks like this:

        <script language="JavaScript">
        <!--
        location.href = "/app/public/"
        location.replace("/app/public/");
        //-->
        </script>

    When setting the site up, I used the command line to unbind IIS from all of the IPs on the system (there are 12 IPs on this box) because I was led to believe Tomcat wanted to use localhost, which wasn't accessible. I'm not sure if this was the right thing to do, but I'm throwing it in for the sake of completeness. And actually, at this point, trying to go to localhost from the server itself throws up a 'could not connect to localhost' error. If I go to localhost:8080, I get the Tomcat welcome page. If I go to localhost:8080/app/public, I get the intro page of our website. So I'm not sure what I'm even looking at in this case, that is, what the proper behavior should be.

    The second part of the problem is that if I go to either the IP or localhost as above (localhost:8080/app/public) and click our login link, it is supposed to transfer me to our login page, yet instead I receive a 'could not connect' error, and the URL has changed to localhost:8443/app/secure. From my research I see that port 8443 is Tomcat's SSL port, and the server.xml refers to it as follows:

        <Connector port="8080" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" redirectPort="8443" acceptCount="100"
                   connectionTimeout="20000" disableUploadTimeout="true" />

    I have an SSL certificate assigned to the site via IIS, and I was under the impression that by default Tomcat allowed IIS to handle secure connections, but apparently something is munged because it's not working. There is another section in the server.xml that reads like this:

        <Connector port="8009" enableLookups="false" redirectPort="443" protocol="AJP/1.3" />

    I'm not sure what that is for, although port 443 is the SSL port that IIS uses, so I'm confused as to what this is supposed to be doing.

    Another question I have is: when does the isapi_redirector actually come into play? How does it know when to try to serve pages through Tomcat and when not to? I've hunted around the 'net for an answer and have yet to find a clear dialogue on the subject. Anyone have any pointers as to where to look for a solution to all of this?
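    On the isapi_redirector question: the redirector decides purely by URL pattern. It is configured with a workers.properties file (how to reach Tomcat, normally over the AJP connector on port 8009 shown above) and a uriworkermap.properties file (which URLs to forward); everything that doesn't match a mapping is served by IIS itself. A minimal sketch (the file layout and the worker name are assumptions for illustration):

        # workers.properties
        worker.list=tomcat1
        worker.tomcat1.type=ajp13
        worker.tomcat1.host=localhost
        worker.tomcat1.port=8009

        # uriworkermap.properties
        /app/*=tomcat1

    With that in place, requests arrive on IIS's own ports (80/443, so no :8080 in URLs and IIS terminates SSL), and anything under /app is handed to Tomcat over AJP.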

    Read the article

  • vagrant fails to bring up additional adapter for centos vm using virtual box provider

    - by Anadi Misra
    This is in continuation of the question asked here about a host-only adapter on DHCP. I upgraded to Vagrant 1.6.3 and updated the Vagrantfile to the following setting for multiple adapters:

        # add additional adapter for inter-machine networking
        dev.vm.network :private_network, :type => "dhcp", :adapter => "2", :netmask => "255.255.255.0"

    It goes through creating the adapters but then fails bringing up the NIC on the VM:

        Anadis-MacBook-Pro:full-stack-env anadi$ vagrant up
        Bringing machine 'full-stack-env' up with 'virtualbox' provider...
        ==> full-stack-env: Clearing any previously set forwarded ports...
        ==> full-stack-env: Clearing any previously set network interfaces...
        ==> full-stack-env: Preparing network interfaces based on configuration...
            full-stack-env: Adapter 1: nat
            full-stack-env: Adapter 2: hostonly
        ==> full-stack-env: Forwarding ports...
            full-stack-env: 22 => 4223 (adapter 1)
            full-stack-env: 8080 => 8090 (adapter 1)
        ==> full-stack-env: Running 'pre-boot' VM customizations...
        ==> full-stack-env: Booting VM...
        ==> full-stack-env: Waiting for machine to boot. This may take a few minutes...
            full-stack-env: SSH address: 127.0.0.1:4223
            full-stack-env: SSH username: vagrant
            full-stack-env: SSH auth method: private key
            full-stack-env: Warning: Connection timeout. Retrying...
            full-stack-env: Warning: Connection timeout. Retrying...
            full-stack-env: Warning: Remote connection disconnect. Retrying...
        ==> full-stack-env: Machine booted and ready!
        ==> full-stack-env: Checking for guest additions in VM...
        ==> full-stack-env: Setting hostname...
        ==> full-stack-env: Configuring and enabling network interfaces...
        The following SSH command responded with a non-zero exit status.
        Vagrant assumes that this means the command failed!

        ARPCHECK=no /sbin/ifup eth 2> /dev/null

        Stdout from the command:

        Device eth does not seem to be present, delaying initialization.

    However, when I log in to the environment I see two network interfaces, as expected:

        Anadis-MacBook-Pro:full-stack-env anadi$ vagrant ssh
        Last login: Wed Jun  4 12:54:47 2014 from 10.0.2.2
        [vagrant@full-stack-env ~]$ ifconfig
        eth0   Link encap:Ethernet  HWaddr 08:00:27:BD:39:57
               inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
               inet6 addr: fe80::a00:27ff:febd:3957/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:511 errors:0 dropped:0 overruns:0 frame:0
               TX packets:360 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:54574 (53.2 KiB)  TX bytes:46675 (45.5 KiB)

        eth1   Link encap:Ethernet  HWaddr 08:00:27:A3:86:C9
               inet addr:172.28.128.3  Bcast:172.28.128.255  Mask:255.255.255.0
               inet6 addr: fe80::a00:27ff:fea3:86c9/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:5 errors:0 dropped:0 overruns:0 frame:0
               TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:1360 (1.3 KiB)  TX bytes:894 (894.0 b)

        lo     Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING  MTU:16436  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    I am a bit confused here: why is it trying to bring up another NIC (eth2)? In the VM I used for creating this Vagrant box, I had already added two NICs.
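    One detail that stands out (a guess, not a confirmed fix): the failing command is /sbin/ifup eth - the interface name is missing its number, which would be consistent with :adapter => "2" being passed as a string where an integer is expected. A sketch of the alternative:

        # hypothesis: pass the adapter index as an integer, not a string
        dev.vm.network :private_network, :type => "dhcp", :adapter => 2, :netmask => "255.255.255.0"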

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3TB each back in the day, and later added another 2 disks of 3TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.

    About 2 weeks ago, I had one of the disks (disk #5; disk #2 had gone bad in the meantime) fail, and somehow (I have no idea why), disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford.

    After some digging through old log files, resetting the disks to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2TB, and mine are 3TB), and I can now find the ext4 filesystem that should be on the disks.

    The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would be laid out?

    Thanks! Some more mdadm output that might be helpful.

    mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7
          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036
             Chunk Size : 64K
             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0
             Chunk Size : 64K
                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally-used RAIDs holding swap etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
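    A common brute-force approach for the missing piece (a sketch only - --create rewrites superblocks even with --assume-clean, so it must be run against dd images of the disks rather than the originals, or at minimum with everything else untouched): re-create the array for each candidate permutation of the five surviving members, keeping the two lost slots as "missing", and test whether a valid ext4 filesystem appears:

        # DANGER: work on copies/images of the disks
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --level=6 --chunk=64 --metadata=1.0 \
              --raid-devices=7 /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing \
              /dev/sdg3 /dev/sdf3
        fsck.ext4 -n /dev/md0    # read-only check; repeat with a permuted order

    The --examine output above ("Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)") records the slot of each surviving disk, which should cut the permutations to test down considerably.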

    Read the article

  • Pairing Bluetooth device with PIN fails

    - by Pikaro
    I'm trying to pair my old BlackBerry 8310 with my Linux desktop (up-to-date Debian Sid, 3.15-10.dmz.1-liquorix-amd64) using blueman and its associated tools. Scanning for the device works equally well from both sides; however, I am unable to pair the two once it comes to entering the PIN.

    If I scan from my PC, I have two options in blueman-manager regarding my phone: directly selecting "pair", or selecting "setup". If I select "pair", nothing happens on my desktop, but the phone asks me to enter a PIN; if I do so, it reports that pairing has failed. During that, nothing is logged to the console. Selecting "setup" opens a configuration dialog that allows for entering or generating a PIN. Regardless, I get to a screen that tells me to enter the PIN on the phone, and at the same time, the phone pops up the equivalent dialog. This would be what one would expect to work; but whatever I enter (naturally, the same on both), both devices report that pairing has failed, and blueman-manager logs:

        init_services (/usr/lib/python2.7/dist-packages/blueman/main/Device.py:73)
        Loading services

        org.bluez.Error.AuthenticationFailed: Authentication Failed

    If I instead try to pair from the phone, I cannot see any kind of reaction from my desktop - all I get is the equivalent "pairing failed" message from the BlackBerry after I enter a PIN in the dialog that pops up there.

    hcitool scan and hciconfig -a work without complaints, but I cannot find a way to try the pairing as a whole on the console, since bluez-simple-agent seems to have been discontinued, even though that recommendation is everywhere on Google. hcitool cc as root opens the PIN dialog on the phone, then fails with "Input/Output error" once I enter it; a normal user is not permitted to execute this command. I also tried creating /usr/lib/bluetooth/<MAC>/pincodes to manually define a persistent PIN, which seems to have had no effect. The same goes for running the different commands as root, though I'm really confused about the internal structure of the Bluetooth subsystem now: they usually and inconsistently failed with Python or DBus errors, or just showed the same results.

    The only other Bluetooth devices I have around are a pair of Creative speakers. Trying "setup" asks me to enter a key on them, which is impossible. If I try "pair", I'm asked for a PIN as I should be, but no pairing takes place, and no errors appear on the console. (It just repeats their name a few times.) Interestingly, I tried that before writing my question, and nothing happened in terms of PIN questions, just like with the BlackBerry, which still shows no change; I don't think I actively changed anything since then. The BlackBerry can pair with and connect to the speakers, and everything goes as one would expect, so the problem is definitely with my desktop.

    So, thus my questions: What is that PIN window generated by, and why does it seem to appear randomly? How can I find out what, exactly, fails after trying to add the speakers, as this may give me a clue? Is there any kind of complete log that concerns itself with Bluetooth? What data can I provide to make this more solvable?
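    Since bluez-simple-agent is gone, BlueZ 5 ships bluetoothctl, which runs its own pairing agent on the console and surfaces the DBus errors directly; a sketch of a session (the MAC address is a placeholder):

        $ bluetoothctl
        [bluetooth]# agent KeyboardDisplay
        [bluetooth]# default-agent
        [bluetooth]# scan on
        [bluetooth]# pair AA:BB:CC:DD:EE:FF    # answer the PIN prompt inline

    For logging, stopping the service and running the daemon in the foreground with debugging (bluetoothd -n -d) is about the closest thing to a complete Bluetooth log.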

    Read the article

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want the machines on my local network to be able to access it without modifying /etc/hosts (or the equivalent for Windows/OSX). My router has:

        Primary DNS server    192.168.1.5
        Secondary DNS server  8.8.8.8 (Google's public DNS)

    Nginx is set up to serve websites externally as *.example.com. Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give the above behaviour (namely *.example.local pointing to 192.168.1.5)?

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.5
            netmask 255.255.255.0
            broadcast 192.168.1.255
            gateway 192.168.1.254

    /etc/hostname:

        avalon

    /etc/resolv.conf:

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN

    /etc/bind/named.conf.options:

        options {
            directory "/var/cache/bind";

            forwarders {
                8.8.8.8;
                8.8.4.4;
            };

            dnssec-validation auto;

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    /etc/bind/named.conf.local:

        zone "example.local" {
            type master;
            file "/etc/bind/zones/db.example.local";
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/db.192";
        };

    /etc/bind/zones/db.example.local:

        $TTL    604800
        @   IN  SOA avalon.example.local. webadmin.example.local. (
                    5         ; Serial, increment each edit
                    604800    ; Refresh
                    86400     ; Retry
                    2419200   ; Expire
                    604800 )  ; Negative Cache TTL

    /etc/bind/zones/db.192:

        $TTL    604800
        @   IN  SOA avalon.example.local. webadmin.example.local. (
                    4         ; Serial, increment each edit
                    604800    ; Refresh
                    86400     ; Retry
                    2419200   ; Expire
                    604800 )  ; Negative Cache TTL
        ;

    What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local and be served by my webserver?

    EDIT: I made several changes to the above files on the webserver.

    /etc/network/interfaces (end of file):

        dns-nameservers 127.0.0.1
        dns-search example.local

    /etc/bind/zones/db.example.local (end of file):

        @       IN  NS     avalon.example.local.
        @       IN  A      192.168.1.5
        avalon  IN  A      192.168.1.5
        webapp  IN  A      192.168.1.5
        www     IN  CNAME  192.168.1.5

    /etc/bind/zones/db.192 (end of file):

                IN  NS   avalon.example.local.
        73      IN  PTR  avalon.example.local.

    As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for an Ubuntu 13.10 machine, I had to make the following changes as well (not on the webserver, but on the separate client machine):

    /etc/nsswitch.conf, before:

        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    after:

        hosts: files dns

    /etc/NetworkManager/NetworkManager.conf, before:

        dns=dnsmasq

    after:

        #dns=dnsmasq

    The issue remains that it's not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
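    For the wildcard itself, a single extra record in db.example.local should cover every otherwise-unmatched name under example.local (a sketch; remember to bump the serial and reload BIND):

        *       IN  A      192.168.1.5

    One caveat worth noting: "www IN CNAME 192.168.1.5" is not valid, since a CNAME must point at a name rather than an address; that line should either be an A record or a CNAME to avalon.example.local.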

    Read the article

  • Can splitting an Access database cause printer and reporting issues?

    - by leeand00
    We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log in to their own machines first using roaming profiles. They then click an RDP connection file on the desktop and log in to the remote server, where they use MS Access as the shell; they don't have access to any of the explorer.exe features such as the start menu. The database they are logging into is more of an application, and provides functionality for entering data, querying data, and running reports via form-based menus.

    It all worked pretty well until we split the database as it was nearing 2 GB in size. We moved the payroll data out into a separate partition - a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition.

    Now, while everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display, and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to tell the users to set their default printers (on their local machines, not on the server) to Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print reports out, they are required to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as would be expected when using the Microsoft XPS Document Writer as your default printer.

    It's easy to tell if a user is having the problem, because a quick mouseover of the printer icon will yield a tooltip of (none) when they cannot view their reports, and a tooltip of Microsoft XPS Document Writer when they can. If the user's default printer on their local machine is set to anything other than Microsoft XPS Document Writer, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server. Telling the users to do this to print has been more of a band-aid on the whole situation until we find a better solution and an explanation as to why splitting a database would prevent users from printing or even viewing Access database reports - which is why I'm here asking this question.

    Also of note: all the printers on the network now show up on the server, so that when the users do click File->Print to print their reports on a physical printer, they have to look through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were displayed here, not all the printers on the network. My co-worker seems to think this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this or how it is related to splitting the database.

    Read the article

  • Hosed Windows 7 permissions

    - by Anthony
    Here is the most interesting thing I've noticed since the problems started: if I go into a control panel/system module (in this case the Resource Monitor) that has a "Check Online" type option, Firefox (my default browser) opens right up without a problem. But if I start Firefox from any shortcut (start menu, desktop, etc.), the Firefox process starts up (and the start menu icon starts glowing) only to end without notice a few seconds later.

    Possibly related: if I start up in Safe Mode (without networking - I haven't tried with networking yet), I can start up Firefox or Chrome just fine, but if I attempt to open Chrome normally, I get a permissions error. Opera and Safari seem to be okay (mostly); Safari crashes when I try to download any files.

    All of the above leads me to believe that some (but clearly not all) core files have messed-up permissions - or rather, that I no longer have permission. The system still does, based on Firefox opening without fail when the system initiates it. I've run MS Forefront once in normal mode, and Malwarebytes twice in normal mode and once in safe mode. One trojan was found and deleted, but the problem persists.

    Two other things worth mentioning:

    I accidentally duplicated my library. I thought I'd try to add the "Internet" folder to my start menu, next to Music and Downloads. The first advanced thing I tried was "create new library". I clearly misunderstood what this means: I thought it was a way to add virtual folders to the library (which I thought, in turn, would allow me to choose it as a link on the start menu), but instead it recreated my already existing user folder, AppData and all. I didn't notice this until today.

    Then I tried setting permissions for my User folder to full control, recursively. Confused but not giving up, I thought I could maybe create a shortcut to the NetHood folder manually, but instead got hit with an access-denied error. So I tried to change the permission levels for all sub-folders of my user folder so that I had full control. I got several access-denied errors along the way. At this point I gave up, went out, ended up caught in the rain and stuck on a friend's couch, and showed up late for work the next day. Thanks for nothing, Microsoft.

    When I finally got home today (20 hours later), I noticed that Firefox was acting really strange. I tried opening Chrome to see if the problem was client-side or server-side, and instead got the above-mentioned "you don't have permission to open this program" alert. And I think that's the whole story. Oh, I also did a system restore, but only chose a point from this morning (an auto update); it worked, but the problem wasn't fixed. And then all the earlier restore points were gone.

    So the questions are: a) is there a way to set the admin and user privs back to "default"? b) would this, in anyone's expert opinion, fix the problems I'm having? c) how come being logged in as an admin isn't the same as being logged in with admin privs? It seems that half the time I have to use "run as admin" for fairly standard things, because I'm being treated as me-the-user and not me-the-admin. Thanks for reading.
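    On question (a): Windows 7 ships icacls, which can reset the ACLs on a folder tree back to the inherited defaults. A cautious sketch - the path is a placeholder for the affected profile, this is not guaranteed to restore the out-of-box ACLs on system folders, and it's worth testing on a copy first:

        rem Run from an elevated command prompt; C:\Users\YourName is a placeholder path
        icacls "C:\Users\YourName" /reset /T /C

    /reset replaces explicit permissions with inherited ones, /T recurses, and /C continues past errors. If ownership was also changed, takeown /F <path> /R may be needed first.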

    Read the article

  • Cablemodem (SBG6580) firewall denying some outbound traffic? Why? Not configured [migrated]

    - by lairdb
    I finally got around to turning on the syslog for my cablemodem (Motorola Surfboard SBG6580), and I'm seeing about the expected amount of inbound attackage being blocked...

        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:56 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 17.172.232.109,5223 --> 66.27.xx.xx,53814 DENY:Firewall interface access request
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,53385 DENY: Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:02 Local0.Alert 192.168.111.1 May 31 04:58:57 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,443 --> 66.27.xx.xx,59960 DENY: Firewall interface [IP Fragmented Packet] attack
        2014-05-30 21:59:10 Local0.Alert 192.168.111.1 May 31 04:59:04 2014 SYSLOG[0]: [Host 192.168.111.1] UDP 12.230.209.198,4500 --> 66.27.xx.xx,61459 DENY:Firewall interface [IP Fragmented Packet] attack

    ...and that's great. (Sad, but great.) But I'm also seeing a HUGE amount of what appears to be denied outbound connectivity:

        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58969 --> 38.81.66.127,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58965 --> 162.222.41.13,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request
        2014-05-30 16:30:10 Local0.Alert 192.168.111.1 May 30 23:30:04 2014 SYSLOG[0]: [Host 192.168.111.1] TCP 192.168.111.100,58964 --> 38.81.66.179,443 DENY: Inbound or outbound access request

    Spot checking suggests that it's all legitimate traffic (opening connections to CrashPlan, etc.), and I have no restrictions configured in the modem, so I don't see why it should be blocking anything.

    Am I misreading the log entry, and it's not actually being denied? (Seems unlikely.) Is the ISP (TWC) pushing deny tables that are not exposed in the UI? (Tinfoil hat too tight.) I'm confused. (The good news, such as it is, is that AFAIK I'm not experiencing any actual issues... but maybe I am; tough to tell.) Thanks.
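    One way to test whether the modem is actually dropping these flows, rather than just logging them oddly, is to watch one of the logged destinations from a LAN host while the application runs. A rough sketch - the interface name is a placeholder, and the destination is copied from the log above:

        sudo tcpdump -ni eth0 host 38.81.66.127 and tcp port 443

    If the three-way handshake completes (SYN, SYN-ACK, ACK) and data flows, the DENY entries are presumably cosmetic or mislabeled; if you only ever see outgoing SYNs with no replies, the modem really is dropping the traffic.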

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application in addition to their own database. The setup has each web application instance deployed on two different IIS boxes, both for load-balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering (as it doesn't provide storage-level failover: we don't have a shared storage box). Because it's a Windows setup, there are two Domain Controllers (we cheat: they're both Mac Minis, 17W each, which keeps our colo power costs low). Finally there's also an Exchange server (Mailbox, Hub Transport and Client Access). One of the SQL Servers also doubles up as an Exchange Hub Transport.

    Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), then there's about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs, etc.

    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008 when the cloud was taking off, I was reading up about the proper "cloud" services like Google App Engine, where you write in Python against Google's API and that's how they scale your application across servers, and you also use their database provider for scaling storage. Simple enough to understand. Then along came Amazon, and I understand how Amazon Storage works, but I'm not sure how Amazon Compute works: web application pages don't take much CPU time to compute, and how do you even quantify usage anyway?

    Finally, RackSpace got in on the act, and now I'm really confused. RackSpace advertises "Cloud" SQL Server 2012 available for about "$0.70 per hour". Going by how they advertise it, I thought the "hour" meant the sum of CPU time, IO blocking time, and maybe time spent transferring data - so for a low-intensity application that works out pretty cheap, then? Nope. I went to a sales chat window and spoke to one of their advisors. They told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at about $520 a month, which is ridiculously expensive for SQL Server; an SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack the costs of the host Windows server instance on top.

    From what I can tell, RackSpace's "Cloud" products seem like a cynical rebranding of an overpriced VPS service, just priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe the products available, but I think that's because Azure offers traditional shared webhosting in addition to their own APIs you can target for scalable applications.

    Read the article

  • Why is Git telling me "Your branch is ahead of 'origin/master' by 11 commits." and how do I get it t

    - by spilth
    I'm a Git newbie. I recently moved a Rails project from Subversion to Git. I followed the tutorial here: http://www.simplisticcomplexity.com/2008/03/05/cleanly-migrate-your-subversion-repository-to-a-git-repository/ I am also using unfuddle.com to store my code. I make changes on my Mac laptop on the train to/from work and then push them to Unfuddle when I have a network connection, using the following command:

        git push unfuddle master

    I use Capistrano for deployments and pull code from the Unfuddle repository using the master branch. Lately I've noticed the following message when I run "git status" on my laptop:

        # On branch master
        # Your branch is ahead of 'origin/master' by 11 commits.
        # nothing to commit (working directory clean)

    And I'm confused as to why. I thought my laptop was the origin... but I don't know whether the fact that I originally pulled from Subversion, or that I push to Unfuddle, is what's causing the message to show up. How can I:

    1. Find out where Git thinks 'origin/master' is?
    2. If it's somewhere else, turn my laptop into the 'origin/master'?
    3. Get this message to go away? It makes me think Git is unhappy about something.

    My Mac is running Git version 1.6.0.1. When I run git remote show origin as suggested by dbr, I get the following:

        ~/Projects/GeekFor/geekfor 10:47 AM $ git remote show origin
        fatal: '/Users/brian/Projects/GeekFor/gf/.git': unable to chdir or not a git archive
        fatal: The remote end hung up unexpectedly

    When I run git remote -v as suggested by Aristotle Pagaltzis, I get the following:

        ~/Projects/GeekFor/geekfor 10:33 AM $ git remote -v
        origin /Users/brian/Projects/GeekFor/gf/.git
        unfuddle [email protected]:spilth/geekfor.git

    Now, interestingly, I'm working on my project in the geekfor directory, but it says my origin is my local machine in the gf directory. I believe gf was the temporary directory I used when converting my project from Subversion to Git, and probably where I pushed to Unfuddle from. Then I believe I checked out a fresh copy from Unfuddle to the geekfor directory. So it looks like I should follow dbr's advice and do:

        git remote rm origin
        git remote add origin [email protected]:spilth/geekfor.git
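    For completeness, after re-pointing origin it usually helps to make the local master track the new remote. A rough sketch that should work even on a git this old - the remote URL is copied from the git remote -v output above:

        git remote rm origin
        git remote add origin [email protected]:spilth/geekfor.git
        git fetch origin
        git config branch.master.remote origin
        git config branch.master.merge refs/heads/master

    After that, git status compares against the Unfuddle copy of master, and the "ahead by N commits" count reflects genuinely unpushed work rather than a stale local remote.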

    Read the article

  • iPhone App IDs and Provisioning... Does App ID get used instead of provisioning ID if I decide to us

    - by Jann
    This is a question that has been bugging me for a while. I started my app (now submitted -- not yet approved) not wishing to get into the mess that is APNS (Push). I did the following: under iPhone Developer Center -> Provisioning Portal -> Provisioning, I created a Development and a Distribution Provisioning Profile. I installed both in Xcode. Everything hunky dory. The Development profile scares me a bit by expiring so soon (90 days), but I can remove it from the iPhone(s) and sign with a new one later. I tested using the Development profile, and later submitted the app by signing it with the Distribution profile. I then uploaded the Distribution-signed app to iTunes Connect (the App Store).

    Okay, I understand that much. Now, what I don't understand is this: now that I understand the theories and methods behind how Push works, I wish to add it to my app. I already went to iPhone Developer Center -> Provisioning Portal -> App IDs and created a Development Provisioning Profile and a Distribution Provisioning Profile there (push & in-app purchase enabled).

    Here is where it gets confusing to me. All the books and docs I have read say that I have to sign the app with this "App ID" provisioning profile (push-enabled) from now on. Does that mean I no longer ever use the previously created provisioning profiles? If I were to import these "App ID" provisioning profiles into Xcode, they would exist alongside my previously generated "non-push" profiles. ~/Library/Mobile Devices/Provisioning Profiles now has 2 files, one Devel and one Distrib. It will now have 4, even though for this app I will not use the "non-push" ones anymore, right? (Actually, since they are locked by using bundle codes and app IDs, I will never use them again if all further versions of this app use Push?)

    Confused. Can anyone enlighten me? Why not use the "App ID" profiles in the first place for everyone -- even if you are not gonna use push? It would keep things simpler. Should I only generate "push-enabled" profiles from now on -- even if I am not sure I am gonna use push (or, for that matter, in-app purchase)? Please give me some insight. I do not wanna do this wrong. Thanks! Jann

    Read the article

  • Qt static build with static mysql plugin confusion

    - by bdiloreto
    I have built a Qt application which uses the MySQL library, but I am confused by the documentation on static versus shared builds. The Qt documentation at http://doc.qt.nokia.com/4.7/deployment-windows.html says: "To deploy plugin-based applications we should use the shared library approach." And http://doc.qt.nokia.com/4.7/deployment.html says: "Static linking results in a stand-alone executable. The advantage is that you will only have a few files to deploy. The disadvantages are that the executables are large and with no flexibility and that you cannot deploy plugins. To deploy plugin-based applications, you can use the shared library approach."

    But http://doc.qt.nokia.com/latest/plugins-howto.html seems to say the opposite, giving directions on how to use static plugins: "Plugins can be linked statically against your application. If you build the static version of Qt, this is the only option for including Qt's predefined plugins. Using static plugins makes the deployment less error-prone, but has the disadvantage that no functionality from plugins can be added without a complete rebuild and redistribution of the application. ... To link statically against those plugins, you need to use the Q_IMPORT_PLUGIN() macro in your application and you need to add the required plugins to your build using QTPLUGIN."

    I want to build the Qt libraries statically (for easy deployment) and then use the static MySQL plugin. To do this, I did NOT use the binary distribution for Windows; instead, I started with the source package qt-everywhere-opensource-src-4.7.4. Is the following the correct way to do a static build so that I can use the static MySQL plugin?

        configure -static -debug-and-release -opensource -platform win32-msvc2010 -no-qt3support -no-webkit -no-script -plugin-sql-mysql -I C:\MySQL\include -L C:\MySQL\lib

    This should build the Qt libraries statically AND the static plugin to be linked at run-time, correct? I would NOT need to build the MySQL plugin from source separately, correct? If I were to substitute "-qt-sql-mysql" for "-plugin-sql-mysql" above, it would include the MySQL driver directly in the Qt static libraries, in which case I would NOT need to use the plugin at all, correct? Thanks for making me unconfused!
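    For reference, linking the SQL driver statically in Qt 4 generally involves two pieces; a minimal sketch, assuming a static Qt built with -plugin-sql-mysql as above:

        # In the application's .pro file:
        QT += sql
        CONFIG += static
        QTPLUGIN += qsqlmysql

        // In main.cpp:
        #include <QtPlugin>
        Q_IMPORT_PLUGIN(qsqlmysql)  // pulls the statically built MySQL driver into the binary

    With -plugin-sql-mysql the driver is built as a (static) plugin and the Q_IMPORT_PLUGIN()/QTPLUGIN step applies; with -qt-sql-mysql the driver is compiled into the QtSql library itself, in which case no plugin import should be needed.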

    Read the article

  • Log4r: logger inheritance, YAML configuration, alternatives?

    - by devlearn
    Hello, I'm pretty new to Ruby environments and I was looking for a nice logging framework to use in my Ruby and Rails applications. In my previous experiences I have successfully used log4j and log4p (the Perl port) and was expecting the same level of usability (and maturity) with log4r. However, I must say that there are a number of things that are not clear at all in the log4r framework.

    1. Logger inheritance

    Logger inheritance does not seem to be managed at all! If I declare a logger named 'myapp' and then try to get a logger named 'myapp::engine', the lookup ends with a NameError. I would expect the framework to fall back through the naming scheme and return the 'myapp' logger.

    Q1: Of course I can work around this and manage the names myself with a lookup method, but is there a cleaner way to do this without any extra coding?

    2. YAML configuration

    The second thing that confuses me is the YAML configuration. On the log4r site there is literally no information about this system; the doc links forward to missing pages, so all the info I can find is contained in the examples directory of the gem. I was pretty confused by the fact that the YAML configuration must contain the pre_config section and that I need to define my own levels. If I remove the pre_config section, or replace all the "custom" levels with the standard ones (debug, info, warn, fatal), the example throws the following error:

        log4r/yamlconfigurator.rb:68:in `decode_yaml': Log level must be in 0..7 (ArgumentError)

    So there seems to be no way of using a simple file where we only declare the loggers and appenders for the framework.

    Q2: I really think I must have missed something and that there must be a way of providing a simple YAML config file. Do you have any examples of such usage?

    3. Variable substitution in XML files

    Q3: The YAML configuration system seems to provide such a feature, but I was unable to find a similar feature with XML files. Any ideas?

    4. Alternatives?

    I must say that I'm very disappointed by the feature level and the maturity of log4r compared to log4j and the other log4j ports. I came to this framework with a solid background in logging APIs in other languages and find myself working around it in all kinds of ways just to get 'basic things' running in a "real world application". By that I mean a complex application composed of several gems, console/scripting apps, and a Rails web front end, where the configuration must be shared and where we make intensive use of namespaces and inheritance. I've run several searches trying to find something more suitable or mature, but did not find anything similar.

    Q4: Do you guys know any (serious) alternatives to the log4r framework that could be used in an enterprise-class app?

    Thanks for reading all of this! I'd really appreciate any pointers. Kind regards,
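    On Q1, a small fallback lookup is the usual workaround; a rough sketch, where lookup_logger is a hypothetical helper name and which assumes Log4r::Logger[] returns nil (rather than raising) for undefined loggers:

        require 'log4r'

        # Walk a '::'-separated name up towards the root until a defined logger is found.
        def lookup_logger(name)
          parts = name.split('::')
          until parts.empty?
            logger = Log4r::Logger[parts.join('::')]
            return logger if logger
            parts.pop
          end
          Log4r::Logger.root  # nothing matched; fall back to the root logger
        end

        lookup_logger('myapp::engine')  # returns the 'myapp' logger if 'myapp::engine' is undefined

    Which is, of course, exactly the boilerplate that log4j users expect the framework to handle for them - the complaint above stands.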

    Read the article

  • Apache Commons FileUpload does not behave as expected

    - by Harry Pham
    I just want to assure you guys that before I posted this, I read a couple of similar posts, but none solved my problem. So here it is. I use an Ajax taglib for my form, since it won't refresh my screen:

        <%@ taglib uri="WEB-INF/taglib.tld" prefix="a" %>
        <a:AjaxUpload action="UploadServlet">
            <input type="file">
            <input type="submit" value="Upload">
        </a:AjaxUpload>

    If this is at all confusing, you can think of the above code as creating the following form, but sending the request asynchronously:

        <form action="UploadServlet" enctype="multipart/form-data" method="post">
            <input type="file">
            <input type="submit">
        </form>

    My first question is: if I add name="file" inside the <input> to make it <input type="file" name="file">, my servlet will throw an exception. It won't if I take it out. Here is the exception:

        SEVERE: Servlet.service() for servlet UploadServlet threw exception
        javax.servlet.ServletException: Servlet execution threw an exception
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:313)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Thread.java:637)

    My second question is: on the server side, I check the request to see if it isMultipartContent, and it is. But when I parseRequest() and store the result into List<FileItem> items, items.size() is 0. Here is my servlet code:

        FileItemFactory factory = new DiskFileItemFactory();
        ServletFileUpload upload = new ServletFileUpload(factory);
        List<FileItem> items = null;
        if (request.getContentType() != null && request.getContentType().toLowerCase().indexOf("multipart/form-data") > -1) {
            if (upload.isMultipartContent(request)) {
                items = upload.parseRequest(request); // It got to here, but items.size() is 0. Why?????????????
            }
        }
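    One way to see what the taglib actually sends is Commons FileUpload's streaming API, which lists each multipart part as it arrives. A diagnostic sketch, assuming commons-fileupload 1.2-era APIs and the usual org.apache.commons.fileupload imports:

        // Inside doPost(HttpServletRequest request, HttpServletResponse response);
        // getItemIterator() throws FileUploadException/IOException, so handle or declare them.
        ServletFileUpload upload = new ServletFileUpload(); // no factory needed for streaming
        FileItemIterator iter = upload.getItemIterator(request);
        while (iter.hasNext()) {
            FileItemStream item = iter.next();
            // Prints every part that actually reaches the servlet, named or not
            System.out.println("field=" + item.getFieldName()
                    + " file=" + item.getName()
                    + " formField=" + item.isFormField());
        }

    If this loop prints nothing, the Ajax taglib is not actually posting multipart parts (or another filter has already consumed the request body), which would explain the empty parseRequest() result regardless of the name attribute.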

    Read the article

< Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >