Search Results



  • GlusterFS on VMware ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which run Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could be a good choice. Purpose: I have about two dozen VM nodes with different configurations on 2 physical nodes, each with the following configuration: 16-core Intel Xeon, 1 TB HDD, 48 GB RAM. As I said, each physical server has about 1 TB of disk (and I can add more if I want), so for now I have 2 TB of disk space available, distributed among the roughly two dozen VM nodes that live on them. Some of them, being application and management servers, have plenty of free disk space, which I want to use for heavy storage that I could not design on a single VM node. This way, with storage distributed between dozens of VM nodes and 2 or more physical nodes, I also get some sort of backup. I do not mind if data gets stored redundantly, but as far as I can tell, an individual VM node will not be able to store all of the data: a complete data set of, say, 100 GB would exceed a VM disk size of 70 GB, and the VM also has system and program files on it. I need a suggestion: will GlusterFS be the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if in doing so the data is stored redundantly, I am okay with that because it gives me data security.
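
    For reference, a minimal sketch of what a replicated GlusterFS volume spanning two of the VM nodes could look like; the hostnames, brick paths and replica count are placeholders, not part of the original setup:

        # on each node: install glusterfs-server, then from vm-node1:
        gluster peer probe vm-node2
        gluster volume create shared replica 2 \
            vm-node1:/export/brick1 vm-node2:/export/brick1
        gluster volume start shared
        # any node (or client) can then mount the volume:
        mount -t glusterfs vm-node1:/shared /mnt/shared

    With replica 2, every file is stored on two bricks, which matches the "redundant is okay" requirement at the cost of halving usable capacity.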

    Read the article

  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to set up an Apache 2 web server for a few of our clients. I want each client's web content to be under their home directory (UserDir in httpd.conf, right?) for the static HTML sites, and I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps, and I'm under the impression I'll want to look at suPHP as well. So basically I want to look like a small version of a shared web hosting company. Considering how common those are, I thought I'd easily find a nice, current how-to guide on setting this all up, but so far I've had very little luck; I suspect my search words are off. So the questions (feel free to answer any or all):
        Anyone have some solid links to current/modern guides that would help me set this all up? (No, the Apache documentation site is not a guide ;-)
        Since I have a mix of static sites and PHP apps, do I want/need both suexec and suPHP installed? If so, does that introduce any challenges I should be aware of?
        Should I be looking at other options instead of suexec and suPHP?
    I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.
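
    For what it's worth, a per-client virtual host in this kind of setup often ends up looking roughly like the sketch below; the user name, paths, and module availability (mod_suexec and mod_suphp both installed) are assumptions:

        <VirtualHost *:80>
            ServerName client1.example.com
            DocumentRoot /home/client1/public_html
            # CGI/SSI run as the client via suexec
            SuexecUserGroup client1 client1
            # PHP runs as the client via suPHP
            suPHP_Engine on
            suPHP_UserGroup client1 client1
        </VirtualHost>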

    Read the article

  • Picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself), I got a motherboard that had really poor Linux support for a long time, specifically the audio: I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around. I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg, and I wasn't able to see much of a difference from regular desktop motherboards, other than that most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well; same question there, what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are:
        Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card. I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server?
        Where can I find out about Linux support for the components on this board: "Intel ICH10R", "Realtek ALC889", "Marvell 88E8056"? (See the sketch below.)
        I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at some point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this?
    The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.
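
    One hedged way to answer the driver question before buying: each of those chips is normally handled by a specific kernel module, so checking that the module exists on a current kernel gives a quick read on support. The module names below are my best guess for these parts:

        modinfo ahci            # Intel ICH10R SATA controller (AHCI mode)
        modinfo snd-hda-intel   # HD-audio controller hosting the Realtek ALC889 codec
        modinfo sky2            # Marvell 88E8056 gigabit Ethernet
        # on a running board, `lspci -nnk` shows which module actually claimed each device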

    Read the article

  • ffmpeg rotate mp4 90°

    - by shox
    Hi, can I rotate (+ save/re-encode) an .mp4 with ffmpeg? The only thing I found was on the mailing list, saying -vfilters "rotate=90", but ffmpeg says there is no -vfilters. Tried -vf; it says there is no rotate. If I try to do it in VLC, it simply does not rotate and kills the audio (did the VLC encoding ever work? Every single video I throw at it gets messed up in some way). I'm on a Mac and don't have iWork. Any ideas? Thanks.

        FFmpeg version git-svn-r23607, Copyright (c) 2000-2010 the FFmpeg developers
        built on Jun 14 2010 23:52:55 with gcc 4.2.1 (Apple Inc. build 5659)
        configuration:
        libavutil     50.19. 0 / 50.19. 0
        libavcodec    52.76. 0 / 52.76. 0
        libavformat   52.68. 0 / 52.68. 0
        libavdevice   52. 2. 0 / 52. 2. 0
        libavfilter    1.20. 0 /  1.20. 0
        libswscale     0.11. 0 /  0.11. 0
        Hyper fast Audio and Video encoder
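
    Not an answer from the original thread, but on builds where libavfilter's transpose filter is available, the usual incantation is something like:

        # 1 = rotate 90° clockwise; this re-encodes, so pick codecs/bitrate to taste
        ffmpeg -i input.mp4 -vf "transpose=1" output.mp4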

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I have been testing with a few posts I found, but I was stuck on this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the module dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and re-running depmod -a. I now don't get the dependencies error, but I get the following instead, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source-IP and destination-port checks that I would like to have available. I have heard that there are no such issues with the CentOS template, so I understand this is to do with the VM config. Any help would be greatly appreciated.
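
    A hedged sketch of the usual fix, on the assumption that the modules have to come from the OpenVZ hardware node (containers cannot load kernel modules themselves); the container ID 101 is a placeholder:

        # on the hardware node, not inside the container:
        for m in ip_tables iptable_filter iptable_mangle iptable_nat \
                 ipt_state ipt_REJECT ipt_LOG ip_conntrack; do
            modprobe "$m"
        done
        # grant the container access to the iptables modules and restart it:
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_state ipt_REJECT ipt_LOG" --save
        vzctl restart 101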

    Read the article

  • RHEL 5.3 Kickstart - How do I specify the location of individual packages in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO, and I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk: Client and Workstation. The packages that are physically located under the Client folder install fine; it cannot find those under the Workstation folder, such as doxygen and subversion, complaining that the packages do not exist. Is there a way to specify the individual package location (see the repo line sketched below)?

        # -----------------------------------------------------------------------------
        # P A C K A G E S
        # -----------------------------------------------------------------------------
        %packages
        @gnome-desktop
        @core
        @base
        @base-x
        @printing
        @development-tools
        emacs
        kexec-tools
        fipscheck
        xorg-x11-server-Xnest
        xorg-x11-server-Xvfb
        # Packages located in Workstation folder *** install cannot find any of these ??
        bison
        doxygen
        gcc-c++
        subversion
        zlib-devel

    Thanks in advance, -Ed
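
    In case it helps others with the same symptom: RHEL 5 kickstart accepts extra repo lines in the command section above %packages, so one plausible fix is pointing a second repo at the Workstation directory of the install tree (the URL below is a placeholder):

        # in ks.cfg, in the command section above %packages
        repo --name=Workstation --baseurl=http://webserver.example.com/rhel53/Workstation/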

    Read the article

  • EEE Box "Dropbox" server 24/7

    - by microspino
    I'd like to create a mini Dropbox and print server for a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better).
        Dropbox mini server: I mean I will have a 90 GB Dropbox folder on every computer on my LAN syncing with it, and the one on it syncing to the web.
        Print server: I have a Samsung A4 small laser printer, an HP 500 DesignJet plotter, a Samsung multifunction machine (fax/print/scan/copy), a modern HP color A3 DeskJet printer and an HP LaserJet A4 color printer. All of them need to be connected to this mini server.
        Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to let people use it from/to their computers through the mini server.
    I was thinking of a recent EEE Box machine because I have heard good things about Atom CPUs and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be:
        - if you have something similar running for a long time
        - if you disagree with this hardware choice and would suggest some other device
        - if you see any issues with my printing setup
        - anything else ;)
    My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.

    Read the article

  • Tell VLC where to look for plugins.dat file

    - by puk
    I am trying to build VLC from source (I will include the installation script below), but when I try to run vlc I get the following error:

        main libvlc warning: cannot read /home/user/downloads/vlc3/vlc/src/.libs/vlc/plugins/plugins.dat (No such file or directory)

    Why is it even looking in that nonexistent directory? The plugins.dat file is in /usr/lib/vlc/plugins/. I tried

        export VLC_PLUGIN_PATH=/usr/lib/vlc/plugins/

    but it still looks in that nonexistent path. I can create a symbolic link, but that is a terrible way to do it: if in 6 months I delete my downloads folder, all of a sudden my VLC will break. Here is the script I am running to install:

        ./configure --enable-rpi-omxil --enable-dvbpsi --enable-x264 --enable-xcb --with-x \
            --enable-xvideo --enable-sdl --enable-avcodec --enable-avformat --enable-swscale \
            --enable-mad --enable-a52 --enable-libmpeg2 --enable-dvdnav --enable-faad \
            --enable-vorbis --enable-ogg --enable-theora --enable-mkv --enable-freetype \
            --enable-fribidi --enable-speex --enable-flac --enable-live555 --enable-caca \
            --enable-skins2 --enable-alsa --enable-ncurses --enable-debug --enable-lirc \
            --enable-shout --enable-taglib --enable-vcdx --enable-realrtsp --enable-svg \
            --enable-dvdread --enable-dc1394 --enable-twolame --enable-dirac --enable-aa \
            --enable-jack --enable-bluray --enable-opencv --enable-sftp --enable-pulse \
            --enable-projectm --enable-vsxu --enable-atmo --enable-glspectrum \
            '--with-extra-libs=/usr/local/lib' '--with-extra-includes=/usr/local/include' \
            '--x-libraries=/usr/local/lib' '--x-includes=/usr/local/include' \
            '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/'

    EDIT: I am using the following version:

        VLC media player 2.2.0-git Weatherwax (revision 2.1.0-git-1168-g5804dd1)

    and the --plugin-path option is no longer supported.
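
    A sketch of the workaround I would try, under the assumption that the build was installed with --prefix=/usr/local and that the cache generator shipped with it; paths may differ on your system:

        # regenerate the plugin cache for the installed plugin tree
        sudo /usr/local/lib/vlc/vlc-cache-gen /usr/local/lib/vlc/plugins
        # then run the installed binary instead of the one in the build tree
        /usr/local/bin/vlc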

    Read the article

  • Tomcat deployment overwrites context.xml

    - by Kristoffer
    Hi, I'm pretty new to Tomcat in general, so please point out if I got anything wrong. My question is about updates to already-deployed apps using the Tomcat manager. First things first: I'm using META-INF/context.xml to store connection info for the database connections, so this file is unique to every server the application is deployed to. I'm not sure this is optimal, but it's the only way I know. So when updating the application, it's important that this file doesn't get modified, because I don't want to have to go in and remake all the changes every time I update my app. For updating I'm using the Tomcat manager, and I've tried different approaches, but everything seems to build on the process of undeploying and then deploying the new version. This way, context.xml gets removed or replaced by an empty context.xml file. So my question is basically: how do I update a running webapp while leaving context.xml untouched? By the way, I'm running Tomcat 6.0.24.
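
    One common approach, sketched below with placeholder resource names, is to keep the per-server context in $CATALINA_BASE/conf/Catalina/localhost/myapp.xml (named after the WAR) instead of inside the WAR's META-INF; it is worth testing against your manager workflow, since a full undeploy can remove that descriptor too:

        <!-- conf/Catalina/localhost/myapp.xml -->
        <Context>
            <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
                      username="dbuser" password="dbpass"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/mydb"/>
        </Context>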

    Read the article

  • Networking lost after update from Debian Wheezy to Jessie

    - by Charaf
    I am currently setting up a virtual machine for development purposes. I did a big part of this configuration under Wheezy, but I need some debs that are only available in Jessie. So I updated sources.list and did a dist-upgrade. Everything went well, but after the reboot I noticed that I had lost all networking: repositories are unreachable, and a simple ping google.fr returns nothing. What can I do to quickly restore networking so that I can continue my work? I have a poor connection and cannot afford to download the whole set of install DVDs.

        root@vm~# ifconfig
        lo    Link encap:Boucle locale
              inet adr:127.0.0.1 Masque:255.0.0.0
              adr inet6: ::1/128 Scope:Hôte
              UP LOOPBACK RUNNING MTU:65536 Metric:1
              RX packets:452 errors:0 dropped:0 overruns:0 frame:0
              TX packets:452 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 lg file transmission:0
              RX bytes:164238 (160.3 KiB) TX bytes:164238 (160.3 KiB)
        root@vm~#

    Note that only the loopback interface shows up. I am running VMware 1.0.1 build 1379776 and the latest update of Jessie (Debian kernel 3.14.4-1). Please help. Thanks.
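
    A hedged first check, assuming the upgrade renamed the NIC (udev's persistent-net rules often do this) rather than removing the driver; the interface names below are examples:

        ip link                        # list interfaces; the NIC may now be eth1 instead of eth0
        # /etc/network/interfaces - minimal DHCP stanza for whatever name shows up:
        #     auto eth0
        #     iface eth0 inet dhcp
        ifup eth0                      # bring it up without rebooting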

    Read the article

  • GlassFish v3: Security related updates + Repository/Publisher?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html). GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates. The interesting thing is that I haven't been able to find any description of the contents of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly one answering the question: am I supposed to use the dev repository for security updates, or does it contain unstable updates? (The name suggests unstable updates, but version numbers like "3.0.1,0-11:20100331T082227Z" leave me guessing. The build is more than a week old, so it's obviously not "nightly" or "weekly", but what is it?) Where do I get security updates from, then? Or are there simply no security updates yet? Asking on the GlassFish forum resulted in 56 views but 0 answers.

    Read the article

  • How to install RMagick RubyGem on Mac OS X 10.6 Snow Leopard?

    - by misbehavens
    I am getting this error while trying to install RMagick:

        $ sudo gem install rmagick
        Building native extensions. This could take a while...
        ERROR: Error installing rmagick:
        ERROR: Failed to build gem native extension.
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... no
        Can't install RMagick 2.13.1. Can't find Magick-config in
        /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin:~/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.
        Provided configuration options:
            --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include
            --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog
            --srcdir=. --curdir --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1 for inspection.
        Results logged to /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out

    How can I install the RMagick RubyGem on Snow Leopard?
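
    The error itself says Magick-config is missing, and that ships with ImageMagick rather than with the gem; a minimal sketch of the usual fix, assuming Homebrew or MacPorts is available:

        brew install imagemagick      # or: sudo port install ImageMagick
        which Magick-config           # should now resolve on your PATH
        sudo gem install rmagick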

    Read the article

  • Stop Picasa (Mac) from scanning my hard drive

    - by Bodyscanner
    I want to use the Picasa desktop app instead of the tedious and clunky web interface to share a couple of photos. Every time I launch Picasa, it proceeds to open an annoying pop-up/tool-tip which flicks through every file on my HD, using up to 95% of CPU. I don't want this, so I click the X. It appears again. I try to drag it somewhere less annoying on screen, but it pings back. I look in the preferences for an option to turn it off. I give up and quit the app until a new build comes out, which I download, and then repeat the above. WTF?! I understand Google can't be as cool as Apple - iPhoto isn't perfect by any means, but at least it looks nice and "just works". I want to launch Picasa, not have it go through everything, not have thousands of random pics and HD cruft on display in the list, and then perhaps drag in a few photos and upload them. Any idea if that is possible? </rant>

    Read the article

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast-access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but a single archive file is a requirement, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing the archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:
        listing archive content take time linear in the file count inside the archive, but with a very small constant (e.g., if a list of all the files is included at the head of the archive, it could be very fast);
        extraction of a target file/directory (filesystem permitting) take time linear in the target size (e.g., if I'm extracting a 2 MB PDF file from a 40 GB archive, I'd really like it to take less than a few minutes... if not seconds).
    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g., a tree structure).
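
    Not a full solution, but a sidecar index is a cheap partial workaround worth sketching: listing becomes instant, although extraction of a single member still scans the archive linearly (the file names below are placeholders):

        tar cf big.tar /data                    # build the archive as usual
        tar tvf big.tar > big.tar.idx           # one-time pass: external index
        grep 'report.pdf' big.tar.idx           # instant "listing" via the index
        tar xf big.tar data/docs/report.pdf     # extraction still reads up to the member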

    Read the article

  • RemoteApp .rdp embed creds?

    - by Chris_K
    Windows 2008 R2 server running Remote Desktop Services (what we used to call Terminal Services back in the olden days). This server is the entry point into a hosted application - you could call it Software as a Service, I suppose - and we have 3rd-party clients connecting to use it. We use RemoteApp Manager to build RemoteApp .rdp shortcuts to distribute to client workstations. These workstations are not in the same domain as the RDS server, and there is no trust relationship between the domains (nor will there be). There is a tightly controlled site-to-site VPN between the workstations and the RDS server, so we're quite confident we have access to the server locked down. The RemoteApp being run is an ERP application with its own authentication scheme. The issue? I'm trying to avoid the need to create AD logins for every end user connecting to the RemoteApp server. In fact, since we're doing a RemoteApp and they have to authenticate to that app anyway, I'd rather not prompt them at all for AD creds. I certainly don't want them caught up in managing AD passwords (and periodic expirations) for accounts they only use to get to their ERP login. However, I can't figure out how to embed AD creds in a RemoteApp .rdp file, and I don't really want to turn off all authentication on the RDS server at that level. Any good options? My goal is to make this as seamless as possible for the end users. Clarifying questions are welcome.
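
    For context, the .rdp format does let you pre-fill the user name, and that part travels fine between machines; a saved password does not, because the "password 51:b:" blob is DPAPI-encrypted and only decrypts on the user/machine that created it. A sketch with placeholder values:

        username:s:RDSHOST\svc_erpuser
        promptcredentialonce:i:1

    This is why seamless no-prompt logons across untrusted domains usually end up needing either shared accounts on the RDS server or a change to the server's authentication settings, rather than creds embedded in the file itself.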

    Read the article

  • JBoss7 load balancing with mod_proxy_balancer - session not working

    - by Phil P.
    I am trying to set up mod_proxy_balancer to route requests to 2 JBoss 7 servers. For the time being I am testing this setup on my local machine, using the following config in httpd.conf:

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Deny from all
        </Proxy>
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
        <Proxy balancer://mycluster>
            BalancerMember http://localhost:8080 route=node1
            BalancerMember http://localhost:8081 route=node2
            Order allow,deny
            Allow from all
        </Proxy>

    and in the standalone.xml file of each JBoss I have defined the jvmRoute system property:

        <system-properties>
            <property name="jvmRoute" value="node1"/>
        </system-properties>

    At http://localhost/myapp the application is accessible, but the Java session is not built up correctly; consequently, authentication is not working. The funny thing is that everything works if I turn off one JBoss instance. As I have tried a couple of settings already, I am thankful for any further suggestions.
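
    One thing worth trying (an assumption on my part, not something from the original post): in JBoss AS 7 the session-cookie route is controlled by the instance-id attribute on the web subsystem, so setting jvmRoute as a plain system property may never reach the session manager. Roughly:

        <subsystem xmlns="urn:jboss:domain:web:1.1"
                   default-virtual-server="default-host"
                   instance-id="node1" native="false">
            ...
        </subsystem>

    With instance-id set, the JSESSIONID value ends in ".node1", which is what stickysession=JSESSIONID needs to keep a session pinned to one member.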

    Read the article

  • Laptop will not boot

    - by WillumMaguire
    This is a Dell Studio 1558 laptop. Something is wrong with the charger such that it won't charge the laptop, but the laptop can turn on and operate properly as long as it is attached. It has been like this for a while, but that's not the problem. My problem is that as of yesterday, it takes several minutes to get past the "Dell" startup logo (where it says "F2 Setup" and "F12 Boot Options"). After it gets past, it beeps as usual to tell me about the charger and gives me the F2/F12 options and F1 to continue as normal. I can press F12 to get into boot options and load into my live USB BackTrack 5 ISO, but after "startx" it just stays at a black screen. I can also access BIOS setup, but see nothing that would help the problem. When I boot to the HDD, it gives me this:

        Intel UNDI, PXE-2.1 (build 083)
        Realtek PCIe GBE Family Controller Series V.2.29 (06/30/09)
        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting PXE ROM
        Operating System not found

    Also, pressing F8 gives me the same results as booting normally. It is running Windows 7 Ultimate, with a dual-core Intel i3 @ 2.27 GHz and 4 GB RAM. I think there is an issue with the HDD, as "Operating System not found" would lead me to believe. Is this a fixable problem?

    Read the article

  • Problem building PHP extension module

    - by tixrus
    I'm trying to get GD into my PHP. I compiled it, made php.ini point to it, restarted Apache, etc. But no GD. In the Apache error log it says:

        PHP Warning: PHP Startup: gd: Unable to initialize module
        Module compiled with module API=20060613
        PHP compiled with module API=20090115
        These options need to match
        in Unknown on line 0

    A bit of googling says I should not use the phpize I have before configuring and making the extension; I should use a new one called phpize5. I surely don't have any such thing, unless it's packed up inside something else in my PHP 5.3 distribution. Where do you get it? On Ubuntu I could apparently just run sudo apt-get install php-dev and it would appear by magic - at least that's what the webpage said. Unfortunately, I am running Mac OS X Leopard. How can I build this GD module so that it will match the API number in my PHP?
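
    A hedged sketch of the usual fix for this API mismatch: rebuild the extension with the phpize/php-config pair that belong to the PHP actually being run. The paths below are assumptions; `which php` and `php-config --prefix` reveal the real ones:

        cd gd-extension-source            # wherever the extension source lives
        /usr/local/php5/bin/phpize        # must match the running PHP, not an older install
        ./configure --with-php-config=/usr/local/php5/bin/php-config
        make && sudo make install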

    Read the article

  • How do I find funny pictures?

    - by Hanno Fietz
    No, not lolcats. And I'm not really looking for a specific site, either. I have often wished that I had some funny picture to illustrate a presentation, a website, a post, an email, or something else. Google image search and stock photo services have hardly ever helped me, although that may be because I'm doing something wrong. Jeff Atwood seems to have no problem finding funny pictures for his codinghorror and stackoverflow blogs, as well as for the error messages on the trilogy sites; one of my favourites was this elephant. Other bloggers also seem to be quite good at it. I'm wondering if I simply lack the creativity or if there are sources or methods I don't know about. I could think of the following ways to get pictures, but I'm not sure whether this is really "how they do it":
        keep a collection of pictures that you stumbled upon and liked (takes quite some time to build up a proper library); when you need a picture, there's one in there
        maybe have pictures on paper, too, like from magazines or ads
        when you are looking for a picture, search online (Flickr, Google, stock photos) - this has never really worked for me, I don't know why
        produce the pictures yourself, i.e. keep a good library of source material or find some online, and apply some creativity and suitable software
    I could imagine that this could work well once you have the practice.

    Read the article

  • Slow parity initialization of RAID-5 array on HP Smart Array P411 controller

    - by Rob Nicholson
    On 29 October 2011, I built a RAID-5 array using 4 x 146.8 GB Seagate SAS ST3146855SS drives running at 15k rpm, connected to a PowerEdge R515 with an HP Smart Array P411 controller running Windows 2008 (so nothing particularly unusual). I know that parity initialization of a RAID-5 array can take some time, but it's still running after 2.5 weeks, which seems a little unusual. I'd previously built another array on the same controller using 4 x 2 TB SATA-2 drives, and that did take a while to complete, but a) I'm sure it was less than 2.5 weeks, b) that array was ~12 times bigger, and c) during initialization the percentage slowly increased each day. At the moment, the status display for this new second array simply says "Parity Initialization Status: In Progress", and it has said that since the start. It's this lack of change in the status that worries me the most - it feels like it's not actually doing anything. Do you think something has gone wrong, or am I being impatient and, for some reason, a status that never advances is normal? I kind of expected a much smaller array on faster drives (15k SAS versus 7.5k SATA-2) to build in a few days. This is our primary SAN running StarWind, so my "have a play" options are very limited. This second array is currently in use for one small virtual disk, so I could shut the target machine down, move the virtual disk to another drive and try rebuilding.
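
    If HP's CLI is installed, it reports per-logical-drive parity progress, which would at least show whether anything is moving; the slot number below is a placeholder (`hpacucli ctrl all show` lists the real one):

        hpacucli ctrl slot=1 logicaldrive all show detail | grep -i -B2 -A2 parity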

    Read the article

  • JBoss 5 on AIX 5.3

    - by jess
    I am very much a newbie to AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. Please see the configuration and system settings below.

        AIX system configuration:
            OS level: 5.3.9.0 (oslevel -g)
            Physical memory: 24 GB (svmon -G)
            Page space: 4 GB (lsps -s)
            Processors: 3 cores, PowerPC_POWER6, 4704 MHz (prtconf | grep Processor)
        Java version: JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)
        JBoss configuration: JBoss 5.1 / JBoss ESB 4.11, HornetQ messaging with consumer flow control
        Java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behavior where JBoss freezes without any errors logged; the server log also stops without any further trace. We are also unable to get a thread dump (kill -3) at that point; nothing is generated (kill -3 xxxxx works under normal circumstances). The only option available to us was to restart the JBoss server, and it seems all messages that were in the queues during the freeze get processed after restarting. We tried tweaking some HornetQ settings, thinking the issue was there ("HornetQ Stuck By Default"), but we have had no luck and are unable to isolate the issue. We are looking at a tool like nmon for monitoring, but have no clue whether that is good enough. Please provide some pointers for investigating this issue. Thanks.
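
    One avenue worth checking on the IBM JDK (an assumption on my part, since the post does not say which dump agents are configured): confirm that a dump agent is registered for the user signal, and consider adding a system dump agent up front so a wedged JVM still leaves evidence:

        java -Xdump:what                                  # list the configured dump agents
        # javacore files land in the JVM's working directory by default;
        # a system dump on kill -3 can be requested at startup:
        JAVA_OPTS="$JAVA_OPTS -Xdump:system:events=user"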

    Read the article

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem, and it's failing with the error below:

        $ sudo gem install sqlite3
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal' or
        'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.
        Provided configuration options:
            --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include
            --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog
            --srcdir=. --curdir --ruby=/usr/bin/ruby
            --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib
        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight! From a brand-new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite   (or sqlite3 -- same result)

    See breakage above.
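
    If sqlite-devel really is installed, a hedged next step is to hand the header and 64-bit library locations to the gem build explicitly (the `--` separator passes the flags through to extconf.rb; /usr/lib64 matches this 64-bit AMI):

        sudo yum install sqlite-devel
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
                                    --with-sqlite3-lib=/usr/lib64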

    Read the article

  • EEE PC Dropbox server running 24/7

    - by microspino
    I'd like to create a mini Dropbox and print server for a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better).
        Dropbox mini server: I mean I will have a 90 GB Dropbox folder on every computer on my LAN syncing with it, and the one on it syncing to the web.
        Print server: I have a Samsung SCX-4521F (fax/print/scan/copy), a Samsung ML-2010, an HP LaserJet P1006, an HP Color LaserJet CP1215, an HP OfficeJet Pro K8600 and an HP DesignJet 500. All of them are now connected using little print servers, and I want to get rid of those and hook everything to this mini server. The fax/print/scan/copy machine needs to stay connected to a PC so that I can use the software that comes with it; the mini server would save me on this too.
        Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to let people use it from/to their computers through the mini server.
    I was thinking of a recent EEE Box machine because I have heard good things about Atom CPUs and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be:
        - if you have something similar running for a long time
        - if you disagree with this hardware choice and would suggest some other device
        - if you see any issues with my printing setup
        - anything else ;)
    My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.

    Read the article

  • ping/ssh networking problem with server from one particular Windows XP laptop

    - by user47650
    I am experiencing an odd problem connecting to one specific server at my data centre from my laptop. Basically, the server is accessible from other machines in my house, but not from one particular laptop running Windows XP. I have set up tcpdump on the server and Wireshark on the laptop, and I can see ping echo request and reply packets that actually make it back to Wireshark on the laptop, but nothing shows in the ping console output:

        $ ping xxx.55.32.255
        Pinging xxx.55.32.255 with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for xxx.55.32.255:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

    But I can see from Wireshark on the laptop that the ping reply gets back:

        No. Time     Source        Destination   Protocol Info
        46  3.964474 192.168.1.64  xxx.55.32.255 ICMP     Echo (ping) request
            Frame 46 (74 bytes on wire, 74 bytes captured)
            Ethernet II, Src: Intel_31:d3:01 (00:19:d2:42:c3:01), Dst: ThomsonT_01:b8:2c (00:14:7f:02:b9:3c)
            Internet Protocol, Src: 192.168.1.64 (192.168.1.64), Dst: xxx.55.32.255 (xxx.55.32.255)
            Internet Control Message Protocol
        No. Time     Source        Destination   Protocol Info
        48  4.119060 xxx.55.32.255 192.168.1.64  ICMP     Echo (ping) reply
            Frame 48 (74 bytes on wire, 74 bytes captured)
            Ethernet II, Src: ThomsonT_01:b8:2c (00:14:7f:01:b8:2c), Dst: Intel_21:c3:01 (10:20:d2:31:c3:01)
            Internet Protocol, Src: xxx.55.32.255 (xxx.55.32.255), Dst: 192.168.1.64 (192.168.1.64)
            Internet Control Message Protocol

    Obviously I have disabled the Windows firewall, and there is nothing in the Windows event log. There is nothing else obviously strange about the server; it is the same build as other servers that I can connect to fine.

    Read the article

  • How do I find information about a particular trojan? "W32/Smalltroj.XVGT", as reported by Norman

    - by Lasse V. Karlsen
    I tried checking the Norman antivirus page, Virus-descriptions, but sadly Norman seems to have intentionally obfuscated their search results (I tried clicking on W, and it seems they just list viruses with a W somewhere in the description, instead of the more typical listing of all viruses whose names start with W). Is there a common virus list somewhere, or is it as I suspect: every antivirus manufacturer is free to come up with its own identification tags for each virus? Several "vshost32.exe" files, related to Microsoft Visual Studio 2008, have been quarantined on our server today, probably related to a test deployment of some internal software. Some developer machines that have grabbed the latest version of our program have also had the same files quarantined. Now, these files should not have been deployed in the first place, so I'll be looking into that, but whenever any developer now builds a program locally and attempts to debug, the same file is placed in the build output directory and promptly quarantined. Does anyone have any clues as to how I can go about verifying this before I pointedly ask the antivirus software to go take a hike on this particular virus? Edit: I've copied one of the quarantined files manually to a machine over the network that doesn't have antivirus installed, and compared the file on that machine with a local copy (on that machine) of the vshost32.exe template file; they're bit-for-bit identical. I guess this is a false positive. I still would like to know if it would be possible for me to verify this in any other way, though, since next time such a trojan might be reported in a compiled file that we won't have a pristine copy of.

    Read the article
