Search Results

Search found 12766 results on 511 pages for 'little b'.


  • New Dell PE R710 - Storage Question

    - by rihatum
    Hi all, I have a Dell PE R710, received from Dell in the following state: Windows Disk 0, 1800GB (Volumes C & D); Windows Disk 1, 526GB (Volume E); PERC 6/i integrated RAID controller; 6 x 500GB Nearline SAS 7200RPM HDDs in a RAID 5 configuration with two virtual disks. I have installed Dell OpenManage and it shows the following: Virtual Disk 0 - State: Background Initialization (7%); Virtual Disk 1 - State: Background Initialization (25%). Now when I click on Virtual Disk 0 it shows me all 6 disks, and the same happens when I click on Virtual Disk 1: it displays all 6 disks. But when I click on Storage, PERC 6/i, Connector 0, I get 4 physical disks with the following numbers: Physical Disk 0:0:0, Physical Disk 0:0:1, Physical Disk 0:0:2, Physical Disk 0:0:3. When I click on Storage, PERC 6/i, Connector 1, I get 2 physical disks listed in the following way: Physical Disk 1:0:4, Physical Disk 1:0:5. I am a little confused by this description: does 1:0:4 translate to Controller 1, Disk 4? Does this integrated RAID card have two controllers coming out of it? Also, when I first switched on the machine the boot partition was showing 1GB available out of 40GB; now it's showing 38GB available out of 40GB. Is this because the virtual disks are still initializing? Any recommendations or suggestions? Also, this server has 6 x 500GB Nearline SAS hard drives; what would be a good RAID config? We are planning to use it for Hyper-V with quite a few (7 or 8) virtual servers, so your suggestions would be helpful. Finally, while the virtual disks are in an initialization state, can I destroy and re-create the RAID configuration? Would I have to do it from the BIOS (Ctrl-M)? Thanks and regards.
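
    A quick way to sanity-check the initialization state and the disk-to-connector mapping is the omreport command-line tool that ships with Dell OpenManage Server Administrator. This is only a sketch; the controller index (0 below) is an assumption and may differ on your box:

        # Both virtual disks and their current state (Background Initialization, Ready, ...)
        omreport storage vdisk controller=0

        # Every physical disk on the controller, including which connector it hangs off
        omreport storage pdisk controller=0

        # Controller-level view (connectors, firmware, battery)
        omreport storage controller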

    Read the article

  • Windows Clients: Windows or Linux Domain Controller?

    - by Ramon Marco Navarro
    I'm planning to set up a domain controller for our small computer laboratory. I'm a little confused as to what operating system to use for our domain controller. What's in the lab: The lab has 25 units running a mix of Windows 7 and Windows XP. The domain controller will only have 2GB of RAM running a C2D E7200. (Is this enough?) What we want: The Domain Controller will also be running a git server. The Domain Controller will also be used as a general development machine (mostly Java, PHP). A way to centralize the updates for the windows clients, so that they won't have to download the same patches from the remote site. The machines would just query them from the local domain controller and get the updates from there. Our head recommended that I virtualize a Windows Server 2008 system under a Linux host and use the former as a domain controller and the latter for development or the other way around. A comparison of the advantages and disadvantages of using a Linux distribution or Windows Server 2008 in this situation would also be appreciated. As you may have noticed by now, I'm kinda new to setting up a domain so I hope you guys will be able to help me. Thank you.
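
    If you go the Linux route, the usual way to get a domain controller out of it is Samba running as an NT4-style PDC; Windows update caching would then be handled separately (WSUS itself needs a Windows box, though a caching proxy can help). The fragment below is only a sketch of the Samba side: the domain name and paths are placeholders, and Windows 7 clients need registry tweaks before they will join an NT4-style domain.

        # /etc/samba/smb.conf (fragment, placeholder values)
        [global]
            workgroup = LABDOMAIN
            security = user
            domain logons = yes
            domain master = yes
            os level = 65
            logon path = \\%L\profiles\%U

        [netlogon]
            path = /srv/samba/netlogon
            read only = yes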

    Read the article

  • How to troubleshoot web server lock-up (Debian Squeeze)

    - by Ryan
    Every once in a while, my web server slows so significantly, it seems locked up. Can't SSH in, no sites being served. It's a VPS that started out as Debian 5 which I upgraded to testing (squeeze). It's a typical LAMP set-up with the sole purpose of running a couple of wordpress sites. One time when it locked up, I got to one of the sites, but it was wordpress complaining it couldn't establish a database connection. So it seemed as if something was really chewing up the CPU and mysqld either timed out, or possibly failed and couldn't restart. But since I couldn't SSH in I feel more inclined to attribute it to CPU. But the only processes running now, aside from OS and kernel stuff: apache mysqld python (for fail2ban) sshd exim4 It has 512M of RAM and 1.5 GB of swap. Every time I check on it, it has plenty of free memory and is using virtually no swap (usually 2-3M). And since I am running fail2ban I don't think I'm getting ddosed. I did find this in my logwatch email this morning (it locked up late last night, when there would have been very little traffic): 6 Time(s): [<ffffffff810a0ebc>] ? oom_kill_process+0x7e/0x23d 6 Time(s): [<ffffffff810a1505>] ? __out_of_memory+0x12a/0x141 6 Time(s): [<ffffffff810a1586>] ? out_of_memory+0x6a/0x94 I didn't find anything else suspicious. It can't be my provider's host because I can SSH in and restart the VM, and everything seems fine. Anybody know which logs I should start poring through to find the core of my problem? Thanks guys.
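
    The oom_kill_process lines in that logwatch output point at memory exhaustion at the time of the lock-up rather than CPU, even though memory looks fine whenever you check afterwards: a burst of Apache workers plus MySQL can eat through 512MB quickly and leave the kernel killing processes. A rough sketch of where to look next; the log file names assume a stock Debian syslog layout:

        # Confirm the OOM killer fired and see which processes it picked
        grep -iE "out of memory|oom" /var/log/kern.log /var/log/syslog | tail -n 40

        # Check how many Apache workers you allow; the default MaxClients is far too
        # high for 512MB once each worker loads PHP/WordPress
        grep -ri "MaxClients" /etc/apache2/

        # Next time it starts to crawl, see who is holding the memory
        ps aux --sort=-rss | head -n 15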

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be between 1 MB and 4 MB. So out of the 1.6 million rows, about 16000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server Profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike. Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!

    Read the article

  • Reverse Proxies and AJAX

    - by osij2is
    A client of ours is using IBM/Tivoli WebSEAL, a reverse-proxy server, for some of their internal users. Our web application (ASP.NET 2.0) is a fairly straightforward web/database application. Currently, the client's users who are going through the WebSEAL proxy are having problems with a .NET 3rd-party control. Users who are not going through the proxy have no issues. The 3rd-party control is nothing more than an AJAX dynamic tree that on each click requests all the nodes for each leaf. Our clients claim that once users click on a node in the control, the control freezes in such a way that they don't see anything populate. Users see a "Loading..." message appear but no new activity there afterwards. They have to leave the page and go back to the original page in order to view the new nodes. I've never worked with a reverse proxy before, so I have googled quite a bit on the subject and even found an article on SF. IBM/Tivoli has mentioned this issue before, but this is about all they mention at all. While the IBM doc is very helpful, all of our AJAX is from the 3rd-party control. I've tried troubleshooting using Firebug, but since I'm not behind the reverse proxy I'm unable to truly replicate the problem. My question is: does anyone have experience with reverse proxies and issues with AJAX sites? How can I go about proving what the exact issue is? Currently we're negotiating remote access, so assume for the greater part that I will have access to a machine that's using the WebSEAL proxy. P.S. I realize this question might teeter on the StackOverflow/ServerFault jurisdictional debate, but I'm trying to investigate from the systems perspective. I have no experience with reverse proxies (and I'm unclear on the benefits) and little with forwarding proxies.
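
    Until the remote access comes through, one way to get closer to the problem is to put a reverse proxy of your own in front of a test copy of the application and watch the control's AJAX requests through it with Firebug. This is only a sketch using Apache's mod_proxy on a spare Linux box; the hostname is a placeholder, and WebSEAL's URL-rewriting junctions behave differently, so it can only approximate the client's setup:

        # Enable the proxy modules (Debian/Ubuntu-style Apache)
        sudo a2enmod proxy proxy_http

        # Then, inside a test VirtualHost, forward a path to a copy of the app and
        # browse through this box while comparing the AJAX responses in Firebug:
        #   ProxyPass        /app/ http://internal-app.example.local/app/
        #   ProxyPassReverse /app/ http://internal-app.example.local/app/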

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, and never exceed 10% CPU usage. They also have plenty of RAM. The problem though is that they both have RAID 1 160GB SAS hard disk drives in them that are 75% full, and growing by the day. I didn't think that the amount of disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are: Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? Any recommendations on networking? Is gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE key? Has anyone benchmarked or had any performance issues with SAN over NAS? Any help greatly appreciated. Scott
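
    For the NAS option, the usual pattern is to export /home from the storage server over NFS and mount it on each cPanel box. A minimal sketch, assuming the storage server sits at 10.0.0.10 on a dedicated gigabit segment (both addresses are placeholders); quotas and mail/FTP permissions are the usual cPanel pain points when /home goes remote, so test those first:

        # On the storage server, export the space (/etc/exports):
        #   /exports/home  10.0.0.0/24(rw,no_root_squash,async)

        # On each cPanel server, test the mount by hand...
        mount -t nfs 10.0.0.10:/exports/home /home

        # ...then make it permanent with an /etc/fstab entry:
        #   10.0.0.10:/exports/home  /home  nfs  rw,hard,intr,noatime  0 0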

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 guest following the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2: pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo The first command fails because of a compile error when it tries to build PIL. So I ran "aptitude install python-imaging", made a local copy of the first line's requirements.txt, and removed the line that unsuccessfully tries to build PIL. The first line then completes without error, as does the second. The next step tells me to change directory to the /path/to/new/store and run: python clonesatchmo.py A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, created a symlink in /bin, and ran: python /bin/clonesatchmo.py This gives: jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py Creating the Satchmo Application Traceback (most recent call last): File "/bin/clonesatchmo.py", line 108, in <module> create_satchmo_site(opts.site_name) File "/bin/clonesatchmo.py", line 47, in create_satchmo_site import satchmo_skeleton ImportError: No module named satchmo_skeleton A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment; I tried both: pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan
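
    A couple of quick checks that might narrow this down. This is only a diagnostic sketch; exactly where pip puts the editable (-e) checkout depends on whether you ran it inside a virtualenv, so the paths below are guesses:

        # Where did pip's editable checkout of satchmo land?
        find / -maxdepth 6 -type d -name "satchmo*" 2>/dev/null

        # Does that checkout actually contain the module clonesatchmo.py imports?
        find / -name "satchmo_skeleton*" 2>/dev/null

        # If it exists but is not on the import path, pointing PYTHONPATH at the
        # checkout is a blunt workaround (the path here is hypothetical):
        export PYTHONPATH=/usr/local/src/satchmo:$PYTHONPATH
        python /bin/clonesatchmo.py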

    Read the article

  • Enabling syntax highlighting for LESS in Programmer's Notepad?

    - by Cody Gray
    When I don't feel like firing up the Visual Studio behemoth, or when I don't have it installed, I always turn to Programmer's Notepad. It's an amazingly light and fast little text editor, with the special advantage that it is completely platform-native and conforms to standard UI conventions. Therefore, please do not suggest that I consider using other text editors. I've already considered and rejected them because they do not use native UI controls. I like Programmer's Notepad, thank you very much. Unfortunately, I've recently begun to learn, use, and love LESS for all of my CSS coding needs, and it appears that Programmer's Notepad is not bundled with a syntax highlighting scheme for LESS. Does anyone know if there is—by chance and good fortune—one already available somewhere on the web that some kind soul has tediously prepared? If not, how can I go about writing one of my own? Is there a way to build on the existing CSS scheme? It's also possible that any code coloring scheme designed for Scintilla-based editors will work, as Programmer's Notepad is based on the Scintilla control. If you know of a LESS highlighting scheme for Scintilla-based editors, and how to use that with Programmer's Notepad, please suggest that as well.

    Read the article

  • Bind9 configured to start at boot, has to be started manually

    - by antik
    I've configured bind9 on my system and it works great when it runs. It's currently configured to be run at runlevel 2 by setting: $ sudo update-rc.d bind9 enable 2 This appears to have done its work: $ tree -f /etc/rc?.d | grep -e ".*bind9$" |-- /etc/rc0.d/K85bind9 -> ../init.d/bind9 |-- /etc/rc2.d/S15bind9 -> ../init.d/bind9 |-- /etc/rc3.d/S15bind9 -> ../init.d/bind9 |-- /etc/rc4.d/S15bind9 -> ../init.d/bind9 |-- /etc/rc5.d/S15bind9 -> ../init.d/bind9 |-- /etc/rc6.d/K85bind9 -> ../init.d/bind9 Booting the system, I believe I am at runlevel 2: $ runlevel N 2 Given the above configuration, when the system is rebooted, bind does not come up. Only on occasion, for some reason, can I resolve hostnames immediately after startup. Far more often than not however, I cannot. I can interrogate the service's status: $ sudo /etc/init.d/bind9 status * could not access PID file for bind9 When the service doesn't start, I can start it successfully via a terminal by issuing $ sudo /etc/init.d/bind9 start And it works great from then on. Loopback configuration: $ ifconfig lo lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1872 errors:0 dropped:0 overruns:0 frame:0 TX packets:1872 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:220205 (220.2 KB) TX bytes:220205 (220.2 KB) Do I have my startup misconfigured? (I'm used to Gentoo so Ubuntu's model is still a little new to me) I'm not seeing any log indication of a failed attempt to start at boot in syslog. Is there someplace else I should be looking? What else should I look into to get bind working at startup?
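
    Two things worth separating: whether named starts at boot and then dies, or never gets started at all, and whether it comes up before the interfaces it wants to listen on are ready. A rough diagnostic sketch; the last two lines are just one possible workaround (pushing bind9 later in the boot order), and the sequence number 40 is arbitrary:

        # Did named log anything around boot time?
        grep named /var/log/syslog /var/log/daemon.log | tail -n 50

        # Is it genuinely not running, or just missing its PID file?
        pgrep -l named

        # Workaround attempt: start bind9 later, after networking is fully up
        sudo update-rc.d -f bind9 remove
        sudo update-rc.d bind9 defaults 40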

    Read the article

  • Route specific HTTP requests through pfSense OpenVPN

    - by DennisQ
    Hi, to start, I have very little knowledge of routes, iptables, etc. That said, here's what I'm trying to accomplish and where I think I'm stumped. Problem: We have an external website which we recently firewalled so it only accepts traffic from our office IP addresses. This works well at the office, but doesn't work for remote access through VPN as we don't route all traffic through OpenVPN. I would rather avoid forcing everyone to route all traffic through it just to accommodate this one site. Environment: The main router box is running pfSense. em0 is the internal interface, em1 is external. The internal net is 10.23.x and the VPN is 10.0.8.0/24. I believe what I need to do is add a route to the VPN server config to send all traffic for that IP over the VPN tunnel. I think that part's working, but I don't get a response back, so I'm assuming that I need some NAT config on the VPN server to route the response back over the tunnel. What I've found so far is to try the following, but since this is a pfSense box on FreeBSD, I can't run iptables, etc. Make sure IP forwarding is enabled: echo 1 > /proc/sys/net/ipv4/ip_forward Set up NAT back out: iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o em0 -j MASQUERADE Am I on the right path, and if so how do I accomplish this through the pfSense UI or the FreeBSD CLI? Thanks!
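
    Since pfSense is FreeBSD-based it uses pf rather than iptables, and both halves of this (the route push and the NAT) can be done from the web UI. A sketch of the idea; 203.0.113.25 is a placeholder for the firewalled website's public IP:

        # VPN > OpenVPN > Server, custom/advanced options: push a host route for just
        # this one site so only that traffic goes through the tunnel
        push "route 203.0.113.25 255.255.255.255";

        # Firewall > NAT > Outbound: switch to manual outbound NAT and add a rule so
        # VPN clients leave via the WAN address when talking to that site:
        #   Interface: WAN   Source: 10.0.8.0/24   Destination: 203.0.113.25/32
        #   Translation: Interface address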

    Read the article

  • Overheating computer

    - by Samurai Waffle
    My computer overheats somewhat frequently, usually during intense use. And by intense use I mean browsing the internet while downloading, or gaming. It even overheats on extremely old games though, like Master of Orion 2, which was developed for Windows 95. My computer has a Pentium 4 Ghz processor, 2 GB of RAM, and is running Windows XP. One of the symptoms after overheating is that it'll turn on immediately afterwards, but won't show any video on my monitor. I usually have to wait at least 5 minutes (mostly at least 10) before I can get it to turn on and show video on my monitor. I also usually have to wiggle around the graphics card a little bit, which is the ASUS A9550 Series with 256 MB. I'm not sure exactly what is causing the computer to overheat. At first I thought it was the video card, but after I noticed it was doing it while playing Master of Orion 2, I'm not so sure, because that game can't be making the video card work all that hard. So how exactly can I pinpoint the problem? Thanks for any help provided. Edit: Okay, I downloaded the programs that you specified, and will start benchmarking my system to try and pinpoint what's overheating. What is the temperature range at which it's getting too hot? Also, I have an abundant amount of software experience with computers, but unfortunately not too much hardware experience.

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model and so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are: Some standard packages and libraries such as compilers and databases need to be downloaded and installed. Some custom scientific models need to be downloaded and compiled from source as they are not commonly provided as packages. New users need to be created to manage the databases and run the models. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts. I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality except there are a couple usage cases that I am wondering about: During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source. We may have to deploy on machines that are isolated from the Internet- i.e. all configuration and set up files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models. I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to setup test installations outside of VirtualBox.
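
    On the building-from-source point, whichever tool you choose (a Puppet exec resource, a Fabric task, a Capistrano recipe) largely ends up wrapping the same shell steps, so it can help to capture them once as a plain script and let the configuration tool invoke it. A minimal sketch; the tarball name, URL and install prefix are placeholders for one of your custom models, and keeping the tarball next to the script also covers the no-Internet case where everything arrives on a USB key:

        #!/bin/bash
        set -e
        MODEL_TARBALL=wavemodel-1.2.tar.gz        # placeholder name
        PREFIX=/opt/wavemodel

        # Use the local copy if it came in on the USB key, otherwise fetch it
        [ -f "$MODEL_TARBALL" ] || curl -O "http://example.org/models/$MODEL_TARBALL"

        tar xzf "$MODEL_TARBALL"
        cd wavemodel-1.2
        ./configure --prefix="$PREFIX"
        make -j"$(nproc)"
        sudo make install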

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance? Discovery In particular I have been reading: SSD and NTFS Compression Speed Increase? Does NTFS compression slow SSD/flash performance? Will somebody benchmark whole disk compression (HD,SSD) please? (may have to scroll up) The first link is particularly dreamy, but maybe head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]. Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea. Idea: VirtualStore It so happens that thanks to sanity saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. Which means, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain. Testing The beginning of this post has me a bit timid, it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.

    Read the article

  • installing SVN - CentOS - cannot find -lexpat

    - by furnace
    Hey guys, I'm trying to install SVN on CentOS 5. Unfortunately a simple yum install isn't going to work (afaik) because I'm using the DirectAdmin control panel. When it comes to running 'make' I get this error: /usr/bin/ld: cannot find -lexpat I'm new to installing things without yum (!) so am a bit lost. Do you have any advice on how to get past this hurdle? Just to give a little more context to the error; /apache -I/usr/include/apache -I/etc/svn-install/subversion-1.6.2/sqlite-amalgamation -o subversion/svn/util.o -c subversion/svn/util.c cd subversion/svn && /bin/sh /etc/svn-install/subversion-1.6.2/libtool --tag=CC --silent --mode=link gcc -g -O2 -g -O2 -pthread -rpath /usr/lib -o svn add-cmd.o blame-cmd.o cat-cmd.o changelist-cmd.o checkout-cmd.o cleanup-cmd.o commit-cmd.o conflict-callbacks.o copy-cmd.o delete-cmd.o diff-cmd.o export-cmd.o help-cmd.o import-cmd.o info-cmd.o list-cmd.o lock-cmd.o log-cmd.o main.o merge-cmd.o mergeinfo-cmd.o mkdir-cmd.o move-cmd.o notify.o propdel-cmd.o propedit-cmd.o propget-cmd.o proplist-cmd.o props.o propset-cmd.o resolve-cmd.o resolved-cmd.o revert-cmd.o status-cmd.o status.o switch-cmd.o tree-conflicts.o unlock-cmd.o update-cmd.o util.o ../../subversion/libsvn_client/libsvn_client-1.la ../../subversion/libsvn_wc/libsvn_wc-1.la ../../subversion/libsvn_ra/libsvn_ra-1.la ../../subversion/libsvn_delta/libsvn_delta-1.la ../../subversion/libsvn_diff/libsvn_diff-1.la ../../subversion/libsvn_subr/libsvn_subr-1.la /etc/httpd/lib/libaprutil-1.la -lexpat /etc/httpd/lib/libapr-1.la -luuid -lrt -lcrypt -lpthread -ldl /usr/bin/ld: cannot find -lexpat collect2: ld returned 1 exit status make: *** [subversion/svn/svn] Error 1 Thanks!
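
    The linker is complaining that it cannot find libexpat, which the -lexpat pulled in via libaprutil-1.la needs. A sketch of the usual fix on CentOS; this assumes installing the stock expat-devel package is acceptable alongside the DirectAdmin-built Apache, and that you're on 32-bit (on x86_64, look in /usr/lib64 instead):

        # Install the expat library plus the headers/.so symlink the linker needs
        yum install expat expat-devel

        # Confirm ld can now resolve -lexpat
        ls -l /usr/lib/libexpat.so*

        # Re-run the build from the subversion source directory
        make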

    Read the article

  • How do Opera's keyboard shortcut settings work?

    - by zem
    Firefox 4 beta is starting to freeze up the way Firefox 3 used to on my machine, and I want a browser that'll play gifs at full speed, so on a Mac my only other choice seems to be Opera. There are just two issues I have with it right now: one, the scrolling is weird compared to every other Mac application, but I can get used to that if there's no way to fix it. Two, cmd-1 through cmd-9 activate the "speed dial" bookmarks instead of selecting tabs 1-9, like in Firefox and Chrome. I can disable those shortcuts easily enough, so I don't keep accidentally loading a different page when I instinctively try to do that, but in an ideal world I could remap those commands to do what I want. The keyboard shortcut editor is weird: There seems to be a scrappy little language for associating actions with commands. It has some limited autocompletion when you type stuff in, and I couldn't find a "select specific tab" action, but some of the existing commands are complicated enough that I'd be surprised if there's not a way to do it. Is there documentation for this language anywhere? Clicking "help" just brings me to this page, which is not very helpful.

    Read the article

  • How to install an OS on an external hard drive

    - by Nrew
    I did a little research before coming here, and found out that I need to disconnect all internal hard drives before proceeding. http://www.pendrivelinux.com/installing-ubuntu-to-a-usb-hard-drive/ Here's my question: If I install Windows XP or Ubuntu on an external hard drive, would it be universal? Could I use it or run it on any computer, assuming that the BIOS allows you to boot from a USB hard drive (or even if not, since there's the PLoP Boot Manager), and the machine has a considerable amount of memory and processor power to run the OS? What other things should I consider when installing an OS on an external hard drive? Is installing on an external hard drive the same as installing on an internal hard drive? Can I also boot multiple OSes? What are the things to consider when doing this? And if you have a tutorial showing how to install an OS on an external hard drive, please link to it.

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing: [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm Preparing... ########################################### [100%] 1:jre ########################################### [100%] Unpacking JAR files... rt.jar... jsse.jar... charsets.jar... localedata.jar... plugin.jar... javaws.jar... deploy.jar... error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5 [root@host java]# rpm -evv jre-6u20-linux-i586.rpm D: opening db environment /var/lib/rpm/Packages joinenv D: opening db index /var/lib/rpm/Packages rdonly mode=0x0 D: locked db index /var/lib/rpm/Packages D: opening db index /var/lib/rpm/Name rdonly mode=0x0 error: package jre-6u20-linux-i586.rpm is not installed D: closed db index /var/lib/rpm/Name D: closed db index /var/lib/rpm/Packages D: closed db environment /var/lib/rpm/Packages [root@host java]# rpm -qi --scripts jre-6u20-linux-i586.rpm package jre-6u20-linux-i586.rpm is not installed [root@host java]# I couldn't find any results on Google for any parts of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
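
    Exit status 5 is coming from the %post scriptlet inside Sun's package rather than from rpm itself, and the stripped-down Virtuozzo image is probably missing something that the scriptlet expects. A sketch of how to dig further; note that rpm -e wants the package name, not the file name, and that --noscripts is a blunt last resort rather than a proper fix:

        # Read exactly what the %post scriptlet tries to run
        rpm -qp --scripts jre-6u20-linux-i586.rpm | less

        # The files did get unpacked, so check whether the package is registered
        rpm -q jre

        # Last resort: reinstall with the scriptlets skipped entirely
        rpm -Uvh --force --noscripts jre-6u20-linux-i586.rpm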

    Read the article

  • RAID1 Broken Mirroring

    - by Sanoj
    I have a little server running Windows Small Business Server 2003. I'm using RAID 1 via a HighPoint RocketRAID 1640 RAID card, with two hard drives. This week the server raised an alarm, and during reboot I got the error message Broken Mirroring (User Manual page 30). I had a few alternatives (see the manual). First I tried Continue, but the server restarted during boot. Next time I chose Power Off, replaced the oldest hard drive with a new one, and when I booted I selected Rebuild. Then I selected the new hard drive as the rebuild target. The rebuild procedure started and a progress bar at 0% showed up, but after a few seconds I got the message Copy Failed!, then the server booted and Windows Server started. Now it works fine. But I guess that I'm just using one hard drive now, and it's not mirrored. I haven't touched the server since then (two days ago). What should I do now? I have no experience with this situation. Has anyone got some guidance?

    Read the article

  • Why does my dd backup of MacBook OS X fail to boot upon restore?

    - by James
    I created a backup of a MacBook hard drive (WD2500BEVS-88US) by hooking it up as a secondary drive on my linux system (Ubuntu 10.10). I used the following command: sudo dd if=/dev/sdc of=/home/backup.img bs=2M This appears to have completed with no errors. I noticed that the file is only 68 GB in size even though the drive is 250 GB in capacity. I restored the image to a spare drive (WD2500BEVS) with the following command: sudo dd if=/home/backup.img of=/dev/sdb bs=2M When I boot the spare drive in the Mac, it appears to start up for a few seconds and then shuts down. (It does not appear to load into the OS at all). When I open up the drive that won't boot in GParted, it looks like this: When looking at the information for the middle partition with the little red exclamation mark, it shows this: The original hard drive that boots ok shows up like this: Further info on both drives: sudo fdisk -l Disk /dev/sdb: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 1 30402 244198580 ee GPT WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdc: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 1 30402 244198580 ee GPT So why is my backup or restore failing? Why is dd not creating a byte for byte duplicate?
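
    The size mismatch is the main clue: dd copies every sector, used or free, so a full image of a 250 GB drive should itself be close to 250 GB. A 68 GB backup.img means dd stopped early, most likely on a read error or because the destination filesystem ran out of room or hit a file-size limit. A sketch of how to verify and redo the copy; the device names follow the ones in the question, so double-check them before running anything:

        # How big is the source disk in bytes, and how big did the image end up?
        sudo blockdev --getsize64 /dev/sdc
        ls -l /home/backup.img

        # Re-run the clone, continuing past bad sectors and padding them with zeros
        sudo dd if=/dev/sdc of=/home/backup.img bs=2M conv=noerror,sync

        # If the sizes now match, compare checksums; identical hashes mean a true
        # byte-for-byte copy (slow, but definitive)
        sudo md5sum /dev/sdc
        md5sum /home/backup.img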

    Read the article

  • signed applet automatically running as insecure

    - by Terje Dahl
    My application is deployed as a self-signed applet to several thousand users at more than 50 schools across the country (in Norway). The user is presented with the standard Java security warning asking if they will accept the signature. When they do, the applet runs perfectly. However, about half a year ago a group of 7 schools, all under a common IT department, stopped getting the security warning. Instead, the applet loads and starts running in untrusted mode, without first giving the user an option to accept or reject the signature. The problem is on Windows machines, and only when the machine is connected to the school network. If they take the same machine home with them, the program functions as it should, with security warnings and everything. I know little about Windows systems in general, but I would think it would be some sort of policy file or something that is loaded when a machine hooks up to or through the school network. Furthermore, the problem only started occurring in these 7 schools after changes made following a security breach they had a while back. The IT department is stumped. I am stumped. Any thoughts, comments, suggestions?

    Read the article

  • Using the RST3 plugin in the Leo Outliner

    - by T-Boy
    I'm currently trying out the Leo Outliner, and I've heard quite a bit about the RST3 plugin that it has. I'm not planning to use Leo to program just yet; at this point I'm wondering if it might be useful for generating HTML and PDF documents, as I'm currently quite enamored with RST and how it works. I'm using my Ubuntu Netbook Remix netbook (running 9.10, I believe). I think I've got it down pat, more or less: I've installed docutils using the Synaptic Package Manager, and I think I've got SilverCity installed, as per the requirements; I've downloaded the archive and then run "sudo python setup.py install" with no errors. The thing is, I'm not exactly sure how to invoke the rst3 plugin itself. It doesn't appear in the Plugins menu for Leo right now, and the documentation I've managed to find doesn't seem to clearly explain how to use the plugin. Has anyone had any experience using the rst3 plugin in Leo? It's a little confusing right now, and searches on Google don't seem to be helping much any more. I'm using the latest 4.7.1 final version of Leo from the Synaptic Package Manager (I was informed that this would offer the best integration with UNR, so I figured, what the heck). Have I missed out on any steps here, and are there any useful tutorials on how to use the rst3 plugin?

    Read the article

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core i7-860 system, i.e. an LGA 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to LGA 1366 boards where 6 banks / triple channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB. Also the availability of 4GB DIMMs is worse here where I live; those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the eshop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB, but a 32-bit system) and never felt much pain from having too little memory - of course there could be more, but for instance compiler performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; however, I'm not sure about the additional 4GB difference between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time: Visual Studio or Eclipse (both should be fine with ~2GB RAM, after that I feel their performance is I/O bound), Firefox (it can never have enough RAM, can it? :), Office (~500MB RAM should be enough) ... and then some smaller apps like Skype, other browsers, some background services etc.

    Read the article

  • USB Hardware vs. Software Write Lock

    - by TreyK
    I'm in the market for a USB flash drive, and remember this cool feature a tiny 32MB flash drive of mine had: a write lock switch. This seemed like it would be an amazing feature to have as a shield against any nastiness happening to the drive on an unfamiliar computer. However, very few drives on the market offer this feature. Instead, it seems that forms of software protection are the more prominent method. This software protection causes me a bit of uneasiness, as it seems like this software wouldn't be nearly as bulletproof as a physical switch. Also, levels of protection seem to vary from product to product. Being able to protect certain folders from reading and/or writing would be nice, but is the security trade-off worth it? Just how effective can this software protection be? Wouldn't a simple format be able to clean any drive with software protection? My drive must also be compatible with Windows XP, Vista, and 7, as well as Linux and Mac. What would be the best way forward for getting a well-sized (~8GB) flash drive with a strong write protection implementation, for little or no more than a regular drive? Thanks.

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
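
    If you do get around to testing it, per-file overhead is easy to isolate with a crude benchmark: create a few thousand tiny files, then time copying and deleting them on each filesystem. A minimal sketch in bash; on Windows it would have to run under something like Cygwin, which adds overhead of its own, so treat cross-OS numbers as rough indicators only:

        #!/bin/bash
        # Create 5000 one-kilobyte files full of random data
        mkdir -p src
        for i in $(seq 1 5000); do
            head -c 1024 /dev/urandom > "src/file_$i"
        done

        # Time the operations that feel slow on NTFS: bulk copy and bulk delete
        time cp -r src dst
        time rm -rf dst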

    Read the article
