Search Results

Search found 3690 results on 148 pages for 'apt mirror'.

  • gcc ignores LC_ALL

    - by user332433
    Hi, I was trying to get gcc to give error messages in a different language, but it still gives me the error messages in English. My locale output:

        varun@varun-desktop:$ locale
        LANG=en_IN
        LC_CTYPE="es_EC.utf8"
        LC_NUMERIC="es_EC.utf8"
        LC_TIME="es_EC.utf8"
        LC_COLLATE="es_EC.utf8"
        LC_MONETARY="es_EC.utf8"
        LC_MESSAGES="es_EC.utf8"
        LC_PAPER="es_EC.utf8"
        LC_NAME="es_EC.utf8"
        LC_ADDRESS="es_EC.utf8"
        LC_TELEPHONE="es_EC.utf8"
        LC_MEASUREMENT="es_EC.utf8"
        LC_IDENTIFICATION="es_EC.utf8"
        LC_ALL=es_EC.utf8

    gcc.mo is present in my /usr/share/local/es. I am also getting the error messages for other programs, like apt, in Spanish, but not for gcc. Can anybody help me in this regard? I am using gcc 4.4.3 on a 64-bit Ubuntu 10.04 machine. Thank you
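
    A few quick checks that often explain this; note the standard catalog path is /usr/share/locale (not /usr/share/local), and the es_EC locale itself must be generated on the machine. A sketch:

        ls /usr/share/locale/es/LC_MESSAGES/ | grep -i gcc   # is the catalog where gettext looks?
        locale -a | grep -i es_EC                            # is the locale generated?
        LC_ALL=es_EC.utf8 gcc no-such-file.c                 # force the language for one run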

  • Are the svn ruby bindings provided as a gem?

    - by user30997
    I see a couple dozen gems that relate to svn, but what little documentation I can find on any of them shows that they are command-line wrappers and miscellaneous helpers (svn-command, svn-hooks, etc.). I've seen code in the wild that does things like require 'svn/core' and SVN.Repos.add(...), but the author of that module pulled his svn Ruby tools via apt-get. This would not be an option for me, as I'm developing a Windows/OS X tool. Which gem am I after? From there, I'm happy to dig through code in lieu of documents, but with a call to gem query --name-matches svn --remote returning about 30 hits, I need to narrow it down a bit first.
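
    For what it's worth, the require 'svn/core' style bindings are the SWIG bindings built and shipped with Subversion itself, not a gem, which would explain why nothing in the gem index matches. A sketch of how they arrive on Debian-family systems; the package name is from memory, so verify it with apt-cache search libsvn:

        sudo apt-get install libsvn-ruby
        ruby -rsvn/core -e 'puts "svn bindings loaded"'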

  • URL regex search and replace on MySQL (in WordPress)

    - by Tal Galili
    Hello all, I have a WordPress blog with numerous URLs I wish to replace, from this: http://www.oldwebsite.co.il/name/*.asp to this: http://www.newwebsite.com/?p=* For example, from this: http://oldwebsite.co.il/name/65971.asp to this: http://www.newwebsite.com/?p=65971 I believe the following plugin will do the trick with regex: http://urbangiraffe.com/plugins/search-regex/ but I am looking for the correct regex to use here. I found a stackoverflow thread that has a similar task, but since I am not too apt with regex, I was hoping for help so I don't mess anything up. Thanks, Tal After searching stackoverflow, I found
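
    A pattern along these lines should work in Search Regex; this is a sketch only, so test against a database backup first, and note that delimiter syntax can vary by plugin version:

        Search:  ~https?://(www\.)?oldwebsite\.co\.il/name/(\d+)\.asp~
        Replace: http://www.newwebsite.com/?p=$2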

  • Ntop monitoring - Hosts visible with no SPAN/mirroring

    - by Cory J
    I am attempting to use ntop to monitor traffic over a Cisco Catalyst switch. I was assuming that in order to see any of the traffic, I'd have to use a monitor (SPAN) session, as described here: http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a008015c612.shtml. However, before I did anything on the switch, I simply plugged my ntop server in and fired up ntop. To my surprise, I instantly saw 3+ pages of hosts and thousands of packets. How is ntop seeing this? I have verified that no monitoring exists on the switch (run as en):

        cs1.pvdc#show monitor
        No SPAN configuration is present in the system.

    My ntop server is Ubuntu 8.04; I haven't done ANY configuration, I just installed the ntop package. This is also a fresh Ubuntu install. Is there anything else on my switch besides "monitor" that might cause my switch to mirror all its traffic like this? I've tried plugging ntop into different ports with the same results. UPDATE: It appears to be more than just broadcast traffic showing up in ntop; for example, I can see when my IPs have talked to the DNS server or generated HTTP traffic. If my switch is misconfigured, can anyone point me in the right direction towards rectifying this? I'm not a Cisco expert.
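
    One quick way to tell switch flooding apart from a real mirror is to filter broadcast and multicast out on the ntop box; on a switched port with no SPAN, almost nothing should remain. A sketch, interface name assumed:

        # If plenty of unicast between *other* hosts remains, the switch is
        # flooding (e.g. unknown-unicast flooding / CAM table trouble), not mirroring:
        sudo tcpdump -n -i eth0 'not broadcast and not multicast'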

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones, but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external drive enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there, and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is: most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc.; should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
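
    If the enclosure can present the drives individually (true JBOD, over an eSATA card that handles port multipliers), the box's own RAID modes can be ignored and Linux md used instead; whether a given cheap enclosure really does that is exactly the thing to confirm before buying. A minimal sketch, device names assumed:

        # Four bare drives into one md RAID5 array, with LVM on top:
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
        sudo pvcreate /dev/md0
        sudo vgcreate backupvg /dev/md0
        sudo lvcreate -L 5T -n backup backupvg   # size illustrative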

  • centos 6 debuginfo repository does not have httpd debug version available

    - by Zippy Zeppoli
    I am trying to get the debug version of httpd so I can use it in conjunction with gdb. I am having a hard time getting them, and they don't seem to be in the standard epel-debuginfo repository. What should I do?

        [root@buildbox-rhel6 ~]# debuginfo-install httpd
        Loaded plugins: fastestmirror, presto
        enabling epel-debuginfo
        Loading mirror speeds from cached hostfile
        epel-debuginfo/metalink | 8.3 kB 00:00
         * base: mirrors.cicku.me
         * epel: mirrors.kernel.org
         * epel-debuginfo: mirrors.kernel.org
         * extras: mirrors.arpnetworks.com
         * updates: linux.mirrors.es.net
        epel-debuginfo | 3.1 kB 00:00
        epel-debuginfo/primary_db | 487 kB 00:01
        Checking for new repos for mirrors
        Could not find debuginfo for main pkg: httpd-2.2.15-15.el6.centos.1.x86_64
        Could not find debuginfo pkg for dependency package apr-1.3.9-5.el6_2.x86_64
        Could not find debuginfo pkg for dependency package apr-util-1.3.9-3.el6_0.1.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package db4-4.7.25-17.el6.x86_64
        Could not find debuginfo pkg for dependency package expat-2.0.1-11.el6_2.x86_64
        Could not find debuginfo pkg for dependency package openldap-2.4.23-26.el6_3.2.x86_64
        Could not find debuginfo pkg for dependency package openldap-2.4.23-26.el6_3.2.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package pcre-7.8-4.el6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package glibc-2.12-1.80.el6_3.6.x86_64
        Could not find debuginfo pkg for dependency package libselinux-2.0.94-5.3.el6.x86_64
        Could not find debuginfo pkg for dependency package zlib-1.2.3-27.el6.x86_64
        No debuginfo packages available to install
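
    The debuginfo packages for CentOS base/updates live in CentOS's own debuginfo repository, not in epel-debuginfo. A sketch of a repo stanza to try; the URL and key path follow the CentOS wiki of the era, so verify them before relying on this:

        # /etc/yum.repos.d/CentOS-Debuginfo.repo
        [debuginfo]
        name=CentOS-6 - Debuginfo
        baseurl=http://debuginfo.centos.org/6/$basearch/
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Debug-6
        gpgcheck=1
        enabled=1

        # then rerun:
        debuginfo-install httpd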

  • How does enterprise failover, such as with google.com, actually work?

    - by Alex Regan
    We have a few Fedora systems that are configured for web, FTP, and email services. We'd like to mirror these services so that we can provide near-100% reliability for our users. I'm a fairly experienced Linux administrator, but I don't have much experience with redundant systems. What is the best way to do this? How do Google and Amazon do it? Google.com resolves to multiple IP addresses, but if my local desktop caches one of the IPs and that one becomes unreachable, I'm going to get a failed-connection message. How do they prevent that from happening? If one of their servers goes down, how is traffic automatically redirected to another system, without the end user ever knowing it? I understand there are failover devices, but they're only for failing over the system itself, not a complete network. Let's say we have the worst-case scenario, such as my primary system becoming inaccessible. What are the fundamental components that are used on Linux systems to provide this capability? I'm looking for concepts, or approaches, not answers like "check out openstack". What are the actual pieces that make up the solution? What has to be done to implement this capability? Hopefully my question is clear. I'd like to know what the pieces are that make up a failover system, and what approach is taken by successful organizations that implement it. Thanks again, Alex
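
    At the single-site level, one fundamental building block is a floating 'virtual IP' moved between boxes by VRRP (keepalived on Linux): clients keep one address while a backup takes over. Larger operators layer low-TTL DNS, anycast, and dedicated load balancers on top. A minimal sketch of the VRRP piece; every value below is a placeholder:

        # /etc/keepalived/keepalived.conf (master side; the second box uses
        # state BACKUP and a lower priority; the VIP moves over on failure):
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 100
            advert_int 1
            virtual_ipaddress {
                192.0.2.10
            }
        }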

  • SyntaxError using gdata-python-client to access Google Book Search Data API

    - by isbadawi
        >>> import gdata.books.service
        >>> service = gdata.books.service.BookService()
        >>> results = service.search_by_keyword(isbn='0434003484')
        Traceback (most recent call last):
          File "<pyshell#4>", line 1, in <module>
            results = service.search_by_keyword(isbn='0434003484')
          ... snip ...
          File "C:\Python26\lib\site-packages\atom\__init__.py", line 127, in CreateClassFromXMLString
            tree = ElementTree.fromstring(xml_string)
          File "<string>", line 85, in XML
        SyntaxError: syntax error: line 1, column 0

    This is a minimal example; in particular, the book service unit tests included in the package also fail with the exact same error. I've looked at the wiki and open issue tickets on Google Code to no avail (and this seems to me more apt to be a silly error on my end rather than a problem with the library). I'm not sure how to interpret the error message. If it matters, I'm using Python 2.6.5.
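
    A SyntaxError at line 1, column 0 from ElementTree usually means the response body was not XML at all (an HTML error page, redirect, or proxy notice). A quick way to eyeball what the API actually returns; the feed URL follows the Books Data API convention of the time and the query format is approximated, so verify both:

        curl -s 'http://books.google.com/books/feeds/volumes?q=ISBN0434003484' | head -c 300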

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a Volume? The background for this question is that I have a duplicate UUID issue: I have /Volumes/OldMacHD with a UUID of XYZ. I have /Volumes/Mirror1 with a UUID of XYZ (same UUID! I bet that's because OldMacHD USED to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that if you issue the -s flag, this changes the UUID. However, if you type hfs.util all by itself, it doesn't show you the -s option at all, just every option besides that! Grr. I tried it anyway: sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4 (the raid volume). Nothing happens. No error message, no success message. UUID exactly the same. I tried it while the volume was unmounted. Any ideas?
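
    One more variation worth a try, since hfs.util can be picky about how the device is named: run it against the raw device node while the volume is unmounted. This is a sketch based on an assumption, not documented behaviour, and the disk number is illustrative:

        diskutil unmount /Volumes/Mirror1
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/rdisk4   # raw node
        diskutil mount /dev/disk4
        diskutil info /dev/disk4 | grep -i UUID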

  • how to get trac to run with apache?

    - by ajsie
    I have some problems getting Trac running with Apache, have no idea how to do it, and the tutorial I followed doesn't work. I have an empty /etc/apache2/httpd.conf. Should it be empty? Then I followed the tutorial (http://trac.edgewall.org/wiki/TracModPython) and typed in:

        LoadModule python_module modules/mod_python.so

    so now it contains one row. I have Ubuntu, and I installed mod_python with:

        apt-get install libapache2-mod-python libapache2-mod-python-doc

    However, when I run a2enmod mod_python it says:

        ERROR: Module mod_python does not exist!

    But I have checked that it exists in /usr/lib/apache2/modules/mod_python.so. So what's the problem?
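
    On Debian/Ubuntu the module is enabled by its short name, python, matching /etc/apache2/mods-available/python.load, which the package installs for you; the hand-written LoadModule line in httpd.conf shouldn't be needed. A sketch:

        sudo a2enmod python                # not "a2enmod mod_python"
        sudo /etc/init.d/apache2 restart
        apache2ctl -M | grep -i python     # confirm the module is loaded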

  • Windows Vista Nested Desktop Folders Problem

    - by Samuel Walker
    I have no idea how, nor when, this happened, and it's started to really quite annoy me. When navigating through Explorer by clicking on icons, I have C:\Users\Samuel\Desktop (icon is the blue special Desktop icon), which contains the items I see on my Desktop. I then have the following folder: C:\Users\Samuel\Desktop (icon is the standard yellow folder icon), which contains many program shortcuts and is completely separate from the other C:\Users\Samuel\Desktop. Then in the yellow-icon Desktop I have the sub-folder Desktop with the blue icon, which is a direct mirror of the blue C:\Users\Samuel\Desktop folder (as in, a new folder/file shows up in both). In Explorer, when I directly type C:\Users\Samuel\Desktop I am taken to the yellow folder version. If I go to C:\Users\Samuel\Desktop\Desktop I am taken to the blue folder version. Finally, from cmd, cd'ing to C:\Users\Samuel\Desktop takes me to the yellow folder version, whilst C:\Users\Samuel\Desktop\Desktop takes me to the blue folder version. How on earth can I get rid of the yellow folder version, leaving the blue C:\Users\Samuel\Desktop? I can't delete either, as it says they're in use. UPDATE: Ok, so it looks like doing a dir from cmd lists only one Desktop folder: the yellow one. In addition, it looks like I can't delete either of them (given that they both contain my 'Desktop'

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb:  Virtual machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb:  Virtual machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in: a mirrored boot partition, a mirrored operating system, mirrored virtual-machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical, and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

  • Is it worth using a load balancer on a web server/website?

    - by user427969
    I have a website, and a while ago the web server of the company hosting my website was down for about a day. I consulted the company for a solution on how I can stop this from happening in future, and they suggested having a second machine which will be connected to my current website/web server by a "load balancer" (at an additional huge cost!!!). The second machine will be a replica of the first, so if one goes down, the other will always be running.

    ---- Explanation -----

    My hosting company suggested that it would be a good idea to have a second machine running at the same time, with both machines connected by a load balancer, which reduces the risk of downtime. The second machine will be a mirror of the first, and any changes to the first must be replicated in the second. I don't mind spending money if it really saves my website from going down. I want to know: is it worth having this "load balancer" for my purpose? My website is a 24/7 service. I cannot afford an outage of 24 hours/1 hour. I don't mind using this "load balancer" as far as it is really worth it. I am not sure if it's just a marketing trick of my hosting company or really the "best" solution. Thanks for help. Regards

  • Bridged network on OS X only gets UDP broadcast traffic

    - by a paid nerd
    I've created a bridged network on Mac OS X 10.8.5 using ifconfig and TUNTAP for OS X, to bridge my wireless connection, en0, with a virtual interface, tap0, which I can use for guest VMs:

        $ sudo sysctl -w net.inet.ip.forwarding=1
        $ sudo sysctl -w net.link.ether.inet.proxyall=1
        $ sudo sysctl -w net.inet.ip.fw.enable=1
        $ sudo ifconfig bridge0 create
        $ sudo ifconfig bridge0 addm en0 addm tap0
        $ sudo ifconfig bridge0 up
        $ ifconfig
        en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 28:cf:xx:xx:xx:xx
            inet6 xxxx::xxxx:xxxx:xxxx:xxxx%en0 prefixlen 64 scopeid 0x4
            inet 192.168.100.64 netmask 0xffffff00 broadcast 192.168.100.1
            media: autoselect
            status: active
        bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether ac:de:xx:xx:xx:xx
            Configuration:
                priority 0 hellotime 0 fwddelay 0 maxage 0
                ipfilter disabled flags 0x2
            member: en0 flags=3<LEARNING,DISCOVER>
                port 4 priority 0 path cost 0
            member: tap0 flags=3<LEARNING,DISCOVER>
                port 8 priority 0 path cost 0
        tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
            ether ca:3d:xx:xx:xx:xx
            open (pid 88244)

    However, if I tcpdump -i tap0, I only see broadcast traffic. Shouldn't I see a mirror of everything on en0? (192.168.100.33, the host doing the broadcasting, is another unrelated, noisy server on my LAN.) (I asked a similar question here and will probably close it.)
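
    Two quick checks narrow this down: whether any foreign unicast reaches en0 at all, and whether en0 is actually in promiscuous mode while bridged. Note that 802.11 interfaces frequently cannot bridge foreign unicast, since the access point only delivers frames addressed to associated stations; that alone would explain seeing only broadcasts. A sketch:

        # Any non-broadcast traffic from *other* hosts visible on the physical side?
        sudo tcpdump -n -i en0 'not broadcast and not multicast'
        # Is en0 promiscuous while the bridge is up?
        ifconfig en0 | grep -i promisc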

  • Undefined reference to cmph functions even after installing cmph library

    - by user1242145
    I am using gcc 4.4.3 on Ubuntu. I installed the cmph library tools 0.9-1 using the command:

        sudo apt-get install libcmph-tools

    Now, when I try to compile the example program vector_adapter_ex1.c, gcc is able to find the cmph.h header in its include path, but shows multiple errors like:

        vector_adapter_ex1.c:(.text+0x93): undefined reference to `cmph_io_vector_adapter'
        vector_adapter_ex1.c:(.text+0xa3): undefined reference to `cmph_config_new'
        vector_adapter_ex1.c:(.text+0xbb): undefined reference to `cmph_config_set_algo'
        vector_adapter_ex1.c:(.text+0xcf): undefined reference to `cmph_config_set_mphf_fd'

    even though these are all defined in the source code of the cmph library. Could anyone point out the error that might have occurred, or suggest another way to go about building minimal perfect hash functions?
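
    Undefined references at link time (as opposed to a missing header at compile time) almost always mean the library itself was never handed to the linker. A sketch; note that with gcc the -l flag must come after the source or object files, and the exact package providing the linkable .so is an assumption to verify:

        gcc vector_adapter_ex1.c -o vector_adapter_ex1 -lcmph
        # If the linker cannot find -lcmph, the development package may be missing:
        sudo apt-get install libcmph-dev      # package name assumed; check apt-cache search cmph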

  • installing ruby/gnome2 on ruby1.9

    - by sawa
    My purpose is to install Ruby/GNOME2 and make it work with Ruby 1.9 on Ubuntu 9.10. I already have Ruby/GNOME2 working with Ruby 1.8, but I need to make it work with Ruby 1.9. I also have Ruby 1.9 working. When I run, within ruby-gnome2-all-0.19.3:

        ruby1.9 extconf.rb

    it eventually gives me:

        Target libraries: glib, gdkpixbuf, pango, atk, gtk, gconf, libglade
        Ignored libraries: gnomeprintui, panel-applet, gtksourceview, gtksourceview2, bonoboui, bonobo, libart, goocanvas, rsvg, gnomeprint, gstreamer, vte, gnomevfs, poppler, gnomecanvas, gtkglext, gnome, gtkmozembed, gtkhtml2

    so it seems some packages failed to be detected. When I look at the log for, say, the gnomeprintui part, it exits after returning:

        checking for libgnomeprintui-2.2... no

    but apt-get says I have the newest version of it. Can anyone tell me how to resolve this problem?
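
    extconf.rb's checks compile small test programs, so they need the development headers, not just the runtime libraries that apt reports as up to date. A sketch for the gnomeprintui case; the package name is from the Ubuntu 9.10-era archive, so verify it with apt-cache search:

        sudo apt-get install libgnomeprintui2.2-dev
        ruby1.9 extconf.rb    # gnomeprintui should now leave the "Ignored libraries" list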

  • Problem building Postgis 1.5.x for Pg 8.4 on Ubuntu 9.10

    - by znik
    Here are the things installed:

        $ sudo apt-get install postgresql-server-dev-8.4 libpq5 libpq-dev

    And here is a paste of my config.out: http://pastebin.com/8Nk6pr96 Also, here are some hints I got from IRC (names concealed):

        < foo> it's NOT failing to find libpq.
        < foo> libpq is present, but not compilable without adding a boatload of other -l flags
        < foo> and postgis' configure doesn't let you specify that via LIBS
        < foo> his paste contains the config.out, which shows this

    The configure dies with this:

        configure: error: could not find libpq

    I intend to install PostGIS for MapFish :)
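
    Pointing PostGIS' configure at pg_config usually lets it work out the libpq flags on its own; a sketch for the Ubuntu postgresql-server-dev-8.4 layout (verify the path exists on your box):

        ./configure --with-pgconfig=/usr/lib/postgresql/8.4/bin/pg_config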

  • Backup and Archive Strategy Question

    - by OneNerd
    I am having trouble finding a backup strategy for our code assets that 'just works' without any manual intervention. The goal is to have an off-site backup (a synchronized one) so that when we check in files, create builds, etc. on the network drive, the entire folder structure is automatically synchronized and backed up (in real time, or once per day) at some off-site location, so if our office blows up we don't lose all of our data. I have looked into some online backup services, but have not yet had any success. Some are quirky/buggy; others limit file size and/or kinds of files (which doesn't work well for developer files). Everything gets checked in and saved to a single server (on a RAID mirror), so we just need to have a folder on that server backed up/synchronized to some off-site location. So my question is this: what are you using for your off-site backup strategy? What software, system, or service? Is there a be-all/end-all system for backing up your code assets that I just haven't found yet? Thanks
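
    One low-tech approach that fits the 'synchronized, no manual intervention' goal: a nightly rsync over SSH from cron to a box at another site. A sketch; host, user, and paths are placeholders:

        # /etc/cron.d/offsite-backup: mirror the code share off-site at 02:30
        30 2 * * * backup rsync -az --delete /srv/code/ offsite.example.com:/backups/code/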

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same; however, when files are renamed or moved about within subdir1, rsync will copy them over to the server, creating duplicates. I have to manually find and delete extraneous files/folders that had been left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree, instead of merging them to the server. Visual diagram:

        Server:        Workstation1   Workstation2   Workstation(n)
        Folder*        Folder*        Folder*        Folder*
         -subdir1       -subdir1       -subdir1       -subdir(n)
          -file1         -file1         -file2         -file(n)
          -file2
          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can accomplish the deletion of the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. EDIT: I have tried unison just recently, and I can safely say it is out of the question now. unison is a bi-directional synchronization tool, and from my testing it mirrors the files existing on the server to all workstations. This is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge to the server; AKA uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by kyle, but I am still unsure if they are fit for the job.
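
    For what it's worth, --delete becomes safe again if each workstation pushes into its own subtree on the server: each sync then mirrors only that workstation's tree, so renames and moves propagate without touching the other machines. A sketch, paths as placeholders:

        # Run from workstation1; renames/deletions inside Folder propagate,
        # while /srv/backup/workstation2/... is never touched:
        rsync -az --delete /home/user/Folder/ server:/srv/backup/workstation1/Folder/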

  • Paperclip - Stream not recognized by identify command

    - by user117046
    I'm getting a Paperclip error every time I upload an image:

        [paperclip] An error was received while processing: #<Paperclip::NotIdentifiedByImageMagickError: /tmp/stream20100531-1921-uvlewk-0 is not recognized by the 'identify' command.>

    I'm running: Ubuntu 10.04, ImageMagick 6.5.1-0 (via apt-get), Paperclip 3.2.1.1. My path to identify is 'usr/bin/identify' and I have confirmed ImageMagick works via the command line. I've tried adding the path to the options, but to no avail. I've tried:

        Paperclip.options[:command_path] = "usr/bin"

    or

        Paperclip.options.merge!(:command_path => "/usr/bin")

    in environment.rb or config/initializers/paperclip.rb. Though it makes no rational sense, I also tried "usr/local/bin" since this is the default for most people. Any thoughts on getting around this? Thanks!
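
    One detail that stands out above: the first quoted path is missing its leading slash, and a relative command_path makes the lookup depend on the Rails process's working directory. A quick sanity check from the shell (assuming the apt install put the binaries in /usr/bin):

        which identify              # typically /usr/bin/identify when installed via apt
        ls -l /usr/bin/identify
        # then point Paperclip at the absolute directory in the initializer,
        # i.e. command_path "/usr/bin" rather than "usr/bin"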

  • What are the CS fundamentals behind package/dependency management?

    - by Frep D-Oronge
    Often I hear about situations where companies are developing extensible in-house software (the dreaded enterprise 'framework') which is supposed to support multiple 'plugins' from different teams. Usually this ends up being a half-baked solution that does not really work, due to compatibility problems between add-ins, or between add-ins and the framework itself. Usually this means QA has to 'rubber stamp' a global set of versions across all plugins, or, more usually, plugins are released and stuff breaks in nasty ways. This problem has been solved before, however; for example, by package management systems like apt for Debian Linux. I suspect that the reason it works is that it is built from the start on a known 'computer science-y' concept. My question is: what is it?
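
    One way to make the concept concrete: a package universe forms a directed graph of versioned dependency and conflict constraints, which the resolver walks (and which modern solvers treat as a satisfiability problem). As a sketch, apt can dump the raw edges it works from:

        # The dependency graph, straight from apt's metadata:
        apt-cache depends --recurse bash | head -n 40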

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name :
        RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size : 2.727 TB
        Mirror Data : 2.727 TB
        State : Optimal
        Strip Size : 256 KB
        Number Of Drives per span: 2
        Span Depth : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy : Disk's Default
        Encryption Type : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured
        for write-back mode. Because the battery is not currently usable, these VDs
        will actually run in write-through mode until the battery is fully charged
        or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? Here is the information about the battery:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status : None
          Voltage : OK
          Temperature : OK
          Learn Cycle Requested : No
          Learn Cycle Active : No
          Learn Cycle Status : OK
          Learn Cycle Timeout : No
          I2c Errors Detected : No
          Battery Pack Missing : No
          Battery Replacement required : No
          Remaining Capacity Low : No
          Periodic Learn Required : No
          Transparent Learn : No
          No space to cache offload : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required : No
          Module microcode update required : No

    Where could the problem be? I have alarms disabled, but I get them if enabled. I don't know how to find the root of the problem.
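
    The POST message together with "Current Cache Policy: WriteThrough" is consistent with a battery that is simply recharging after the degraded period: the controller falls back to write-through until the pack is usable, then returns to WriteBack on its own. A sketch for watching and re-testing it with MegaCli (install path varies by system):

        # Watch the charge state; WriteBack should return once charging finishes:
        /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
        # Optionally kick off a battery learn cycle to re-test the pack:
        /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -BbuLearn -aALL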

  • SQL 2008 disk layout on a budget (this is for database mirroring)

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007, and right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows:

        RAID 1 for OS
        RAID 10 for logs
        RAID 10 (fiber, on SAN) for high-IO databases
        RAID 10 (SATA, on SAN) for content databases

    My question in regards to the principal server is: where should I place the tempdb? I thought about placing it on the fiber RAID 10, which will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN, it is all local, six 15k SAS drives. Now my question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.

  • Pear SOAP and XAMPP on Ubuntu

    - by Vincent
    All, I have installed XAMPP for Linux on Ubuntu 9.10. The installation directory is /opt/lampp. The XAMPP website says PEAR comes with the installation. I am relatively new to PEAR and want to know the answers to the following. Is PEAR installed with XAMPP, or does it need to be installed separately using the Synaptic package manager? I browse to the /opt/lampp/bin directory and see "pear" there, but when I type it on the command line, it says:

        The program 'pear' is currently not installed. You can install it by typing: sudo apt-get install php-pear
        pear: command not found

    Also, I want to use the PEAR::SOAP package in my PHP code. How do I use that? Do I need to set any paths to pear in my php.ini? Thanks
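
    The "command not found" message only means the shell searches $PATH and XAMPP keeps everything under /opt/lampp; a sketch covering both questions (using XAMPP's own pear, then installing PEAR::SOAP with it):

        /opt/lampp/bin/pear version            # call XAMPP's pear explicitly
        export PATH=/opt/lampp/bin:$PATH       # or put it on PATH (add to ~/.bashrc)
        pear install --alldeps SOAP            # installs the PEAR::SOAP package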
