Search Results

Search found 1952 results on 79 pages for 'stable'.


  • Open source alternative for Canonical Landscape? [on hold]

    - by netvope
    From Canonical: Landscape is an easy-to-use systems management and monitoring service that enables you to manage multiple Ubuntu machines as easily as one through a simple Web-based interface. However, Landscape is not free. The RedHat counterpart, Satellite, has a free version called Spacewalk, but it doesn't work on Ubuntu. (There is an attempt to port Spacewalk to Debian, but it doesn't look stable yet.) Are there any open source alternatives to Landscape? Better yet, is there any Spacewalk-like software that works for both RedHat-based and Debian-based systems?

    Read the article

  • FreeBSD does not recognise that PHP was installed via ports

    - by Alistair Prestidge
    I have PHP 5.2.12 installed on FreeBSD 8.0-STABLE. It was installed from ports, and I am trying to upgrade it to 5.3.2. However, for some reason my system does not recognise that PHP was installed via ports. When I run "pkg_version", the list does not include PHP; it does, however, include all the extensions I have installed. I have even tried "make deinstall" in "/usr/ports/lang/php5"; it told me the port had been deinstalled, but PHP still appears to be working correctly, i.e. "php -v" works. Any ideas how this port became detached from the ports system, and how I can get the ports system to recognise that it installed PHP?
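
    Not part of the original question, but a quick way to see what the old pkg_* tools actually have registered is to poke at the package database directly; a minimal sketch, assuming the classic pkg_tools layout under /var/db/pkg and the port origin lang/php5:

        # List registered packages whose names mention php
        pkg_info | grep -i php

        # Ask the package database which package owns the installed binary
        pkg_info -W /usr/local/bin/php

        # The registration itself lives in /var/db/pkg; a missing php5-* directory
        # here would explain why pkg_version skips PHP
        ls /var/db/pkg | grep -i php

        # Re-register by reinstalling from the port (forces a fresh record)
        cd /usr/ports/lang/php5 && make deinstall reinstall clean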

    Read the article

  • Natively boot VirtualBox image

    - by isync
    I am faced with a Windows hardware/software problem left over from another person, and it's on me to resolve. It's a mission-critical setup. The situation: I've got a physical server machine with:
    - Disk C:\ (one disk) containing a basic install of Windows Server 2008 R2 (formerly Win Vista Pro, now gone).
    - Disk D:\ (software RAID) containing a VirtualBox disk image of a configured Windows Server 2008 R2 running SQL Server R2, among others.
    What shall I do now? Migrate all the stuff from the configured VM to the basic but natively installed Windows Server 2008 R2 on C:\ (with the possibility of breaking stuff)? Or set up the machine to "natively boot" the VM with the help of bcdedit.exe (something I've read about but have never done, and I don't know whether it works, whether it hurts performance, or whether it is stable for production)? For me, being old skool, I am in the process of de-virtualising everything (option 1). But I'd be happy if someone suggests I am OK to go down the "natively boot" route.
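
    For what it's worth, the "natively boot" route relies on the native VHD boot feature of Windows 7 / Server 2008 R2, so the VirtualBox .vdi would first have to be converted to a VHD; whether the guest then boots cleanly on the physical hardware (storage drivers, HAL) is a separate question. A rough sketch, run from an elevated command prompt, with all paths and the new entry's GUID as placeholders:

        rem Convert the VirtualBox image to VHD format
        VBoxManage clonehd "D:\VMs\server2008r2.vdi" "D:\VHD\server2008r2.vhd" --format VHD

        rem Create a new boot entry based on the current one (prints the new GUID)
        bcdedit /copy {current} /d "Server 2008 R2 (VHD boot)"

        rem Point the new entry at the VHD and let Windows redetect the HAL
        bcdedit /set {GUID} device vhd=[D:]\VHD\server2008r2.vhd
        bcdedit /set {GUID} osdevice vhd=[D:]\VHD\server2008r2.vhd
        bcdedit /set {GUID} detecthal on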

    Read the article

  • Detect and monitor changes to MySQL database

    - by Wolfpack'08
    I use Drupal, and instead of having to use the formal interface to make small changes to a site all the time, I am wondering if there is a way to determine what modifications Drupal makes to the database, so that I can make them manually the next time. For example, if I use Drupal's interface to add a module to the site, I would like to be able to find out what database entries it changes or adds, so that if I want to add another module, I can add or change the files or database entries myself instead of having to go through Drupal. This way, I can have better control over the system as well as gain a greater understanding of how it works underneath. My environment is Windows XP, the latest stable MySQL, and phpMyAdmin.
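
    Not from the question itself, but one low-tech way to watch exactly what Drupal writes is MySQL's general query log; a minimal sketch, assuming a MySQL version recent enough to toggle general_log at runtime, root access, and a placeholder log path:

        # Send the general query log to a file
        mysql -u root -p -e "SET GLOBAL log_output = 'FILE';"
        mysql -u root -p -e "SET GLOBAL general_log_file = 'C:/temp/drupal-queries.log';"
        mysql -u root -p -e "SET GLOBAL general_log = 'ON';"

        # ...perform the action in Drupal's interface (e.g. enable a module)...

        # Turn logging back off and inspect the INSERT/UPDATE statements that were issued
        mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"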

    Read the article

  • How would I / could I obtain a reasonably comprehensive list of domain names?

    - by Simon
    I know that domain names are constantly changing, and I know there are a lot of them, but there is clearly a region of the domain name space which is stable. How would I go about getting a list, even a very big one? Such a thing must logically exist, even if it is in a distributed form, because the web's DNS servers resolve names to IP addresses. So in theory if I could poll all the DNS servers in the world at a moment in time I would have the complete list of mapped names. Is there a practical way of doing that? As an aside, does anyone have any good estimates of how many domain names exist at the moment?

    Read the article

  • NFSv3 Asynchronous Write Depends on Block Size?

    - by Joe Swanson
    I am trying to figure out if my NFSv3 deployment is performing SAFE asynchronous writes. I suspect that it is doing strictly synchronous writes, as I am getting poor performance in general. I used Wireshark to look at the 'stable' flag in write calls and to look for 'commit' calls. I noticed that, with especially large block sizes, writes appear to be performed asynchronously:
        dd if=/dev/zero of=/proj/re3/0/zero bs=2097152 count=512
    However, smaller block sizes appear to be performed strictly synchronously:
        dd if=/dev/zero of=/proj/re3/0/zero bs=8192 count=655360
    What gives? How does the client decide whether to tell the server to perform writes synchronously or asynchronously? Is there any way I can get smaller block sizes to be performed asynchronously?
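
    One thing worth checking, beyond what the post covers, is what the mount actually negotiated: the wsize caps the size of each WRITE RPC, and the Linux client often sends a flush that fits in a single RPC as a FILE_SYNC write (saving the COMMIT round trip) while switching to UNSTABLE writes plus COMMIT for larger streams. A quick sketch for a Linux client; the server and export names are placeholders:

        # Show the options each NFS mount really negotiated (look for wsize=...)
        nfsstat -m

        # Per-operation client counters; a COMMIT count stuck at zero suggests
        # every write is being sent FILE_SYNC
        nfsstat -c

        # Remount with an explicit, larger wsize (subject to server/kernel limits)
        # to see whether the behaviour changes
        mount -t nfs -o vers=3,rsize=1048576,wsize=1048576 server:/export /proj/re3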

    Read the article

  • Compiling the Linux kernel, how much space is needed?

    - by ant2009
    I have downloaded the newest stable Linux kernel, 2.6.33.2. I thought I would test this using VirtualBox, so I created a dynamically sized hard disk of 4 GB and installed CentOS 5.3 with just the minimum packages. I set up make menuconfig with just the default settings. After that I ran make and got the following error:
        net/bluetooth/hci_sysfs.o: final close failed: No space left on device
        make[2]: *** [net/bluetooth/hci_sysfs.o] Error 1
        make[1]: *** [net/bluetooth] Error 2
        make: *** [net] Error 2
    The amount of space I have left is:
        # df -h
        Filesystem                      Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00 3.3G  3.3G     0 100% /
        /dev/hda1                        99M   12M   82M  13% /boot
        tmpfs                           125M     0  125M   0% /dev/shm
    My virtual disk's size is 4 GB, but the actual size is 3.5 GB:
        $ ls -hl
        total 7.5G
        -rw-------. 1 root root 3.5G 2010-04-13 14:08 LFS.vdi
    How much space do I need to compile and install a Linux kernel? Are there any guidelines to follow when doing this? This is my first time, so I'm just experimenting.
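
    For reference, and not part of the original post: a 2.6.x tree built with a default module set can easily consume a few gigabytes of source plus object files, so a 3.3 GB root filesystem fills up quickly. A rough sketch of the usual checks, assuming the source lives under /usr/src and a VirtualBox version that supports resizing (the partition/LVM/filesystem inside the guest still has to be grown separately afterwards):

        # See how much space the tree plus build objects actually take
        du -sh /usr/src/linux-2.6.33.2

        # Reclaim the objects from the failed build before retrying
        cd /usr/src/linux-2.6.33.2 && make clean

        # On the host: grow the virtual disk itself (size in MB, VirtualBox 4.0+)
        VBoxManage modifyhd LFS.vdi --resize 16384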

    Read the article

  • Random flickering on 2.6.32 kernels after suspend

    - by whitequark
    I have Xubuntu 9.10 installed on a Toshiba NB200 netbook with an Intel video card handled by the i915 driver. With the stable, recommended 2.6.31 kernel, everything but WiFi works fine: the Atheros ath9k WiFi shows too little signal power and sometimes loses packets in 'bursts'. With 2.6.32-* (I tested -9 to -11 from Ubuntu's kernel unstable PPA) everything works fine until the first suspend (echo mem >/sys/power/state). After that, random unidentified full-screen 'one-frame' flickering begins in Xorg, and after a couple of minutes everything eventually hangs, showing a solid grey screen (not white; it is like the default button colour). No X keys work: Ctrl+Alt+Fn don't, and blind typing in the console doesn't either. Magic SysRq still works, and I was able to reboot. Also, there is one out-of-tree kernel module, called omnibook, that is required to turn on WiFi and Bluetooth. Any advice?

    Read the article

  • Best 2x 4GB RAM for a Thinkpad X200s

    - by Tommy Jakobsen
    Hi, I need 2x 4GB RAM, a total of 8GB, in my upcoming Thinkpad X200s laptop. Before buying, I would like your advice on which modules to choose. I've been looking at Corsair's Value Select (P/N: CM3X4GSD1066) RAM, because in my experience Corsair produces good RAM modules. However, Corsair lists 7 clock cycles for its modules, while Lenovo lists 5 clock cycles. What do you think? Are the Lenovo modules the best choice? Are they the fastest/most stable, or are the Corsair modules? Or modules from a third vendor? Thanks in advance!

    Read the article

  • Bookmarks folder on two machines not syncing

    - by AsheeshR
    I have Chromium installed on an Ubuntu 12.04 machine and Chrome on a Windows 7 machine, both updated to the newest stable release. I am logged in on both machines with the same Google account. I created a folder A on my Windows machine and bookmarked some pages. A few hours later, and before that folder had appeared, I created a folder named A on the Ubuntu machine and continued adding links to it. Now the two folders are not syncing, and each contains only the links added locally on that machine. Is there any way to resolve this?

    Read the article

  • Flushing disk cache for performance benchmarks?

    - by Ido Hadanny
    I'm doing some performance benchmarking of a heavy SQL script running on PostgreSQL 8.4 on an Ubuntu box (natty). I'm experiencing pretty unstable performance, even though I'm supposed to be the only one running on the machine (the same script on the exact same data might run in 20 minutes and then in 40 minutes for no specific reason). So, remembering my distant DBA training, I decided I should flush the Postgres cache using sudo /etc/init.d/postgresql restart, but it's still shaky! My question: maybe I'm missing some caches in my disk/OS? I'm using a NetApp appliance as my storage. Am I on the right track? Do I even want to make sure I get repeatable performance before I start tuning?
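
    A note beyond the original post: restarting PostgreSQL only empties shared_buffers; the Linux page cache (and the NetApp's own cache, which cannot be flushed from the host) keeps serving blocks between runs. A minimal sketch for dropping the OS cache between benchmark runs, assuming root access:

        # Stop Postgres so nothing touches the data files mid-flush
        sudo /etc/init.d/postgresql stop

        # Write out dirty pages, then drop page cache, dentries and inodes
        sync
        echo 3 | sudo tee /proc/sys/vm/drop_caches

        sudo /etc/init.d/postgresql start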

    Read the article

  • Apache2 BufferedLogs On - anybody using it?

    - by Qiqi
    Greetings, I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues. The feature is marked as experimental, but has been for many years now, so I guess it's pretty stable. I am running some servers with constrained disk I/O capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run ;-) I get anywhere from several to several hundred requests per second, so to my mind there is really no need to write to the log after each request, because honestly I don't think my filesystem is the best handler for lots of unnecessary writes (it's OCFS2, shared among several DomUs under Xen).

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.
    What we need:
    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client to account for network outages
    - cross-platform access
    - client-side access from certainly Windows (probably XP upwards), possibly Linux
    - a backend that integrates with AD/LDAP (permission management: user/group management, ...)
    - should work reasonably well over slow WAN links
    Another problem is that we don't really know all possible use cases here, i.e. whether people need concurrent access to shared files or will only be accessing their own files, so a possible solution needs to account for concurrent access and for how conflict management would look from a user's point of view. This two-year-old blog post sums up the impression that I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but there is none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.
    What we have tried:
    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it:
    - it's a real pain to set up
    - there are no official RHEL/CentOS packages
    - the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go
    - Windows support (including the required Kerberos packages) is mystical: the current client for the 1.6 branch does not run on Windows 8, and the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7.
    Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production.
    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages: Samba integrates with ADs (it's a pain, but can be done), and it solves the problem of remotely accessing the storage from Windows. However, it introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need an HA Samba setup on top of it to maintain HA, which probably adds a lot of additional complexity; I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers. Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected. And it's not automatic: we cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all. (A scripted sketch of this idea follows below.)
    Samba + "Offline Files". After that we became a little desperate and gave Windows "offline files" a chance. We figured that having something built into the OS would reduce administrative effort, would help with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. Thirty minutes of copying files around and unplugging network cables/disabling network interfaces left us with undeletable files on the server (!) and conflicts that should not even be conflicts, all of it silent: there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it. In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops; and in Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.
    Summary: unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?
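
    As an aside to the Samba + Unison idea above, the manual part can at least be scripted; a rough sketch of what an unattended sync against a CIFS share might look like from a Linux client (share name, paths and the particular Unison flags are assumptions, not a tested recipe):

        # Mount the Samba share; a credentials file keeps the password off the command line
        sudo mount -t cifs //fileserver/projects /mnt/projects -o credentials=/etc/samba/user.cred,iocharset=utf8

        # Unattended two-way sync: no prompts, prefer the newer copy on conflicts
        unison /home/user/projects /mnt/projects -batch -auto -times -prefer newer -logfile /var/log/unison-projects.log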

    Read the article

  • Unable to receive emails in my mail client

    - by Karthik Malla
    Hello, I set up my own mail server and client; my domain name is www.softmail.me. From this mail client I am able to send emails to any email provider, but I cannot receive any emails back. I host the client at http://beta.softmail.me. Do I need to apply settings for the sub-domain, or are the domain settings enough? Kindly check my DNS settings and reply. My DNS details are:
        A (Host):
            host = @       points = 65.75.241.26
            host = beta    points = 65.75.241.26
            host = accs    points = 65.75.241.26
            host = mail    points = 65.75.241.26
            host = stable  points = 65.75.241.26
        CNAME (Alias):
            host = imap    points = mail
            host = pop     points = mail
            host = smtp    points = mail
            host = www     points = @
        MX (Mail Exchange):
            priority = 10, host = mail, points = @
    Please verify the above settings and tell me why I am unable to receive emails back from other email providers.
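
    A few checks that usually narrow this kind of problem down (not part of the original post): the MX host has to resolve publicly, and something has to be listening for SMTP on port 25 at that address. A quick sketch, run from any outside machine:

        # Which host is supposed to receive mail for the domain?
        dig MX softmail.me +short

        # Does that host resolve to the server's public IP?
        dig A mail.softmail.me +short

        # Is anything answering SMTP on port 25 there?
        nc -vz mail.softmail.me 25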

    Read the article

  • ATI video cards - unable to use entire monitor (1080p)

    - by Walter White
    Hi all, I have a Dell S2409W, a 24" 1080p monitor. With nVidia, I would plug in the monitor and, voila, it automatically knew it was 1080p (1920x1080). I have both a Windows laptop and an Ubuntu laptop with ATI cards; neither is capable of using the full screen, even though the monitor reports the input is 1080p. I am connecting the monitors via HDMI; is there a 'special' setting I am missing to make this work? Otherwise, I like the performance of my ATI video cards, and the drivers seem to be stable and reliable. Thanks, Walter

    Read the article

  • VirtualBox in production?

    - by MrG
    I'm planning to move a service which is currently powered by Debian into a VirtualBox. That would allow us to easily port it, e.g. to a faster machine, if required. The setup would be:
        Debian host
        > VirtualBox #1 > Debian instance #1 running Apache & the application
        > VirtualBox #2 > Debian instance #2 containing the database
    Do you have any experience with a production setup based on VirtualBox? Is it stable and fast enough? Many thanks!
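
    If it helps, this kind of setup is usually run headless on the host, entirely from the command line; a minimal sketch of day-to-day control of the two guests (the VM names are placeholders):

        # Start both guests without a GUI
        VBoxManage startvm "web-frontend" --type headless
        VBoxManage startvm "database" --type headless

        # Cleanly shut a guest down (sends an ACPI power-button event)
        VBoxManage controlvm "database" acpipowerbutton

        # Or save its state for a fast restart, e.g. before host maintenance
        VBoxManage controlvm "web-frontend" savestate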

    Read the article

  • Overclocking a nVIDIA GTX 660M GPU?

    - by heron1000
    I have an MSI GE60 0ND laptop with a GTX 660M GPU. When I play games like Minecraft or Portal 2, the core clock is stable at 835 MHz. Recently I tried to overclock it using MSI Afterburner, but it wouldn't let me change the voltages or the clock speed no matter what I tried. Various Google searches yielded solutions, none of which worked. Is there any way I can overclock this GPU? Further info: I have the nVidia 310.70 drivers and Windows 8.

    Read the article

  • Missing APR on apache2 ./configure

    - by arby
    I want to build the latest stable version of Apache2. I downloaded the source and put APR & APR-util in the srclib folder, then changed directories to ./srclib/apr and ran:
        ./configure --prefix=/usr/local/apr
        sudo make
        sudo make install
    This seemed to install APR OK, but when I run ./configure from the apr-util directory, I receive the error:
        configure: error: APR could not be located. Please use the --with-apr option.
    Using ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr, the error becomes:
        checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.
    Why can't it find APR?
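
    For reference (not in the original post): apr-util's --with-apr accepts an APR build tree, an install prefix, or the apr-1-config script itself, so pointing it at the script explicitly usually resolves this. A sketch assuming the prefixes used above:

        # Build and install APR first
        cd srclib/apr
        ./configure --prefix=/usr/local/apr && make && sudo make install

        # Point apr-util at the installed apr-1-config script rather than the bare prefix
        cd ../apr-util
        ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr/bin/apr-1-config && make && sudo make install

        # Then configure httpd against both; alternatively, httpd's own
        # ./configure --with-included-apr builds the copies dropped into srclib/
        cd ../..
        ./configure --with-apr=/usr/local/apr --with-apr-util=/usr/local/apr-util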

    Read the article

  • Experiences using VLC for video-on-demand streaming? (VLM)

    - by StackedCrooked
    I'm considering my options for implementing a VOD service. Until recently my choices seemed to be either Wowza or Darwin, but now I discovered VLM, which looks really cool. I am going to stream MPEG4 H.264 video with AAC audio. I'm probably going to use the RTSP protocol, but I'm willing to use HTTP as well (after reading this article). Can anyone comment on his or her experiences with VLM? How does it compare to Darwin or Wowza? Is it stable and worthy of production use? Are there any limitations or performance problems?
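
    By way of illustration (not from the post itself): a VLM VOD setup boils down to a small VLM config plus a VLC instance exposing RTSP. A minimal sketch, with the media path, telnet password and config location as placeholders; the exact RTSP host/port options differ between VLC versions, so they are left out here:

        # Create a minimal VLM configuration with one VOD entry
        printf 'new demo vod enabled\nsetup demo input "/srv/media/demo.mp4"\n' > /etc/vlc/vod.vlm

        # Run VLC headless with the telnet control interface and the VLM config loaded
        vlc -I telnet --telnet-password secret --vlm-conf /etc/vlc/vod.vlm

        # Clients then request rtsp://<server>:<rtsp-port>/demo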

    Read the article

  • Commercial version of Freenet6

    - by grnbeagle
    I've been using Freenet6 from gogoNET to make my mobile device publicly accessible via IPv6. It works quite well, except that the service is not very stable, since it is for non-commercial usage only and the servers are hosted for free by different operators around the world. Apparently gogoNET sells hardware called gogoSERVER, which allows one to build a service similar to Freenet6. I've inquired with their sales team, but they were unable to tell me which companies have a commercial, production-quality implementation of Freenet6. Specifically, I'm looking for the following features in an IPv6 service provider:
    - Client-based IPv6 connectivity for mobile devices: the gogoCLIENT (gw6c) is ideal for mobile devices, since it allows a device to go online regardless of its location.
    - API for account maintenance, so that we can create device accounts from our software.
    - Static IPv6 address (or maybe I mean IPv4 address): by this I mean that, just like the gogoNET6 service (username.broker.freenet6.net), we want to provide our users with a permanent URL for their device.
    Any info on commercial IPv6 service providers utilizing gogoSERVER is appreciated. Thanks.

    Read the article

  • What is a good custom MAC address? [closed]

    - by rausch
    My new notebook had been dropping the WiFi connection infrequently. The reason was that my PS3 had the same MAC address. I changed the MAC address of my notebook and the WiFi is now stable. At first I just reduced the address's last block by 1, which happened to be the MAC address of another device. I reduced it again, and for now it's fine. In order to avoid conflicts in the future, is there an address range that is safe to use for custom/non-vendor MAC addresses?
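
    For reference, beyond the question itself: MAC addresses with the "locally administered" bit set, i.e. a second hex digit of 2, 6, A or E in the first octet (02:..., 06:..., 0A:..., 0E:... and so on), are reserved for exactly this kind of manual assignment and will not collide with any vendor-assigned OUI. A quick sketch of setting one on Linux; the interface name and the address itself are just examples:

        # 02:xx:xx:xx:xx:xx = unicast + locally administered
        sudo ip link set dev wlan0 down
        sudo ip link set dev wlan0 address 02:1a:2b:3c:4d:5e
        sudo ip link set dev wlan0 up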

    Read the article

  • OpenAFS on Fedora/CentOS

    - by Michael Pliskin
    I am trying to see if OpenAFS fits my needs as a distributed filesystem and am a bit stuck. There are docs, but they're all quite hard to understand, so I'm asking for some expert advice here. My questions:
    - Which version should I install? I need Windows client support, so I need 1.5, right? But it is not stable... or is it? And I don't see any pre-built RPMs for it, so compile from source?
    - I tried to compile it and that worked, but it created a non-"mp" kernel module while my kernel needs an MP one; how do I work around that?
    - Do I really need a fresh new partition to start with, or can I re-use an existing one and just make it available via AFS?
    - Any nice HOWTOs around?

    Read the article

  • MediaPortal crash when scanning my TV Channels

    - by Martin Ongtangco
    Hello, I tried installing MediaPortal (http://www.team-mediaportal.com) on my old system. I installed and tried both the beta and stable releases. Here are the old system's specs:
    - Windows XP SP3
    - 512 MB DDR
    - 80 GB IDE
    - K-World PVR-TV7131 analog TV tuner
    - Inno3D FX5500 128 MB 64-bit
    When scanning through the TV-Server, it crashes. I don't know if the incompatibility comes from the hardware or the OS, but my TV tuner's own media center app works well, so for now that's what I'm using. Anyone here familiar with MediaPortal? Thanks!

    Read the article

  • Transferring existing files from ext3 to ZFS (on FreeBSD)

    - by peppergrower
    I use an old machine as a file server, for backups, and as a testbed for development. I currently have Debian installed, but I'm very interested in FreeBSD because of ZFS: I really, really like its file integrity features. Before I switch, however, I wanted to ask: what's the best way to migrate my ~400GB of files from the current filesystem (ext3) to ZFS? My number-one requirement is that the migration be absolutely reliable: I don't want to lose any data. (I'll be backing it up before doing this anyway, but still.) My secondary goal is speed: I'd rather not have this take overnight if it doesn't have to. Recommendations? Is FUSE for FreeBSD stable enough to use? What about FreeBSD's native read support for ext3? NFS, maybe? How have you done this?
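
    One hedged way to do this (not from the post) is to push the data over the network from the running Debian system into the new ZFS dataset and let an rsync checksum pass do the verification, which sidesteps the question of ext3 support on FreeBSD entirely. A sketch with the host name, pool and paths as placeholders; it assumes rsync and sshd are available on both ends:

        # On the FreeBSD box: create a dataset to receive the data (pool "tank" assumed)
        zfs create tank/backup

        # On the Debian box: copy everything, preserving hard links, perms and times
        rsync -aHv /srv/data/ freebsd-host:/tank/backup/

        # Verification pass: re-compare every file by checksum; any itemized output is a mismatch
        rsync -aHc --dry-run --itemize-changes /srv/data/ freebsd-host:/tank/backup/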

    Read the article

  • Installing gitlab on Debian 6.0.5

    - by helmus
    I am using the following directions in an attempt to install GitLab on Debian 6.0.5: https://github.com/gitlabhq/gitlabhq/blob/stable/doc/installation.md
    I am getting an error when I run the following command:
        sudo -u gitlab bundle exec rake gitlab:app:setup RAILS_ENV=production
    The error output is:
        WARNING: #<ArgumentError: Illformed requirement ["#<Syck::DefaultKey:0x00000004b52198> 1.1.4"]>
        # -*- encoding: utf-8 -*-
        Gem::Specification.new do |s|
          s.name = %q{carrierwave}
          s.version = "0.6.2"
          s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
          s.authors = ["Jonas Nicklas"]
          ....more error....
          s.add_dependency(%q<mini_magick>, [">= 0"])
          s.add_dependency(%q<rmagick>, [">= 0"])
          end
        end
        WARNING: Invalid .gemspec format in '/usr/local/lib/ruby/gems/1.9.1/specifications/carrierwave-0.6.2.gemspec'
        Could not locate Gemfile
    Some pointers to what could cause this would be much appreciated; I have only a little experience with RoR, and it seems to be related to that.
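
    Two guesses that are not from the post but commonly apply here: bundler has to be run from the directory containing GitLab's Gemfile (the "Could not locate Gemfile" line suggests it was not), and the "Illformed requirement ... Syck::DefaultKey" warnings come from gemspecs that an older RubyGems serialized with Syck; a frequently suggested workaround is to rewrite the broken marker in the installed gemspec files. A hedged sketch, with the GitLab checkout path assumed from the installation guide's default layout and the gem path taken from the error message (back the .gemspec files up first):

        # Run bundler/rake from the GitLab checkout, where the Gemfile lives
        cd /home/gitlab/gitlab
        sudo -u gitlab bundle exec rake gitlab:app:setup RAILS_ENV=production

        # If the Syck::DefaultKey warnings persist, patch the affected gemspecs:
        # this replaces the broken serialized key with a plain '=' requirement
        sudo sed -i 's/#<Syck::DefaultKey:0x[0-9a-f]*>/=/' /usr/local/lib/ruby/gems/1.9.1/specifications/*.gemspec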

    Read the article
