Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would need 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (around 6-8 GB for XP, and I believe closer to 10-15 for Win7). The guests will act as a test ground for a new product, a network management application, so they will idle most of the time once loaded, but if I give them a task to complete, they should perform reasonably well. Now, from what I have learned, CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking, but it doesn't have to be the sum of all guests, because of overcommitment. That leads me to IO, which, as it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask:

    1. How much memory could I save by overcommitment, and how does it affect performance?
    2. Is a 6-core CPU enough to run the system described above?
    3. Would it be possible to run the entire server off two (or even one) SSD drives to host the system virtual disks, with a few additional HDDs (2-3) in RAID 0 as secondary storage?
    4. I read somewhere that ESXi allows something like a "master image", essentially a virtual machine that is "deployed" many times, so that disk space is saved by storing only each guest's differences instead of copying whole virtual disks around. Is this true, and how can it help me?
    5. Are there any other things I need to take into consideration when building this off-the-shelf solution?

    I should probably mention here that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, that's not so important to me. Thanks, B.
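    For a rough sense of scale, here is a back-of-the-envelope sizing sketch in Python. The guest counts and image sizes are the question's own figures; the overcommit factor and per-clone delta are assumptions to play with, not ESXi guarantees:

        # Rough capacity estimate for the VM farm described above. The
        # overcommit factor and per-guest delta are assumptions, not
        # ESXi guarantees.
        guests = 20                  # planned guest count (could grow to 50)
        ram_per_guest_gb = 2         # worst-case per-guest allocation
        disk_w7_gb = 15              # full Win7 guest image

        overcommit_factor = 0.6      # assume page sharing reclaims ~40% on idle clones
        ram_allocated = guests * ram_per_guest_gb
        ram_physical = ram_allocated * overcommit_factor

        delta_per_guest_gb = 3       # assumed per-clone divergence from the master
        disk_full_clones = guests * disk_w7_gb
        disk_linked_clones = disk_w7_gb + guests * delta_per_guest_gb

        print("RAM: %d GB allocated, ~%.0f GB physical with page sharing"
              % (ram_allocated, ram_physical))
        print("Disk: %d GB as full clones vs ~%d GB as linked clones"
              % (disk_full_clones, disk_linked_clones))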

    Read the article

  • Apache: Isn't chmod 755 enough to set up a symlink or alias with Apache httpd on Mac OS 10.5?

    - by eed3si9n
    On my Mac OS 10.5 machine, I would like to set up a subfolder of ~/Documents, like ~/Documents/foo/html, to be served as http://localhost/foo. The first thing I thought of was using an Alias, as follows:

        Alias /foo /Users/someone/Documents/foo/html
        <Directory "/Users/someone/Documents/foo/html">
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
        </Directory>

    This got me 403 Forbidden. In the error_log I got:

        [error] [client ::1] (13)Permission denied: access to /foo denied

    The subfolder in question has chmod 755 access. I've tried specifying links like http://localhost/foo/test.php, but that didn't work either. Next, I tried the symlink route: I went into /Library/WebServer/Documents and made a symlink to ~/Documents/foo/html. The document root has:

        Options Indexes FollowSymLinks MultiViews

    This still got me 403 Forbidden: Symbolic link not allowed or link target not accessible: /Library/WebServer/Documents/foo. What else do I need to set this up?

    Solution:

        $ chmod 755 ~/Documents

    In general, the folder to be shared and all of its ancestor folders need to be viewable by the www service user.
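    The closing rule is easy to verify mechanically: every ancestor of the shared folder needs the search (execute) bit for the web server's user. A minimal sketch, assuming the example path from the question, that flags any ancestor which would cause the 403:

        import os
        import stat

        # Walk from the aliased folder up to /, flagging any directory
        # that denies "other" users the execute (search) bit; any such
        # ancestor is enough to produce Apache's 403 on a home-directory
        # alias. Point 'path' at the folder being served.
        path = os.path.expanduser("~/Documents/foo/html")   # example path
        while True:
            mode = os.stat(path).st_mode
            flag = "ok " if mode & stat.S_IXOTH else "FIX"
            print(flag, oct(stat.S_IMODE(mode)), path)
            parent = os.path.dirname(path)
            if parent == path:
                break
            path = parent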

    Read the article

  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped.

    I have 16 GB of RAM installed in an MSI 970A-G46, which is spec'd for up to 32 GB of RAM. The BIOS recognizes that I have 16 GB installed, and Resource Monitor also shows the whole 16 GB, only it shows 8 GB as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) on my boot drive is the same one that could access the full 16 GB on my previous motherboard (MSI 870A-G54). I've updated my BIOS using the MSI Live Update tool and restarted the machine with no effect, and I cannot locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect.

    These are my system's basic specifications:

        OS: Windows 7 Home Premium (64-bit)
        Motherboard: MSI 970A-G46
        CPU: AMD FX-8150
        Graphics Card: XFX Radeon HD 6870
        Boot Drive: OCZ Agility 3
        Storage Drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
        PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3

    Read the article

  • HP/IBM alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP).

    Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reticent to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy in power, networking, etc. I'm also very aware that you "get what you pay for". Can anyone therefore recommend equivalents from the big boys, HP or IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether any one of them is all the hardware I will need. Some options appear to need separate controllers, disk enclosures, etc.

    Read the article

  • Dealing with upgrade of libevent on Amazon AWS

    - by Dreen
    I am building an application (in Python) on Amazon EC2 that has the following dependency chain: gevent-websocket ---> gevent ---> libevent. The last one (libevent) got upgraded on Sunday and my server is now generating this error:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
          from gevent import core
        ImportError: libevent-1.4.so.2: cannot open shared object file: No such file or directory

    Not wanting to spend much time on the issue, I tried to mitigate it by creating a symlink to the always-recent version:

        $ sudo ln -s /usr/lib64/libevent.so /usr/lib64/libevent-1.4.so.2

    But it didn't quite work:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
          from gevent import core
        ImportError: /usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/core.so: undefined symbol: current_base

    I am a bit stumped as to how to proceed. Should I create more symlinks? To what? Or is there a better way to solve this problem? PS. For the record, I am using the Amazon AMI.
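    The undefined-symbol error after symlinking is the real clue: the upgraded libevent has a different ABI than the 1.4 one gevent's core.so was compiled against, so a symlink cannot bridge it (rebuilding gevent against the installed libevent can). A quick sketch to see what the dynamic linker actually finds:

        import ctypes
        import ctypes.util

        # Ask the system linker which libevent it would load now.
        print("linker resolves 'event' to:", ctypes.util.find_library("event"))

        # Try the exact soname gevent's core.so was linked against; after
        # the upgrade this typically fails, and a symlink to the new
        # library only trades the error for an ABI mismatch.
        try:
            ctypes.CDLL("libevent-1.4.so.2")
            print("libevent 1.4 ABI still present")
        except OSError as exc:
            print("libevent 1.4 ABI missing:", exc)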

    Read the article

  • Ubuntu's average load never below "0.00 0.01 0.05"

    - by Karma Fusebox
    I have several Ubuntu 12.04 VMs running on an Ubuntu 12.04 KVM host. Those virtual machines that are totally idle with no services (except syslog and the other "small" standard stuff of a fresh installation) show a constant load of "0.00 0.01 0.05" in top/htop as the 1/5/15-minute averages. When "real" applications are running, the load averages behave perfectly normally, but they never fall below the values mentioned. While this doesn't affect performance at all and could easily be ignored, it screws up the monitoring graphs in a very annoying way. (Notice how load15 behaves nicely at 0.05 for a short time in the right half of the pic.)

    Unfortunately I don't know what diagnostic outputs might be helpful for you, so here's some default stuff:

        # top
        top - 16:31:01 up 1:05, 1 user, load average: 0.00, 0.01, 0.05
        Tasks:  62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   1019464k total, 73452k used, 946012k free, 6140k buffers
        Swap:  0k total, 0k used, 0k free, 22504k cached

        # free -m
                     total   used   free   shared   buffers   cached
        Mem:           995     72    923        0         6       21
        -/+ buffers/cache:     43    951
        Swap:            0      0      0

        # iostat -x /dev/vda
        Linux 3.2.0-32-virtual (vm3)  11/15/2012  _x86_64_  (2 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   0.25   0.00     0.65     0.20    0.24  98.66
        Device: rrqm/s wrqm/s  r/s  w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
        vda       0.14   0.12 0.51 0.22  6.74  1.46    22.50     0.02 23.26   20.64   29.30  7.63  0.56

    Need something else? Has anyone ever seen this behavior? Might this be a bug in KVM/Ubuntu/kernel 3.x in the end? Thanks a lot!
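    If it helps with graphing, the same 1/5/15-minute figures top reports can be sampled directly; on Linux they come from the kernel's /proc/loadavg. A minimal sketch to confirm the floor over time:

        import os
        import time

        # Sample the same 1/5/15-minute averages top shows, straight from
        # the kernel, to confirm the 0.00/0.01/0.05 floor.
        for _ in range(5):
            one, five, fifteen = os.getloadavg()
            print("load: %.2f %.2f %.2f" % (one, five, fifteen))
            time.sleep(2)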

    Read the article

  • Software RAID 0 performance with six disks

    - by user134880
    I have some problems with disk performance. I have 6 x WD 500 GB RE4 disks; each disk gives 135 MB/s throughput. All measurements were made with hdparm with the options "-tT" (I know that it is just a synthetic test, but I need some starting point for measurements). The controller is a Sil3124 with 4 ports on PCI Express 1x. So:

        RAID 0 on the controller with 2 disks gives 200 MB/s - ok, PCIe limit.
        RAID 0 on the motherboard with 2 disks gives 270 MB/s - niceeee :)
        RAID 0 on the controller with 4 disks gives 200 MB/s - ok, PCIe limit.
        RAID 0 on the controller with 4 disks + 1 disk on the motherboard = 340 MB/s ... :(
        RAID 0 on the controller with 4 disks + 2 disks on the motherboard = 300 MB/s ... why?

    Any ideas? Maybe I need more CPU power? Right now it's a Pentium D dual-core 2.8 GHz with 4 GB RAM. It is a dedicated box for storage, with no other activity.
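    One sanity check worth doing before blaming the CPU: the Sil3124 card hangs off a single PCIe 1.x lane, and the plateau matches that bus, not the disks. A back-of-the-envelope sketch (the 200 MB/s usable figure is a rule of thumb, not a spec sheet value):

        # Why the Sil3124 plateaus: it sits behind one PCIe 1.x lane.
        # 250 MB/s is the lane's data rate; ~200 MB/s usable is a rule
        # of thumb after protocol overhead, matching the measurements.
        disk_mb_s = 135              # per-disk hdparm result
        pcie1_usable_mb_s = 200

        for disks in (2, 4):
            ideal = disks * disk_mb_s
            print("%d disks: ideal %d MB/s, card-limited to ~%d MB/s"
                  % (disks, ideal, min(ideal, pcie1_usable_mb_s)))
        # Adding motherboard ports helps only until the next shared
        # ceiling (chipset interconnect, memory bus) is hit, which fits
        # the 340 and 300 MB/s results.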

    Read the article

  • Is there a limit to how many sites can be hosted on a single IP address when using HTTP Host Headers on Windows 2008?

    - by Kev
    For reasons that are lost in the mists of time, our older Windows (2000, 2003) servers have been configured with an "Administrative" IP address and three further "Hosting" IP addresses. There are also additional IPs for sites with SSL certificates. The "Administrative" IP address is where all our internal provisioning, monitoring and other such apps are bound. We lock this down and don't permit access to it from the outside world (other than over our VPN). The three "Hosting" IP addresses are used for IIS website hosting (in conjunction with host headers). Historically, new site IP address allocations have been rotated through these three IP addresses; I'm not really sure why.

    I'm building a new batch of servers and I'm considering having just a single hosting IP address. Our servers can host up to 1200 sites on a single machine. Is there a technical limit to the number of IIS sites that can bind to a single IP address? Our Linux platform seems to do just fine with a single shared IP plus host headers. I initially thought this might be an SEO thing, but given that IPv4 address space conservation is paramount, I hardly think Google or other search engines could reasonably penalise site rankings just because hundreds of sites hang off the same IP.
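    For what it's worth, name-based hosting has no per-IP site limit in the protocol itself: every site shares one TCP endpoint and is selected purely by the HTTP Host header. A minimal sketch of the mechanism (the IP and host names below are placeholders, not real sites):

        import http.client

        # Two requests to the same IP and port; only the Host header
        # differs. This header is the entire basis of name-based virtual
        # hosting, in IIS as in Apache or nginx.
        for site in ("site-a.example.com", "site-b.example.com"):
            try:
                conn = http.client.HTTPConnection("192.0.2.10", 80, timeout=5)
                conn.request("GET", "/", headers={"Host": site})
                print(site, conn.getresponse().status)
                conn.close()
            except OSError as exc:
                print(site, "request failed:", exc)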

    Read the article

  • How to safely send newsletters on VPS (SMTP) w/ non-hosted domain as "From" email?

    - by Andy M
    Greetings, I'm trying to understand the safest way to use SMTP. I'm considering purchasing a second virtual server mainly for email sending, on which I will set up PHPlist (a free open-source mailing program), so we have the freedom to send unlimited newsletters (well, 10,000 per day at least, which requires a VPS rather than shared hosting).

    Here's my current setup with a paid mass-mailing service: I have a website - let's call it MyHostedDomain.org. I send newsletters with the From / Reply-To address as [email protected], which isn't hosted by me, but I have access to the email account.

    Can I more or less safely set this up with an SMTP server on a VPS? That is, send messages using [email protected] as the visible address, but have it all go through my VPS SMTP? I cannot authenticate it, right? Is this too risky a practice? Is my only hope to use an address with a domain on the VPS, i.e. [email protected]? I already have a reverse DNS record for the domain hosted on my current VPS. I also see other suggestions, like Sender ID and DKIM. But with all these things combined, will this still work? I don't want to get blacklisted, but the good thing is this is a somewhat private list, and users opt in to subscribe. So it's a self-made audience. (If it makes you feel better, this is related to a non-profit activity, not some marketing scam... it's for a good cause, I assure you!)
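    Mechanically, nothing stops an SMTP server from sending with a foreign From domain; the receiving side decides deliverability by checking SPF/DKIM for that domain against the sending IP. A minimal sketch of the sending side, assuming a relay running on the VPS itself; both addresses are hypothetical stand-ins, with news@MyHostedDomain.org playing the externally hosted From address:

        import smtplib
        from email.mime.text import MIMEText

        # The send itself always works; inbox placement depends on
        # MyHostedDomain.org's SPF record listing this VPS, plus DKIM
        # signing and matching reverse DNS. Addresses are hypothetical.
        msg = MIMEText("newsletter body")
        msg["Subject"] = "Newsletter"
        msg["From"] = "news@MyHostedDomain.org"
        msg["To"] = "subscriber@example.org"

        server = smtplib.SMTP("localhost")   # assumes an SMTP daemon on the VPS
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
        server.quit()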

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

        1. A PC running Windows XP with an internal WiFi adapter.
        2. A base station with both WiFi and a wireless modem (WiModem).
        3. A mobile device with both WiFi and WiModem.

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

        PC
          WiFi 1: 192.168.2.10/24
          WiFi 2: 192.168.3.10/24
          Default route: 192.168.2.1

        Base Station
          WiFi 1: 192.168.2.1/24
          WiFi 2: 192.168.3.1/24
          WiModem: 192.168.4.1/24

        Mobile
          WiFi: 192.168.3.20/24
          WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and PC, as the manual setup is problematic once you have multiple mobiles and PCs. This means that the PC can only have one IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?
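    The reason the manual two-subnet trick works is ordinary longest-prefix routing on the PC: the mobile's WiFi subnet is matched by a direct route, and everything else falls through to the default. A sketch of that selection logic using the addresses above:

        import ipaddress

        # The PC's effective routing table from the manual setup above.
        routes = [
            (ipaddress.ip_network("192.168.2.0/24"), "WiFi 1, direct"),
            (ipaddress.ip_network("192.168.3.0/24"), "WiFi 2, direct to mobile"),
            (ipaddress.ip_network("0.0.0.0/0"), "default via 192.168.2.1"),
        ]

        def next_hop(dest):
            """Longest-prefix match, which is what the kernel does."""
            addr = ipaddress.ip_address(dest)
            best = max((net for net, _ in routes if addr in net),
                       key=lambda net: net.prefixlen)
            return dict(routes)[best]

        print(next_hop("192.168.3.20"))   # mobile via WiFi: direct, no rebroadcast
        print(next_hop("192.168.4.20"))   # mobile via WiModem: through base station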

    Read the article

  • Ubuntu+Win7--disk error press any key to restart

    - by Siddharth
    Apparently, none of the solutions in any other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        (/dev/sda1) (fat32)   900 MiB    --- (MBR, I suppose)
        (/dev/sda2) (ntfs)    70 GiB     --- (Windows 7)
        (/dev/sda3) (ntfs)    314.88 GiB --- (personal file storage)
        (/dev/sda4) (ext4)    80 GiB     --- (Ubuntu 13.04)
        (unallocated)         1.31 MiB

    So, after moving (cut-paste) everything (for backup) from the fat32 partition using Win7, I booted into Ubuntu and copied the remaining 3 files (hidden in the Win7 file explorer): bootmgr, bootsect.bak, and one more which I do not remember. TERRIBLE MISTAKE. After this I again booted into Windows and deleted the ext4 partition, formatted it to ntfs, and shut down the PC. Then I put in a Win7 bootable USB and, using the command prompt, ran bootrec /fixmbr and bootrec /fixboot. Restarting showed me GRUB; choosing Windows 7 showed me "Disk Error. Press any key to restart."

    I also installed a fresh Win7 installation on the 80 GiB partition, expecting a Windows legacy bootloader with two Win7 options, but that did not work. Then I used an Ubuntu live USB to put things back to the present configuration (above), since all methods to restore the MBR failed. I copied back the fat32 partition's backup files but couldn't copy those 3 files. Somehow, they had been recreated and were non-replaceable. I do not want to format the Win7 partition for a fresh install. I have used Boot-Repair; its Restore MBR option brings back "Disk error..." without even going through GRUB, so I reinstalled GRUB and I'm able to boot into Ubuntu. The GRUB menu shows the Win7 option as "Windows 7 (loader) (on /dev/sda1)".

        paste.ubuntu.com/5753710
        paste.ubuntu.com/5775999

    Read the article

  • What kind of hosting do I need? [closed]

    - by Robert Smith
    I have been trying to answer this question, but I haven't found a specific answer for my situation. As I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find as plugins, e.g. WP-Forum or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images, and I certainly don't need a huge amount of bandwidth (200 GB would definitely be too much). I need shell access, so shared hosting is out of the question.

    What do I need to run a website like that without problems? What about RAM? Would 256 MB be enough (that's the amount of RAM offered by small instances on Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. Thanks a lot.
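    It helps to turn the traffic figures into requests per second before shopping for hardware; a quick sketch using the question's own numbers (the 20x peak multiplier is an assumption):

        # Turn the monthly figures into a request rate.
        impressions = 30000                      # upper end, per month
        seconds_per_month = 30 * 24 * 3600
        avg_rps = impressions / float(seconds_per_month)
        peak_rps = avg_rps * 20                  # assumed peak factor

        print("average %.4f req/s, assumed peak %.2f req/s" % (avg_rps, peak_rps))
        # At ~0.01 req/s average, a single nginx + gunicorn worker is
        # ample; on a 256 MB instance, RAM for the worker and the
        # database is the real limit, not CPU.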

    Read the article

  • Connecting SVN from Remote Server

    - by Ashish
    I have hosted my repository on Assembla and it works fine. Now I want to write a script that can automate the build process:

        1. Take the code from the Assembla repository.
        2. Make a dump and copy it onto my web server.

    What I have researched on the net suggests using commands like:

        svn co svn+ssh://[email protected]/home/svn/test

    I believe I need to open a shell on my server and type these commands, but shell access has been disabled by my server admin. I tried to run the same from PHP using exec; the admin has disabled that too. (I am using shared hosting and want to do an automated deployment using these simple steps; I don't want to bring my local system into this process.) Now I am not sure that, even if I get shell access to my server, commands like svn will work there, as I don't have SVN installed on my server (it's installed on Assembla). Kindly let me know if any more explanation is required, or whether I am on the wrong track. I'm a newbie, so please be descriptive in answering :) Thanks in advance, Ace
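    For reference, if shell (or cron) access is ever granted, the whole deployment is two steps, and it needs an svn client installed on the web server itself; the client is not something Assembla provides remotely. A sketch under those assumptions (the URL and paths are hypothetical):

        import os
        import subprocess
        import time

        # Hypothetical repository URL and paths; requires an svn client
        # installed on the machine running this script.
        repo = "https://subversion.assembla.com/svn/yourrepo/trunk"
        release = "/var/www/releases/%d" % int(time.time())

        # 'svn export' produces a clean tree with no .svn metadata.
        subprocess.check_call(["svn", "export", repo, release])

        # Point the web root at the new release atomically via a symlink.
        current = "/var/www/current"
        if os.path.islink(current):
            os.remove(current)
        os.symlink(release, current)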

    Read the article

  • What diagnostics are safe to run on an SSD drive?

    - by Peter Mounce
    I have a MacBook Pro (late 2010) with a Crucial RealSSD 256 GB in it; 60 GB is given to the Windows 7 x64 Boot Camp partition. I have a USB-attached 500 GB drive for (most) data. In the last day or so, I've had a BSOD and several OS freezes (both Mac OS X 10.6.6 and Win7). In both cases the system will boot fine (at the moment!) and then run things fine; then some time later a program will stop responding, followed shortly thereafter by the system as a whole, forcing a reboot. This smacks to me of a storage problem.

    Given that I have an SSD and not a regular magnetic HDD, what are my next steps, in both OSes? I haven't seen anything pertinent in Windows' event log. I'm not sure of the equivalent place to look in OS X; it's never given me reason to find out. What are my options for attempting to save my data from the SSD to another drive, given that after some small amount of time (e.g. half an hour), the OS stops responding? What are the recommended next steps?

    Read the article

  • Windows 7 SSH file server

    - by Siriss
    Hello all. I have looked at the other posts but have not quite found an answer. I have a question about Windows file sharing over SSH. I have copSSH installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this command:

        ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would like to configure Windows 7 to give the SSH account I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copSSH uses to run the service and that I log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!

    Read the article

  • Sync, share and backup policy using NAS

    - by Cue
    I'm trying to come up with a way to keep everything in sync while sharing and keeping a backup of my music, photos and movies. Currently I have an iMac in Greece and a MBP with me in the UK. As a result I've ended up with two iPhoto and two iTunes libraries, not to mention documents scattered here and there, user settings, etc. I also like to have a backup in case of a drive failure or the need to clean install. It seems that iPhoto and iTunes don't work really well with networked libraries.

    The way I think about it is to have a NAS where I keep my iTunes and iPhoto libraries, but also rsync daily to my MBP to have a local copy. That way my files are shared across the network and also act as a backup. In addition I get to have my files wherever I take my MBP, and also the ability to clean install. The tricky part is keeping the iMac, which is miles away, in sync. Again I'm considering a mirror setup (a NAS that rsyncs to the iMac) as well as an rsync between the two NAS boxes. It pretty much resembles the way Dropbox works, minus the requirement to go through their servers, but I'm no "superuser" and don't really know if such a setup is even feasible. It looks like there are so many things that can go wrong.

    Read the article

  • When to upgrade a server to include more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single, dual, quad, etc. processors, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' requests for math crunching.

    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or can only one client's request be processed at any one time, with that request divided across up to 3 cores depending on the type of process doing the math crunching and whether or not it can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should I keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated with an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up:

        2 tables
        lots of INSERTs
        no UPDATEs
        some DELETEs, once a day at most
        some user-generated SELECTs with JOINs
        huge data set

    What would an optimal server configuration (software and hardware; I assume for example that RAID 10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to complete in a reasonably short time. I can provide more information about the current setup (like tables, indexes...) if needed.

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with Django on a micro EC2 instance with 613 MB memory and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat", as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

        import djcelery
        djcelery.setup_loader()
        BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
        CELERY_RESULT_BACKEND = 'database'
        BROKER_POOL_LIMIT = 2
        CELERYD_CONCURRENCY = 1
        CELERY_DISABLE_RATE_LIMITS = True
        CELERYD_MAX_TASKS_PER_CHILD = 20
        CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
        CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

        PID   USER   NI CPU% VIRT  SHR  RES MEM% Command
        1065  wuser  10  0.0 283M 4548  85m 14.3 python manage_prod.py celeryd --beat
        1025  wuser  10  1.0 577M 6368  67m 11.2 python manage_prod.py celeryd --beat
        1071  wuser  10  0.0 578M 2384  62m 10.6 python manage_prod.py celeryd --beat

    That's about 214 MB of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;)

    Update: here's my upstart config:

        description "Celery Daemon"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        nice 10
        respawn
        respawn limit 5 10
        chdir /home/wuser/wuser/
        env CELERYD_OPTS=--concurrency=1
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup:

        root   34580  1556 sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
        wuser   577M 67548 +- python manage_prod.py celeryd --beat --concurrency=1
        wuser   578M 63784 +- python manage_prod.py celeryd --beat --concurrency=1
        wuser   271M 76260 +- python manage_prod.py celeryd --beat --concurrency=1

    Read the article

  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted; not by a sudoer, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17a-4, which basically require that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

    Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.? I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again.

    I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point, and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!
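    The closest stock-Linux primitive is the ext2/3/4 immutable attribute, which blocks all modification until root clears the flag again; that residual root capability is exactly the gap the question describes, so hardening (e.g. dropping CAP_LINUX_IMMUTABLE after boot) has to come from elsewhere. A minimal sketch of a write-then-seal step, with a hypothetical path:

        import os
        import subprocess

        # Write a record, flush it to disk, then set the ext2/3/4
        # immutable flag. After chattr +i the file cannot be modified,
        # renamed, or deleted, but root can still run 'chattr -i'.
        path = "/archive/records/entry-0001.log"   # hypothetical WORM volume
        with open(path, "w") as f:
            f.write("record body\n")
            f.flush()
            os.fsync(f.fileno())

        subprocess.check_call(["chattr", "+i", path])
        subprocess.check_call(["lsattr", path])   # should now show the 'i' flag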

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine to him- or herself, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Correct me please if I'm wrong, but I think the idea is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain. The local user (usually just one local administrator per machine) is the one who takes care of the configuration of the machine. But in my case the login into the domain is just for being able to access directories/servers in the domain (I do not really know the details; all I know is that logging into the domain user account is necessary). So overall I've got a local admin account and a domain account used on my machine.

    While working I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update/install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in my control panel and putting it into the Administrators group. The only thing I wanted to know about this: is there something REALLY bad about doing this? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user?

    PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)

    Read the article

  • How do I play nicely with MS SQL/SQL Server 2008?

    - by acidzombie24
    Big problem. I have nearly given up. I am trying to port my prototype to use MS SQL so it will work on a server once I get it (the server will be SQL Server 2008, shared; I don't know any more info). So I tried to connect to SQL Server via the Visual Studio IDE and had no luck. I enabled TCP and named pipes and restarted the service (and the computer), still with no luck.

    Then I remembered about .mdf files, so I made one. After the obstacle of not being able to figure out the required connection string (it turns out Visual Studio has it in its properties), I successfully connected with that. Then I had a problem with nested transactions. After not being able to figure out how to check for them, I wondered if I could configure the database to allow them somehow. I always thought all MS SQL editions were the same except for limitations, but SQL Server seems to support nested transactions, so there's no point trying to work around the problem with .mdf files, since I won't need them; I really just used them to port the base of my SQL code and to check whether my syntax was correct.

    I tried installing SQL Server Management Studio, since people mentioned it several times (as a solution, or at least a help). When installing it on Windows 7 it said it may not be compatible. After running it, it launched "SQL Server Installation Center (64-bit)", which doesn't seem to be the same thing, as I don't see a way to modify any of my server (networking) configurations or edit user permissions, etc. I am clueless what to do next. Does anyone have any ideas? I'm posting here because I think my problem is more about configuration and SQL Server than programming.

    Read the article

  • How to install Red Hat Enterprise Linux on Apple MacBook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified that the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text all-generic-ide" on the boot line; I also removed the ide parameter and used just "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading "ata..." something (it disappears too fast for me to read). Then the machine freezes. I pressed the Alt function keys to see if I could look at the system log; here is what it says:

        Alt-F3: trying to mount CD device hda
        Alt-F4: status error: hda: lastFailedSense
                hda: Failed opcode was: unknown
                hda: Lost interrupt
                hda: Drive not ready for command
                ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive; is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp Installation and Storage) and didn't get an answer.

    Read the article

  • Azure Virtual Machines - what fault tolerance do they provide?

    - by Borek
    We are thinking about moving our virtual machines (Hyper-V VHDs) to Windows Azure, but I haven't found much about what kind of fault tolerance that infrastructure provides. When I run a VHD in Azure, I've got two questions:

        1. Are my VHD and all the data in it safe? I think that uploaded VHDs use the "Storage" infrastructure, so they should be automatically replicated to multiple disks and geographically distributed, but should I still make a full-image backup just to be safe? (Note that of course I will be backing up the actual data inside the VMs that I care about; I just want to know if there is a chance greater than 0.0000001% that one day I will receive an email from Microsoft telling me that my VM is gone and that I should create or restore it from scratch.)

        2. Do I need to worry about other things regarding the availability of my VMs? I mean, with an on-premise server I need to worry about the hardware itself, the host operating system, what would happen if my router failed, if my Hyper-V's C: drive failed, etc. Am I right in thinking that with Azure, their infrastructure takes care of all of this?

    Thanks.

    Read the article

  • Mapped network drive connection timeout

    - by Terix
    I have server "Alpha" and server "Beta". Server Beta has a shared folder, that is mapped on server Alpha as "X:" On server Alpha there is a .vbs script that runs and take some files on local drive and copy them on X: drive. My issue is if no user log on server Alpha for a long time, it seems like the tcp connection underneath the mapped drive has a timeout, and the vbs script fails on the copy of file. As soon I log with remote desktop on Alpha server, the .vbs is successfull on the copy of the files. I have made many tests, using file logs to check what was happening and I found no way to refresh the connection and let the .vbs be able to copy files unattended.. I have always to log with remote desktop on Alpha server to refresh the connection and let the .vbs copy the files without issues. What can I do to avoid to log every time? The .vbs script runs 3 times a day and is very annoying. I do not have control over server Beta so I cannot change anything there, and I am very limited on changes I can do on server Alpha ( I cannot change registry and that sort of things)

    Read the article
