Search Results

Search found 10556 results on 423 pages for 'practical approach'.


  • SCVMM upgrade scenario

    - by pigeon
    I've read some information on TechNet about upgrading SCVMM 2008 to 2012, but can't quite figure out the best way to approach this. The current setup is that we've got SCVMM 2008 R2 installed, but against best practice it was installed on the Hyper-V host machine itself: since it's a small-scale deployment, it's a single-server setup with SCVMM on the host rather than in a VM. From what I've read, an in-place upgrade should be possible, which will incur a restart, but we don't have the luxury of another server to shift the VMs onto in the meantime, nor do we want to risk anything happening to the Hyper-V role. Ideally I would prefer to get SCVMM 2012 into a VM of its own and remove the 2008 version from the host machine. Has anyone done an upgrade like this, or have any recommendations about how to approach it?

    Read the article

  • Limit vsftpd upload to a given set of file-names

    - by Chen Levy
    I need to configure an anonymous FTP server with upload enabled. Given this requirement, I am trying to lock the server down to the bare minimum. One of the restrictions I wish to impose is to allow uploads only for a given set of file names. I tried removing write permission from the upload folder and placing in it some empty files that are themselves writable:

        /var/ftp/            [root.root] [drwxr-xr-x]
        |-- upload/          [root.root] [drwxr-xr-x]
        |   |-- upfile1      [ftp.ftp]   [--w-------]
        |   `-- upfile2      [ftp.ftp]   [--w-------]
        `-- download/        [root.root] [drwxr-xr-x]
            `-- ...

    But this approach didn't work: when I tried to upload upfile1, the client tried to delete it and create a new file in its place, and there are no permissions for that. Is there a way to make this work, or perhaps a different approach, like abusing the deny_file option?
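
    For reference, this is roughly what the deny_file option looks like in vsftpd.conf. Note that deny_file is a blacklist of name patterns and, as far as I know, vsftpd offers no allow-list counterpart, so "permit only these names" cannot be expressed with it directly; a sketch:

        # /etc/vsftpd.conf -- a sketch, not a drop-in config
        anonymous_enable=YES
        anon_upload_enable=YES
        write_enable=YES
        # deny_file rejects matching names; there is no allow-list
        # counterpart, so it can only blacklist known-bad patterns:
        deny_file={*.exe,*.php,.*}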

    Read the article

  • Easily recreate a server's "state" [closed]

    - by Brandon Wamboldt
    I want the ability to set up new servers for dev/testing/prod very easily. The reason for being able to set up a new dev VM is obvious, but for prod my concern is adding a new production server or migrating to a new one. I assume a traditional backup solution won't work, as the hardware may differ, so the binaries/config might differ too. I want to get experience with Puppet anyway, so I was thinking of creating a manifest that would set up my users, install Postgres, Nginx, PHP-FPM, etc., and configure them the way I specify. Then I could install Puppet on a new server, copy down my manifest and apply it locally. This would also make keeping my server configs in sync easier. Is there a better approach I'm not aware of, and does my approach have any pitfalls?
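
    As a rough illustration of the kind of manifest described (the package, service and user names here are hypothetical, assuming a Debian/Ubuntu host):

        # site.pp -- a minimal standalone-manifest sketch,
        # applied locally with: puppet apply site.pp
        package { ['nginx', 'postgresql', 'php5-fpm']:
          ensure => installed,
        }
        service { 'nginx':
          ensure  => running,
          enable  => true,
          require => Package['nginx'],
        }
        user { 'deploy':
          ensure     => present,
          managehome => true,
        }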

    Read the article

  • How stable is zfs-fuse 0.6.9 on Linux?

    - by Mavrik
    I'm thinking of using ZFS for my home-made NAS array. I would have 4 HDDs in raidz on an Ubuntu Server 10.04 machine. I'd like to use the snapshot capability and dedup when storing data. I'm not so concerned about speed, since the machine is accessed over an 802.11n wireless network and that is probably going to be the bottleneck. So does anyone have any practical experience with zfs-fuse 0.6.9 on such (or a similar) configuration?
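
    For context, the setup being described would be created roughly like this (device names are placeholders, and whether dedup is usable depends on the pool version the zfs-fuse build supports):

        # 4-disk raidz pool with dedup and a snapshot
        zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
        zfs set dedup=on tank
        zfs snapshot tank@nightly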

    Read the article

  • check if a domain is blacklisted / blocked

    - by Henry
    Some clients report to us that our site is not accessible through their internet connection. We suspect our site is wrongfully blocked by some security software, firewall or public blacklist. How can we verify that, other than trying the products one by one? There is so much security software out there that this is not practical... Thanks
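
    One partial check that avoids trying products one by one is to query the large public DNS blacklists directly; the two zones below are well-known examples, and an empty answer means "not listed" on that particular list:

        DOMAIN=example.com    # substitute your domain
        for bl in dbl.spamhaus.org multi.surbl.org; do
          listed=$(dig +short "$DOMAIN.$bl" A)
          if [ -n "$listed" ]; then echo "$bl: LISTED ($listed)"
          else echo "$bl: not listed"; fi
        done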

    Read the article

  • --prefix to /usr/local or /opt?

    - by Paul Alexander
    For building apps from source, like git or rails, I've seen recommendations to install in either /opt or /usr/local. From what I've read so far, the designated use for both is about the same, and it amounts to merely a style issue. Is there any practical difference? Best practices?

    Read the article

  • How to add wildcards to Linux Malware Detect ignore_paths

    - by Laurence Cope
    I am using Linux Malware Detect to scan and report on malware, but on a daily basis I receive alerts for malware in users' emails (mainly the spam folder). I do not want alerts for this; the spam folders are cleaned often, and the users may clean them too. I tried adding wildcards to /usr/local/maldetect/ignore_paths as follows, but they are not ignored:

        /home/*/homes/*/Maildir
        /home/?/homes/?/Maildir

    Does anyone know how to exclude folders using wildcards? It would not be practical to add the full path of every user's mail directory. Thanks
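
    One possible workaround, assuming the installed maldet version reads ignore_paths entries as literal paths rather than globs, is to expand the wildcards yourself and regenerate the file from cron so new users get picked up:

        # the shell expands the globs; -maxdepth 0 keeps only the
        # expanded directory paths themselves
        find /home/*/homes/*/Maildir -maxdepth 0 -type d \
          > /usr/local/maldetect/ignore_paths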

    Read the article

  • What is the difference between bash and sh

    - by Saif Bechan
    In shell scripts I see two kinds of shebang line: #!/usr/bin/sh and #!/usr/bin/bash. I have Googled this and the opinions vary a lot. The explanation on most websites is that sh is older than bash, and that there is no real difference. Does someone know the difference between these, and can you give a practical example of when to use either one of them? I highly doubt that there is no real difference, because having two things that do the exact same thing would be pointless.
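
    A small example of the practical difference: bash accepts extensions that a strict POSIX sh rejects, so the same script can work or fail depending on the shebang:

        #!/bin/sh
        # [[ ]] is a bashism: fine under #!/bin/bash, but it fails
        # under a strict POSIX sh such as dash
        if [[ "$1" == h* ]]; then
          echo "argument starts with h"
        fi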

    Read the article

  • How to download Firefox extensions from addons.mozilla.org without installing them?

    - by kjo
    Pages at the https://addons.mozilla.org/en-US/firefox site often feature buttons that say "Add to Firefox". Clicking such a button causes a Firefox extension to be downloaded and installed. I am looking for a convenient way to limit this action to the download step only, so that in the end I am left with the downloaded *.xpi file on my disk. Thanks! P.S. The following approach is not only inconvenient, it doesn't work: inspect the HTML for the button, extract a URL like https://addons.mozilla.org/firefox/downloads/latest/1234/addon-1234-latest.xpi?src=search (give or take the stuff after .xpi), and download that URL at the command line with wget or curl. The download attempt just hangs. (Even if it didn't, I'd like to find a less cumbersome approach.)
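
    For what it's worth, one thing to try with the command-line route (the add-on id 1234 is the placeholder from the question): such hangs are sometimes down to redirects or the server disliking the default client User-Agent, so this may behave differently:

        # -L follows redirects; -A supplies a browser-like User-Agent
        curl -L -A "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox" \
          -o addon.xpi \
          "https://addons.mozilla.org/firefox/downloads/latest/1234/addon-1234-latest.xpi"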

    Read the article

  • Hyper-V cluster VS regular cluster

    - by Sasha
    We need to choose between Hyper-V and regular cluster technologies. What are the advantages and disadvantages of these approaches? Update: We have two physical servers and want to build a reliable solution using a cluster approach. We need to cluster our application and DB (MS SQL). We know that we can use:

        1. Regular Windows Cluster Service: the application and DB migrate from one node to the other.
        2. A Hyper-V failover cluster: the virtual machine migrates from one node to the other.
        3. A combined variant: DB mirroring for MS SQL, and Hyper-V for our application.

    We need to make a choice among these approaches, so we need to know their advantages and disadvantages.

    Read the article

  • Upgrading drives on a MD3000

    - by Anonymouse
    Hello, our MD3000 array is getting full as our databases grow, and we need more space. Currently we use an MD3000 with a two-server Windows 2003 cluster and 15x 73GB SAS drives. Disk groups are configured as RAID1 pairs of two drives. The approach we are currently investigating is simply swapping the existing SAS drives for bigger ones (300GB instead of 73GB), one at a time, letting each RAID1 array rebuild in between. Is this a good approach? Will we be able to resize the array afterwards? Will we be able to resize the partitions afterwards? Can the Dell MD3000 management software do it, or will we have to bring the server offline and use partitioning software? Thanks in advance.

    Read the article

  • use i3 tiling window manager in RHEL 5

    - by Peter Hamilton
    For some time I have been using the i3 tiling window manager on Ubuntu. However, at my new company we use RHEL 5. I would dearly love to port over all my configs, but I'm having some trouble... An initial (naive) attempt shows that a simple yum install i3 yields no results. I then added some additional RPM repositories by following the instructions for the EPEL repositories, but it seems i3 is only packaged for RHEL 6 and later. Damn. I'm fairly sure this must be possible, but I'm pretty new to the Red Hat scene and am not sure how to approach the problem. Any pointers would be gratefully received!
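
    In the absence of a package, building from source is the usual fallback. A rough outline, where the version, URL and dependency list are illustrative only (RHEL 5's libraries may well be too old for recent i3 releases, hence the early tag):

        yum groupinstall "Development Tools"
        # i3 also needs xcb, libev, yajl and pango headers, some of
        # which may themselves need building from source on RHEL 5
        wget https://i3wm.org/downloads/i3-4.1.tar.bz2
        tar xjf i3-4.1.tar.bz2 && cd i3-4.1
        make && make install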

    Read the article

  • iSCSI: LUNs per target?

    - by badnews
    My question relates specifically to ZFS/COMSTAR, but I assume it applies generally to any iSCSI system: should one prefer to create a target for every LUN to be exposed, or is it good practice to have a single target with multiple LUNs? Does either approach have a performance impact, and is there some crossover point where the other approach makes sense? The use case is VM disks, where each disk (zvol) is a LUN. So far we have created a separate target for each VM, but a single target containing all the LUNs would probably greatly simplify management... though we may need hundreds of LUNs per target (and then possibly tens of initiator connections to that target).
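
    A sketch of the single-target layout in COMSTAR terms (pool and zvol names are hypothetical, and the GUID passed to stmfadm stands in for the one printed by sbdadm):

        itadm create-target                        # one target for everything
        sbdadm create-lu /dev/zvol/rdsk/tank/vm1   # one LU per VM disk
        sbdadm create-lu /dev/zvol/rdsk/tank/vm2
        stmfadm add-view 600144f0...               # repeat for each LU GUID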

    Read the article

  • Ebook stamper for ePub and/or Kindle formats?

    - by Nick Martin
    I've published an ebook in Adobe Acrobat PDF format. I sell this ebook DRM-free and take what I consider a friendlier, less obtrusive approach: using a service to "stamp" the customer's name and email address onto each page of the ebook as a way to discourage piracy. I would like to take the same approach for selling the ebook in ePub and/or Kindle formats. Unfortunately, I haven't been able to find any stamping services for ePub or Kindle. Is DRM my only anti-piracy option when using ePub and Kindle? For a reference point, ebookstamper.com stamps ebooks in PDF format. No, they don't do anything other than PDF.

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything lives in a single flat directory, with a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would add highly visible buttons at the top that can be clicked for the main categories I choose for the forum (users could still add tags to their own threads beyond these defaults if they wish). This approach, done properly, is more powerful, efficient, maintenance-free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g. phpBB, shudder) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • Self-healing Cloud vs Failover Boxes

    - by IMB
    Now that self-healing cloud servers are becoming more and more popular, I am torn between setting up an HAProxy failover for my VPS or saving myself the trouble and just putting my sites on a self-healing cloud server. Does it still make sense to set up your own failover system (HAProxy plus two or more servers, for example) when a self-healing cloud seems like a practical solution? They seem to do the same job, or am I missing something?
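
    For comparison, the do-it-yourself side of the trade-off is fairly small; a minimal haproxy.cfg with a hot-spare backend looks roughly like this (addresses are placeholders):

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s
        frontend www
            bind *:80
            default_backend sites
        backend sites
            server web1 10.0.0.1:80 check
            server web2 10.0.0.2:80 check backup   # used only if web1 fails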

    Read the article

  • Seriousness of a SMART disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk, and the BIOS and Windows are reporting a SMART error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk that I could use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which can be regenerated, though that would cost a lot of time. So I ordered a 1 TB USB disk which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will it live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week, or could it die at any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since the old one was still under warranty, and replaced it. Then I had to spend most of a day again putting the backup back. I need to send back the faulty disk, and I now have an additional external disk, which is always practical. :-) The SMART error did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have on one disk.)
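
    For anyone in the same position: the generic "SMART event" can be unpacked with smartmontools before deciding how urgent it is (the device name is an example):

        smartctl -H /dev/sda   # overall health verdict
        smartctl -A /dev/sda   # attribute table: rising Reallocated_Sector_Ct
                               # or Current_Pending_Sector counts are bad news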

    Read the article

  • How do I slow down a video generating new interpolated frames?

    - by Grzegorz Adam Hankiewicz
    I've used mencoder's speed parameter to generate a video which plays at half speed. This basically means halving the frame rate of the video. But I'm interested in software that could convert a 30fps video to another 30fps video with half of the frames newly interpolated, perhaps using the motion information already stored in the video stream. I believe this is called motion-compensated (inter-frame) interpolation, but I haven't found anything practical, only research papers. Any pointers to such software?
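
    One practical option that has since appeared is ffmpeg's minterpolate filter, which does motion-compensated frame interpolation; a sketch of halving the speed while keeping 30 fps:

        # setpts doubles every timestamp (half speed); minterpolate then
        # synthesises the missing frames via motion estimation (mci mode)
        ffmpeg -i in.mp4 -vf "setpts=2.0*PTS,minterpolate=fps=30:mi_mode=mci" out.mp4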

    Read the article

  • Is there a way to bundle PDF files into a Kindle-friendly file?

    - by Maciej Swic
    I'm downloading PDF approach plates from Navigraph, and I have a folder per airport, with files named after their corresponding approach, departure, etc. Now I'd like to take such a folder with a bunch of PDF files, automatically generate an index, and combine them into a single .mobi file that I can send to my Kindle. The index can be very simple and consist of the file names (without the extension); tapping an index item should jump to the correct page for that chart. I know there is a host of apps that combine comic-book JPEGs into ebooks, but is there anything that does the above?
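
    Nothing purpose-built comes to mind, but a rough pipeline can be assembled from pdftk and calibre's ebook-convert (the paths and airport folder name are hypothetical, and the generated table of contents is only approximate):

        pdftk ~/charts/KJFK/*.pdf cat output KJFK.pdf    # merge in name order
        ebook-convert KJFK.pdf KJFK.mobi --title "KJFK charts"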

    Read the article

  • Windows Vista - behavior with RAM minimum

    - by benc
    I am considering creating a Vista install in VirtualBox, in a minimum configuration. The Microsoft pages seem consistent on the most important points:

        - 1 GB of system memory
        - 40 GB hard drive with at least 15 GB of available space
        - 128 MB of graphics memory (minimum)

    In my experience these recommendations are sometimes accurate. (For example, I have a Windows XP install in VirtualBox that generally behaves well running at the stated minimum.) If anyone has some practical experience (especially the kind that could save me hours of pain), please let me know.
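
    The stated minimum translates directly into VBoxManage terms; a sketch (the VM name is arbitrary; sizes are in MB):

        VBoxManage createvm --name vista-min --ostype WindowsVista --register
        VBoxManage modifyvm vista-min --memory 1024 --vram 128
        VBoxManage createhd --filename vista-min.vdi --size 40960   # 40 GB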

    Read the article

  • Creating a DJVU file from scanned BMPs

    - by Freddie Witherden
    Recently I have been scanning some of my handwritten notes. Each page ends up as a .bmp file (300 DPI, A4). I wish to combine and compress these into a DJVU file for easy reading. Does anyone know of any programs/utilities for OS X/Linux that can do this? If so, what settings can be considered optimal (lined paper, some coloured ink used)? And are there any practical means of tagging pages/regions (to create an outline)?
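
    The djvulibre toolchain can do the combining; a rough sketch using the c44 wavelet encoder (which keeps the coloured ink) with ImageMagick handling the BMP-to-PPM step, at the 300 DPI of the scans:

        for f in page*.bmp; do
          convert "$f" "${f%.bmp}.ppm"                    # ImageMagick
          c44 -dpi 300 "${f%.bmp}.ppm" "${f%.bmp}.djvu"   # encode one page
        done
        djvm -c notes.djvu page*.djvu                     # bundle into one file

    For the outline, djvused's set-outline command can attach a bookmark tree to the bundled file afterwards.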

    Read the article

  • How to use 2 or more internet connections on the same network?

    - by Rogue
    Living in a joint family, we have 3 internet connections: each floor has one internet connection, which is shared across that floor between 4-5 computers using a switch. These internet-sharing networks are independent of each other. What I want to achieve is a local network (for local messenger and file sharing) that combines all 3 independent networks; the problem is that whenever I try to do that, the whole network tends to use just one internet connection. I have all the necessary hardware. How do I solve this problem? One approach could be to have one PC act as a server and bridge the internet connections, with the whole network accessing the internet through this server. Theoretically this should be possible, but I have never tried it in real life. Also, if certain computers need to be restricted from internet access, how would that be possible on the same network?

    Read the article

  • Optimal directory structure for keeping millions of files on an ext4 system

    - by Alex Flo
    I need to keep millions of files on an ext4 system. I understand that a structure with multiple subdirectories is the generally accepted solution. I wonder what the optimal approach is in terms of the number of dirs/subdirs. For example, I tried a structure like 16/16/16/16 (that is, (sub)directories from 1 to 16) and found that I was able to move 100K files into this structure in 2m50s. When trying to move 100K files into an 8/8/8/8/8/8 structure, it took 11 minutes. So the 16/16/16/16 approach seems better, but I was wondering if anyone has empirical experience with an even better dir/subdir distribution.
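
    For reference, such fan-outs are usually driven by a hash of the file name; a bash sketch of placing a file into a 16/16/16/16 tree using the first four hex digits of its md5:

        f="somefile.dat"
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)    # e.g. "9f3a"
        d="${h:0:1}/${h:1:1}/${h:2:1}/${h:3:1}"       # e.g. "9/f/3/a"
        mkdir -p "$d" && mv "$f" "$d/"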

    Read the article
