Search Results

Search found 9254 results on 371 pages for 'approach'.

Page 41/371 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • SCVMM upgrade scenario

    - by pigeon
    I've read some information on TechNet about upgrading SCVMM 2008 to 2012, but I can't quite figure out the best way to approach it. The current setup is SCVMM 2008 R2 installed, against best practice, directly on the Hyper-V host machine; since it's a small-scale deployment, it's a single-server setup with SCVMM living on the host rather than in a VM. From what I've read, an in-place upgrade should be possible, although it will incur a restart, but I don't have the luxury of another server to shift the VMs onto while doing this, and I don't want to risk anything happening to the Hyper-V role. Ideally I would prefer to get SCVMM 2012 into a VM of its own and remove the 2008 version from the host machine. Has anyone done this upgrade, or have any recommendations about how to approach it?

    Read the article

  • Limit vsftpd upload to a given set of file-names

    - by Chen Levy
    I need to configure an anonymous FTP server with upload. Given this requirement, I'm trying to lock the server down to the bare minimum. One of the restrictions I wish to impose is to allow the upload of only a given set of file names. I tried disallowing write permission on the upload folder and putting in it some empty files with write permission:

        /var/ftp/            [root.root] [drwxr-xr-x]
        |-- upload/          [root.root] [drwxr-xr-x]
        |   |-- upfile1      [ftp.ftp]   [--w-------]
        |   `-- upfile2      [ftp.ftp]   [--w-------]
        `-- download/        [root.root] [drwxr-xr-x]
            `-- ...

    But this approach didn't work: when I tried to upload upfile1, it tried to delete the file and create a new one in its place, and there are no permissions for that. Is there a way to make this work, or perhaps a different approach, such as abusing the deny_file option?

    Read the article

  • Easily recreate a server's "state" [closed]

    - by Brandon Wamboldt
    I want the ability to set up new servers for dev/testing/prod very easily. The reasons for wanting to set up a new dev VM are obvious, but for prod my concern is adding a new production server or migrating to a new one. I assume a traditional backup solution won't work, as the hardware may be different, so the binaries/config might be different too. I want to get experience with Puppet anyway, so I was thinking about creating a manifest that would set up my users, install Postgres, Nginx, PHP-FPM, etc., and configure them the way I specify. Then I could install Puppet on a new server, copy down my manifest and apply it locally. This would make keeping my server configs in sync easier too. Is there a better approach I'm not aware of, and does my approach have any pitfalls?
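
    Roughly, the kind of standalone manifest I have in mind would look like this (a sketch only; the user name, package names and file paths are illustrative and assume RHEL/CentOS-style packages):

        # site.pp - minimal standalone manifest (illustrative names throughout)
        user { 'deploy':
          ensure     => present,
          managehome => true,
          shell      => '/bin/bash',
        }

        package { ['postgresql-server', 'nginx', 'php-fpm']:
          ensure => installed,
        }

        file { '/etc/nginx/nginx.conf':
          ensure  => file,
          source  => '/root/manifests/files/nginx.conf',  # config shipped alongside the manifest
          require => Package['nginx'],
          notify  => Service['nginx'],
        }

        service { 'nginx':
          ensure  => running,
          enable  => true,
          require => Package['nginx'],
        }

    Applied locally with something like puppet apply site.pp; in practice this would get split into modules per service.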

    Read the article

  • How to download Firefox extensions from addons.mozilla.org without installing them?

    - by kjo
    Pages at the https://addons.mozilla.org/en-US/firefox site often feature buttons that say "Add to Firefox". Clicking such a button causes a Firefox extension to be downloaded and installed. I am looking for a convenient way to limit this action to the download step only, so that in the end I am left with the downloaded *.xpi file on my disk. Thanks! P.S. The following approach is not only inconvenient, it doesn't work:
    1. Inspect the HTML for the button and extract a URL like https://addons.mozilla.org/firefox/downloads/latest/1234/addon-1234-latest.xpi?src=search (give or take the stuff after .xpi).
    2. At the command-line prompt, download this URL with wget or curl.
    This download attempt just hangs. (Even if it didn't, I'd like to find a less cumbersome approach.)
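
    For the record, the failing command-line step looks roughly like this; the User-Agent header and redirect-following are only guesses at a workaround (some servers drop non-browser clients), not something I've confirmed:

        # The addon ID/URL is the placeholder from above, not a real extension.
        URL='https://addons.mozilla.org/firefox/downloads/latest/1234/addon-1234-latest.xpi?src=search'

        # wget, pretending to be a browser and honouring the server-suggested filename
        wget --user-agent='Mozilla/5.0' --content-disposition "$URL"

        # or curl, following redirects and keeping the remote filename
        curl -L -A 'Mozilla/5.0' -O -J "$URL"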

    Read the article

  • Hyper-V cluster VS regular cluster

    - by Sasha
    We need to choose between Hyper-V and regular cluster technologies. What are the advantages and disadvantages of these approaches? Update: we have two physical servers and want to build a reliable solution using a cluster approach. We need to cluster our application and our DB (MS SQL). We know that we can use:
    - Regular Windows Cluster Service: the application and DB fail over from one node to the other.
    - Hyper-V Failover Cluster: the virtual machine migrates from one node to the other.
    - A combined variant: DB mirroring for MS SQL and Hyper-V for our application.
    We need to make a choice between these approaches, so we need to know the advantages and disadvantages of each.

    Read the article

  • Upgrading drives on a MD3000

    - by Anonymouse
    Hello, our MD3000 array is getting full as our databases grow, and we need more space. Currently we use an MD3000 with a two-server Windows 2003 cluster and 15x 73GB SAS drives. Disk groups are configured as RAID1 pairs of two drives. The approach we are currently investigating is simply swapping the existing SAS drives for bigger ones (300GB instead of 73GB), one at a time, and letting each RAID1 array rebuild. Is this a good approach? Will we be able to resize the array afterwards? Will we be able to resize the partitions afterwards? Can the Dell MD3000 management software do it, or will we have to bring the server offline and use some partitioning software? Thanks in advance.

    Read the article

  • use i3 tiling window manager in RHEL 5

    - by Peter Hamilton
    For some time I have been using the i3 tiling window manager on Ubuntu. However, at my new company we use RHEL 5. I would dearly love to port over all my configs, but I'm having some trouble. An initial (naive) approach of a simple yum install i3 yields no results for i3. I then added some additional RPM repos by following the instructions for the EPEL repositories, but it seems i3 is only packaged for RHEL >= 6. Damn. I'm fairly sure this must be possible, but I'm pretty new to the Red Hat scene and am not sure how to approach the problem. Any pointers would be gratefully received!
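
    The fallback I'm considering is building i3 from source, roughly as below; this is only a sketch, and both the version number and the availability of the build dependencies (libxcb, xcb-util, libev, yajl, ...) on RHEL 5 are assumptions — several of them may need to be built from source as well:

        sudo yum groupinstall "Development Tools"

        # Version is illustrative; older i3 releases build with plain make.
        wget http://i3wm.org/downloads/i3-4.2.tar.bz2
        tar xjf i3-4.2.tar.bz2
        cd i3-4.2
        make
        sudo make install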

    Read the article

  • iSCSI: LUNs per target?

    - by badnews
    My question relates specifically to ZFS/COMSTAR, but I assume it is generally applicable to any iSCSI system: should one prefer to create a target for every LUN that you want to expose, or is it good practice to have a single target with multiple LUNs? Does either approach have a performance impact? And is there some crossover point where the other approach makes sense? The use case is VM disks, where each disk (zvol) is a LUN. So far we have created a separate target for each VM, but a single target that contains all the LUNs would probably greatly simplify management... though we may need hundreds of LUNs per target (and then possibly tens of initiator connections to that target).
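
    For what it's worth, the single-target variant I'm picturing would look roughly like this (a sketch with placeholder pool/zvol names, IQN and GUIDs, assuming the usual itadm/stmfadm/sbdadm COMSTAR workflow and that the target is offline when the target group is populated):

        itadm create-target                          # prints the new target IQN
        stmfadm create-tg vm-tg
        stmfadm add-tg-member -g vm-tg iqn.2010-08.org.example:target0   # IQN from the step above

        # One logical unit per VM disk, all exposed through the same target group:
        sbdadm create-lu /dev/zvol/rdsk/tank/vm1-disk0                   # prints a GUID
        stmfadm add-view -t vm-tg -n 0 600144f0aaaaaaaaaaaaaaaaaaaaaaaa  # GUID from the step above
        sbdadm create-lu /dev/zvol/rdsk/tank/vm2-disk0
        stmfadm add-view -t vm-tg -n 1 600144f0bbbbbbbbbbbbbbbbbbbbbbbb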

    Read the article

  • Ebook stamper for ePub and/or Kindle formats?

    - by Nick Martin
    I've published an ebook in Adobe Acrobat PDF format. I sell this ebook DRM free and take what I consider a friendlier/less obtrusive approach of using a service to "stamp" the customer's name and email address onto each page of the ebook as a way to discourage piracy. I would like to take this same approach for selling the ebook in ePub and/or Kindle formats. Unfortunately, I haven't been able to find any stamping services for ePub or Kindle. Is DRM my only anti-piracy option when using ePub and Kindle? For a reference point, ebookstamper.com stamps ebooks in PDF format. No, they don't do anything other than PDF.

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists in a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads on a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top which can be clicked for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads beyond these default main tags if they wish). This approach, if done properly, is more powerful, efficient, maintenance-free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g. phpBB, shudder) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • Is there a way to bundle PDF files into a Kindle-friendly file?

    - by Maciej Swic
    I'm downloading PDF approach plates from Navigraph, and I have a folder per airport with files named after their corresponding approach/departure etc. Now I'd like to take such a folder with a bunch of PDF files, automatically generate an index and combine them into a single .mobi file that I can send to my Kindle. The index can be very simple and consist of just the file names (without the extension). Tapping an index item should jump to the correct page for that chart. I know there is a host of apps that combine comic-book JPGs into ebooks, but is there anything that does the above, please?

    Read the article

  • How to use 2 or more internet connections on the same network?

    - by Rogue
    Living in a joint family, we have 3 internet connections, one per floor. Each connection is shared between 4-5 computers using a switch on that floor, and each of these internet-sharing networks is independent of the others. What I want to achieve is a single local network (for local messaging and file sharing) that combines all 3 independent networks; the problem is that whenever I try to do that, the whole network tends to use just one internet connection. I have all the necessary hardware. How do I solve this problem? One approach could be to have one PC act as a server and bridge the internet connections, so the whole network would access the internet through that server. Theoretically this could work, but I have never tried this approach in real life. Also, if certain computers need to be restricted from internet access, how would this be possible on the same network?

    Read the article

  • Optimal dir structure for keeping millions of files on an ext4 system

    - by Alex Flo
    I need to keep millions of files on an ext4 system. I understand that a structure with multiple subdirectories is the generally accepted solution, and I wonder what the optimal approach would be in terms of the number of dirs/subdirs. For example, I tried a structure like 16/16/16/16 (that is, (sub)directories named 1 to 16 at each of four levels) and found that I was able to move 100K files into this structure in 2m50s. Moving 100K files into an 8/8/8/8/8/8 structure took 11 minutes. So the 16/16/16/16 approach seems better, but I was wondering if anyone has empirical experience with an even better dir/subdir distribution.
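
    For concreteness, here is one way to map a file name into such a 16/16/16/16 layout, sketched in PHP (the language and the hash-prefix naming scheme are assumptions, not part of the question; any even 4-level fan-out of 16 would do):

        <?php
        // Sketch: derive a 4-level path from the first four hex digits of the
        // file name's MD5 (one hex digit = 16 possible subdirectories per level).
        function shardedPath($baseDir, $fileName) {
            $h = md5($fileName);
            $dir = sprintf('%s/%s/%s/%s/%s',
                rtrim($baseDir, '/'), $h[0], $h[1], $h[2], $h[3]);
            if (!is_dir($dir)) {
                mkdir($dir, 0755, true);   // create the nested directories on demand
            }
            return $dir . '/' . $fileName;
        }

        // Usage: move an incoming file into its shard.
        $target = shardedPath('/var/data/files', 'photo_123456.jpg');
        // rename('/tmp/upload.jpg', $target);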

    Read the article

  • Subdomains, folders, internationalization, and hosting solutions

    - by justinbach
    I'm a web developer and I recently landed a gig to develop the US/international version of a site for a company that's big in Europe but hasn't done much expansion into the US yet. They've got an existing site at company.com, which should remain visible to European customers after the new site goes up, and an existing (not great) site at company.us, which I'm going to be redeveloping (the .us site will be taken down when my version goes up; keep reading for details). My solution needs to take into account the fact that there are going to be new, localized versions of the site in the fairly near future, so the framework I'm writing needs to be able to handle localizations fairly easily (dynamically load language packs, etc.).

    The tricky thing is that the European branch of the company manages the .com site hosting (IIS-based) and the DNS, while I'll be managing the US hosting (and future localizations), which will likely be Apache-based. I've never been a big fan of the ".us" TLD (I think most US users are accustomed to visiting the .com), so the thought is that the European branch will detect the IP of inbound traffic and redirect all US-based addresses to us.company.com (or whatever the appropriate localized subdomain might be), which would point to the IP address of my host. I'd then serve the appropriate locale-specific content by pulling the subdomain from the $_SERVER superglobal (assuming PHP).

    I couldn't find any examples of international organizations that take a subdomain-based approach to localization, but I'm not sure I have any other options as a result of the unique hosting structure here (in that there's not a unified hosting solution for the European and US sites). In my experience, the US version of an international site would live at domain.com/us, not at us.domain.com, and I'd imagine that this has to do with SEO (subdomains are treated as separate sites, so improved rankings for the US site wouldn't help the Canadian version if subdomains are used to differentiate between them).

    My question is: is there a better approach to solving this problem than the one I'm taking? Ideally, I'd like to use a folder-based approach (see adidas.com as an example of what I'm talking about), but I'm not sure that's a possibility given that the US site (and other localizations) will not be hosted on the same server as the rest of the .com. Can you, in IIS, map a folder (e.g. domain.com/us) to a different IP address? What would you recommend? Thanks for your consideration.
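
    The locale lookup I have in mind would be something along these lines (a sketch; the supported-locale list, default and language-pack layout are placeholders):

        <?php
        // Sketch: derive the locale from the subdomain, e.g. us.company.com -> "us".
        $host  = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : 'www.company.com';
        $parts = explode('.', $host);

        $supported = array('us', 'ca', 'mx');     // illustrative locale subdomains
        $locale    = in_array($parts[0], $supported, true) ? $parts[0] : 'us';

        // Dynamically load the matching language pack, falling back to the default.
        $langFile = dirname(__FILE__) . '/lang/' . $locale . '.php';
        $strings  = file_exists($langFile)
            ? require $langFile
            : require dirname(__FILE__) . '/lang/us.php';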

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying, I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My webservers are two single-NIC lighttpd machines and one dual-NIC lighttpd machine. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:
    - All 3 webservers running lighttpd should be balanced. Two acceptable options for the load balancing: best would be that if server1 is busy, server2 takes over; if server2 is busy, server3 takes over, etc. Alternatively, round-robin style with evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
    - All requests should be treated the same way (no URL rewriting or the like).
    - The Host header sent by the client has to be passed on to every backend server as the HTTP Host header, whether the request was addressed to "server1", "server1.company.internal" or "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        # reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        # decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        # ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:
    - The load balancer does not balance to anything other than the first server; only if the first server is killed in some way does the second take over. I have seen the others working at some point, but definitely not with the load balancing intended above.
    - If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend webserver, and this always depends on the defaultsite= parameter, probably because the Host header on the request to Squid is not set and gets replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:
    - Does cache_peer_access influence the round-robin?
    - Is there a better workaround to pass the Host header to the backend webservers?
    - Which parameters would increase the speed of the load balancing, or does anyone have a better approach?

    -Martin

    Read the article

  • How do I create a table of contents in a Word document that has a mind of its own?

    - by Howiecamp
    I'm embarrassed to admit that I'm struggling to get a table of contents going in a Word doc that's already been created. I know enough to understand that the TOC is based on heading styles and indentation. My approach so far has been to auto-generate the TOC and then try (unsuccessfully) to fix the problems; perhaps this isn't the best approach in this situation. What's happening is that the TOC is missing half my sections, and for others it's adding way too much detail. Again, my sense is that I have to "fix" individual section headings, but I haven't been successful so far.

    Read the article

  • Nginx Tornado Combination Causing 502 Bad Gateway Errors

    - by PlaidFan
    We are facing a problem with inconsistent 502 errors, and tracking down the reasons has been a very frustrating exercise. We can reproduce the problem by sending several simultaneous requests quickly. The catch is that "several" is only in the range of 10 to 20 requests within 5 seconds (not a typo), so clearly this type of load should be handled easily. We really like the Nginx + Tornado approach, but we are considering going back to a more traditional (e.g. threading) approach because this problem has been so difficult to solve. I was wondering if you a) know how to fix this issue and b) know how we can track down the culprit(s). The log files simply report that a connection was refused. We have the same problem as this post: How do I debug a HTTP 502 error? But no answer is provided there on how to solve the problem, so I'm hoping you can help, because this may be a common issue with this type of setup. Thanks in advance, Paul
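
    For reference, the Nginx + Tornado setup referred to is roughly of this shape (a sketch, not our actual config; the ports, server name and the number of Tornado backends are illustrative — running several Tornado processes behind one upstream is one common way to stop a single busy process from refusing connections, though it may not be the culprit here):

        # Several Tornado instances behind one upstream.
        upstream tornado_backends {
            server 127.0.0.1:8001;
            server 127.0.0.1:8002;
            server 127.0.0.1:8003;
            server 127.0.0.1:8004;
        }

        server {
            listen      80;
            server_name example.com;    # placeholder

            location / {
                proxy_pass       http://tornado_backends;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }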

    Read the article

  • How do you pick what server setup you need?

    - by ed209
    I recently started receiving a PubSub data feed from Etsy. It averages around 250 notifications per minute, but obviously, when the USA wakes up, that spikes quite heavily. I want to be able to deal with those spikes (about 3 per day); the rest of the day is fine. What's the best method of getting the right server configuration? My current approach is to keep upgrading until the server stops dying... the next leap is:
    - Processor: AMD Phenom II X6-1055T HEXA Core
    - RAM: 4GB DDR2 SDRAM
    - HD1: SATA Drive (7,200 rpm) (+500 GB 7200 RPM SATA hard drive)
    - HD2: SATA Backup Drive (+500 GB SATA (7,200 rpm))
    - OS: Linux OS (+CentOS 5 64-bit)
    - Bandwidth: 6000GB Monthly Transfer (3000 in + 3000 out) (+100M uplink port)
    What's the best approach to working out what sort of server setup you need?

    Read the article

  • Is there any way to do "mail server parking"?

    - by percyboy
    I am managing a mail server which will be temporarily closed for three or four days due to data center maintenance. I want to find a solution that (completely or partly) avoids losing mail during this unavailable period. Because the data volume is huge, it is very hard to migrate it to another data center. One approach I have thought of is to set up a temporary mail server in another data center; when a new mail is received, this server automatically sends a return mail telling the sender "We are temporarily closed for three or four days. Please send the mail later or contact us by other means." I am wondering whether this approach is possible with an existing mail server, or is there something better available? (A free solution is preferred, as this is only temporary.)

    Read the article

  • Using the Dropbox API instead of an FTP server

    - by Somebody still uses you MS-DOS
    This is a small application scenario. Usually, when you have to make backups of source code/databases on your server, you use a second FTP server, plus a cron job to tar.gz your DB dumps and source files and send the archive to that FTP server from your application server. Dropbox has created an API to use its infrastructure, and since they provide 2GB for free accounts, I thought about uploading to it instead of an FTP server. So, if you do some freelance work, you could create a free account for each client and use this approach, maybe encrypting the files you send. You even gain a revision for each sent file, like a revision control system, for free, for the last 30 days. What do you think of this approach? Is it possible? And, more importantly: what are the security risks involved? (That's why I'm asking this on Server Fault; the sysadmin point of view will be more accurate.) Thanks!
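
    The cron job I have in mind would be roughly the following (a sketch only; the paths, the token handling and the upload endpoint are assumptions to be checked against the current Dropbox API documentation, and the gpg step covers the "maybe encrypting" idea):

        #!/bin/sh
        # Illustrative backup-to-Dropbox sketch; paths, token and endpoint are placeholders.
        STAMP=$(date +%Y%m%d)
        tar czf /tmp/backup-$STAMP.tar.gz /var/www/mysite /var/backups/db-dump.sql

        # Optional: encrypt before the archive leaves the server.
        gpg --batch --yes -c --passphrase-file /root/.backup-pass /tmp/backup-$STAMP.tar.gz

        # Upload via the Dropbox HTTP API (endpoint/arguments assumed; adjust to the API version in use).
        curl -s -X POST https://content.dropboxapi.com/2/files/upload \
            --header "Authorization: Bearer $DROPBOX_TOKEN" \
            --header "Dropbox-API-Arg: {\"path\": \"/backups/backup-$STAMP.tar.gz.gpg\"}" \
            --header "Content-Type: application/octet-stream" \
            --data-binary @/tmp/backup-$STAMP.tar.gz.gpg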

    Read the article

  • How to configure LAMP server for iOS social/chat app?

    - by andufo
    I'm in the last development phase of a social networking app for iOS that has a chat module. Right now I'm trying to figure out the best way to achieve these features:
    - Send a message instantly to another user; if the other user is online, delivery should be instant.
    - If a user reads the message, the sender should be notified of that action.
    - If a user visits my profile, I should be notified instantly.
    What would be, in your opinion, the best approach to achieve that experience? The server is CentOS 5.6. I've previously looked at XMPP and sockets, but I'm still unsure what the best approach is. Any opinions and resources will be much appreciated.

    Read the article

  • Examples of temporal database designs? [closed]

    - by miku
    I'm researching various database designs for historical record keeping, because I would like to implement a prototypical web application that (excessively) tracks changes made by users and lets them undo things, see revisions, etc.
    1. I'd love to use mercurial or git as a backend (with files as records), since they already implement the kind of append-only changes I imagine. I tried git and dulwich (a Python git API) and it went OK, but I was concerned about performance.
    2. A bi-temporal database design lets you store a row along with the time periods during which that record was valid. This sure sounds more performant than operating on files on disk (as in 1.), but I had little luck finding real-world examples (e.g. from open source projects) that use this kind of design and are approachable enough to learn from.
    3. Revisions à la MediaWiki, or an extra table for versions as in Redmine. The problem here is that DELETE would take the whole history with it.
    4. I looked at NoSQL solutions, too. With a document-oriented approach it would be simple to store the whole history of an entity within the document itself, which would reduce design and implementation time compared to an RDBMS approach. However, in this case I'm a bit concerned about ACID properties, which would be important in the application.
    I'd like to ask about experiences with real-world, pragmatic designs for temporal data.
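
    For option 2, the shape I have in mind is roughly the following (a sketch; the table and column names are invented, and it shows only the valid-time half — a fully bi-temporal design would add a second pair of columns for transaction time):

        -- One row per version of a record; valid_to IS NULL marks the current row.
        CREATE TABLE page_versions (
            page_id     INTEGER   NOT NULL,
            title       TEXT      NOT NULL,
            body        TEXT      NOT NULL,
            valid_from  TIMESTAMP NOT NULL,
            valid_to    TIMESTAMP NULL,          -- NULL = still valid
            PRIMARY KEY (page_id, valid_from)
        );

        -- An update never rewrites history: close the current row, insert the new version.
        UPDATE page_versions
           SET valid_to = CURRENT_TIMESTAMP
         WHERE page_id = 42 AND valid_to IS NULL;

        INSERT INTO page_versions (page_id, title, body, valid_from, valid_to)
        VALUES (42, 'New title', 'New body', CURRENT_TIMESTAMP, NULL);

        -- Undo / point-in-time read: the version that was valid at a given moment.
        SELECT *
          FROM page_versions
         WHERE page_id = 42
           AND valid_from <= '2012-06-01 00:00:00'
           AND (valid_to IS NULL OR valid_to > '2012-06-01 00:00:00');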

    Read the article

  • Scoring/analysis of Subjective testing for skills assessment

    - by ChrisBint
    I am lucky in the sense that I have been given the opportunity to be a "Technical Troubleshooter" for our offshore development team. While I am confident and capable of dealing with most issues, I have come across something that I am not. Based on initial discussions with various team members, both on and offshore, a requirement for a "repeatable, consistent" skills assessment has been identified. In my opinion, the best way to achieve this would be a combination of objective and subjective tests. The former would normally be an initial online skills assessment on various subjects, for example general C#, WCF and MVC. The latter would be a technical test where the candidate would need to solve various problems and (hopefully) explain the thought processes involved in the solution whilst doing so. Obviously, the first method is consistent, repeatable and extremely accurate. The second is always going to be subjective, depending on the approach taken, the solution (or possibly the lack of one) and other factors. The "scoring" of this is also going to come down to the experience and skills of the assessor, and this is where my problem lies: the person expected to be the assessor initially (me) has no experience, and the people who will ultimately continue this process for other candidates will never remain the same, due to project constraints and internal reasons, which changes the baseline for comparison. I am not aware of any suitable system that can be classed as consistent and repeatable for subjective tests given the two factors above, let alone if those factors did not exist. So anyway, I have to present a plan that will ultimately generate a skills/gap analysis, and it is unlikely that I will be able to use an objective method (most likely because of budget constraints). The only option left is the subjective method, with the issues above. Does anyone have any suggestions for an approach that may tick all the boxes?

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >