Search Results

Search found 1013 results on 41 pages for 'recommendation'.

Page 29/41

  • Which universal or driverless printing solution do you use/recommend?

    - by Matt
    I'm in need of a driverless printing solution for Microsoft Terminal Services 2003/2008, mainly to support clients who connect to our hosted servers over broadband. We were hoping that MSTS 2008 ThinPrint would be the answer, but unfortunately it performs poorly in the print area: the spooled files are too large. I found the following slightly outdated URL: http://www.msterminalservices.org/software/Printing/ It lists a number of products, but I have no experience with any of them. I'd like a product that works and is easy to install (as our clients are remote and not particularly tech savvy), and ideally one where I pay for a server license rather than a license for every client. What is your experience/recommendation, and what tips can you offer regarding TS printing? Thanks in advance.

    Read the article

  • Virtualized Development Server for simulating 3-Tier Environment

    - by chris.cyvas
    Hello, I am thinking about buying a new server-based box for development (redundantly redundant, I know ;)). Ideally, I want to run something like ESXi or the Xen hypervisor at the lowest level, and then add (at least) 5 Linux VMs for the following uses:

    - 2 web servers
    - 2 application servers
    - 1 database server

    I want to load-balance the 2 web servers and the 2 application servers, and (somewhat obviously) they all need to be networked together to simulate a production environment. Also, it used to be the case that the recommendation was to put each VM on its own hard drive, but I'm not sure that holds water anymore. Does anyone have any advice on how to pull this off? Gotchas, lookouts, etc.? Thanks!
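
    As a minimal sketch of the balancing piece (assuming nginx runs as the front-end proxy on one of the VMs; addresses and ports are hypothetical):

        upstream app_tier {
            server 10.0.0.21:8080;   # application VM 1
            server 10.0.0.22:8080;   # application VM 2
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_tier;   # round-robin by default
            }
        }

    The same pattern, pointed at the two web VMs, covers the web tier; HAProxy or Linux Virtual Server would work equally well here.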

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have one database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and, very important, what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead: it needs a certain amount of memory for locking and so on. But it has a very clean boundary; everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier: I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement, and at the end, the highest number wins. And many times it’s a mix; perhaps this person could segment customers into larger regions or districts or products, each in a database, and within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
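
    To make the schema-per-customer idea concrete, a hedged T-SQL sketch (all names hypothetical):

        -- One schema per customer inside a shared database
        CREATE SCHEMA Customer0001 AUTHORIZATION dbo;
        GO
        CREATE TABLE Customer0001.Orders (
            OrderID   int IDENTITY(1,1) PRIMARY KEY,
            OrderDate datetime NOT NULL
        );
        GO
        -- Security can then be granted per schema, to a per-customer database user
        GRANT SELECT ON SCHEMA::Customer0001 TO Customer0001User;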

    Read the article

  • NFS on NAS server blocks in cluster environment

    - by Zardoz
    In our department we have an Iomega NAS (px4-300d) connected to a Supermicro cluster with 5 nodes (12 cores per node). Each node mounts a share on that NAS using NFS. Unfortunately, after some time (several minutes) of sustained read/write operations from all nodes, the NAS starts to block, and a bit later it freezes completely. We have tried several options of the mount command (async, intr, wsize, rsize), but nothing helped. The NAS itself doesn't allow many options (or rather, none). Do you have any recommendations on how to integrate a NAS into a cluster environment using NFS?
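
    For reference, a conservative fstab sketch of the kind of mount being discussed (host name and transfer sizes illustrative; many small NAS boxes only speak NFSv3):

        # /etc/fstab on each node - force NFSv3 over TCP, modest transfer sizes
        nas:/nfs/share  /mnt/nas  nfs  vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768,noatime  0 0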

    Read the article

  • What kind of time lapse image stitching software is out there?

    - by AaronLS
    I took a bunch of digital images (JPG format) of a sunrise that I want to make into a time-lapse movie. The only recommendation I've seen was iStopMotion for Mac, but I am running Windows. I would prefer software that takes into account the metadata indicating when each image was taken when determining how long to display each frame, as the shots won't be perfectly consistent in their temporal spacing. Onion skinning would be a cool feature too. Please, one software suggestion per answer, to allow the voting system to do its job. Thanks in advance.
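
    One hedged command-line alternative, if no GUI tool fits: ffmpeg (cross-platform, including Windows) has a concat demuxer that accepts a per-frame duration, so a small script could turn the EXIF timestamps into a list file (file names and durations illustrative):

        # frames.txt - durations computed from the gaps between capture times;
        # the concat demuxer needs the last file repeated for its duration to apply
        file 'sunrise-001.jpg'
        duration 2.0
        file 'sunrise-002.jpg'
        duration 1.5
        file 'sunrise-002.jpg'

        ffmpeg -f concat -safe 0 -i frames.txt -vf fps=25 -pix_fmt yuv420p sunrise.mp4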

    Read the article

  • Kubuntu 11.10: lots of networking problems

    - by Cobraone
    Since I upgraded to 11.10 I have had a lot of problems with KDE. First of all, there are problems configuring a static IP address. To explain: at home I have a normal fiber ADSL line and use DHCP, but when I visit a customer I must set a static IP address. With ifconfig everything seems OK, but something goes wrong when resolving DNS names. (When I installed plain Ubuntu instead, everything worked again.) Now I have reinstalled Kubuntu 11.10 and I have the same problem. In addition, today I discovered that if I connect to the network at another customer's office, the desktop freezes and I can only switch between windows with Alt+Tab; no Fn key or right-click to open the run command works. So I unplugged the network (the configuration there is plain DHCP) and tried another jack in the office; it was the same. My laptop freezes when connected, while a friend's Fedora 14 machine works. So I decided to connect my Galaxy S II as a USB network device. Everything is OK for about 3 minutes; then, when I noticed a little loss of signal, the desktop froze again and I had to work (like now) just by switching between windows with Alt+Tab. Additional information: unplugging the network or restarting it via Konsole does not solve the freezing problem; every time I must open a console and reboot. Any idea what tests to do? One request: if I should post logs or anything else here, please guide me. I have used Linux since Ubuntu 9.x, but I am not an expert.
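
    If it helps as a baseline for testing, a minimal static configuration sketch that bypasses the KDE network applet entirely (addresses hypothetical):

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1

        # /etc/resolv.conf - the DNS side that appears to be failing
        nameserver 192.168.1.1
        search customer.example.com

    If name resolution works with this in place but not through NetworkManager, that points at the applet rather than the network stack.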

    Read the article

  • Can't read from the source file or disk

    - by Wanna coffee
    I have two WD external hard disks, each with a capacity of 1 TB. I'm trying to copy a SAP file (250 GB, with a .vmdk extension) from one hard disk to the other, but every time I try, the copy fails partway through with this error message. Both hard disks use the NTFS file system by default, yet it still shows me this error. Is this a problem with the OS, the hard disk, or the data being copied? What might the problem be? Please give me your suggestions and recommendations. Awaiting your reply.
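
    That error often indicates an unreadable sector or a flaky cable rather than the file system itself; two hedged first steps from a Windows command prompt (drive letters and paths illustrative):

        :: scan the source disk for bad sectors (E: = source)
        chkdsk E: /r

        :: restartable copy with limited retries
        robocopy E:\vm F:\vm disk.vmdk /R:1 /W:5 /Z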

    Read the article

  • Should `keepalive_timeout` be removed from Nginx config?

    - by Bryson
    Which is the better configuration/optimization: to explicitly limit the keepalive_timeout or to allow Nginx to kill keepalive connections on its own? I have seen two conflicting recommendations regarding the keepalive_timeout directive for Nginx. They are as follows:

        # How long to allow each connection to stay idle; longer values are better
        # for each individual client, particularly for SSL, but means that worker
        # connections are tied up longer. (Default: 65)
        keepalive_timeout 20;

    and

        # You should remove keepalive_timeout from your formula.
        # Nginx closes keepalive connections when the
        # worker_connections limit is reached.

    The Nginx documentation for keepalive_timeout makes no mention of the automatic killing, and I have only seen this recommendation once, but it intrigues me. This server serves exclusively TLS-secured connections, and all non-encrypted connections are immediately rerouted to the https:// version of the same URL.

    Read the article

  • Dealing with employees who use hotspots? [closed]

    - by javasocute
    Recently a consultant revealed to me and my superiors that a particular department in my company had a tremendous decline in productivity over the past year. This department happens to work mostly with laptops and tablets, and thus has wireless. Our Barracuda did not show any vast amounts of internet entertainment, so I was initially puzzled by his statement. However, he explained that more and more employees are using "hotspots" that, when their SSIDs are not broadcast, are invisible to our network. So basically these people could connect to hotspots and watch YouTube all day and we would never know it. The consultant also said this is becoming an issue for many companies. So my question is: how can I detect invisible hotspots? I was browsing Google and there are several different programs that can scan for such access points, but I am hoping to get a recommendation from the serverfault community. Thank you.

    Read the article

  • When using RAID10 + BBWC why is it better to separate PostgreSQL data files from OS and transaction logs than to keep them all on the same array?

    - by Vlad
    I've seen the advice everywhere (including here and here): keep your OS partition, DB data files and DB transaction logs on separate discs/arrays. The general recommendation is to use RAID1 for OS, RAID10 for data (or RAID5 if load is very read-biased) and RAID1 for transaction logs. However, considering that you will need at least 6 or 8 drives to build this setup, wouldn't a RAID10 over 6-8 drives with BBWC perform better? What if the drives are SSDs? I'm talking here about internal server drives, not SAN.

    Read the article

  • Monitoring remote employees with screenshots and screen sharing?

    - by Lulu
    I'm looking for a way to track and monitor the online employees I hire on Elance, oDesk and such. The tool should be able to:

    - produce screenshots at set intervals,
    - provide real-time screen sharing, and
    - preferably track computer usage while the user is logged in as "working", and state which task was done in that time.

    If there is no all-in-one solution, I will go with RealVNC for screen sharing, but I still need a recommendation for the other things. Thanks

    Read the article

  • Applying for job: how to showcase work done for (private) past clients?

    - by user33445566
    I want to apply for my first "real" (read: non-freelance) Ruby on Rails job. I've built several apps already. My best work (also the most logically complicated app) was for a freelance client, and I'd like to show it to potential employers. The only problem is that it isn't online anymore, and I've lost touch with the client. How can I include this work in my portfolio? About the app: it's a Facebook game. The client's business idea for this app was not the best; it was never going to make any money. I think it was kind of a vanity side project for him. The logo and graphics are nice-looking, though, and were designed by the client. I've actually spent a lot of time recently recoding most of the app and adding a full test suite, and I want to showcase the BDD/TDD skills I've acquired. I'm not very familiar with the etiquette (or law?) concerning this situation. Can I just put my new version of the app up at a free Heroku URL (perhaps with a "credits" section where I credit the ideas and graphic designs to my former client)? NOTE: again, this is just to show potential employers; I am not trying to market the app as my idea, or attract any users. Can I put some or all of the code on GitHub? What if I don't put the code up publicly, but merely send a tarball to potential employers? Do I need to ask permission from my former client (and what if he says no)? The last thing I want to do is get into any legal trouble, or offend the people I'm trying to get a job from. But I believe that my work and experience on this app are my strongest recommendation for getting a job.

    Read the article

  • Any reasonable UPS for a Desktop PC, just to shut it down?

    - by Michael Stum
    While I do have a surge protector to protect against overvoltage (hopefully), I have nothing against undervoltage. During a lightning storm, I had the lights flickering at some point. The PC continued to run, but it got me thinking of getting a UPS as a way to a) have a clean 120 V/60 Hz power source and b) have a way to shut down the PC in case something bad happens. I've heard that not all UPSes protect against power spikes, so I wonder if someone has a recommendation? It does not need to keep the PC on for a long time if the power goes out; it's good enough if it shuts the PC down after 5 minutes or so. There are 2 PCs connected. One is a Core i7-860 with a Radeon 5870 running Windows 7 Ultimate (so quite power hungry; it uses a 600 W PSU, but I have no measurements of the actual usage), and the other is a Windows Home Server machine running WHS/Windows Server 2003. Any recommendations in the low-price segment?

    Read the article

  • Transaction log is full and does not free up space

    - by titanium
    Hi, I have a database in SQL Server 2005 whose transaction log becomes full. It is using snapshot replication. I noticed the transaction log was not freeing up space, so I created an additional transaction log file. Three days have passed and the first transaction log is still full. I performed a full database backup and a transaction log backup, and then tried to shrink the transaction log, but the shrink failed. Can anyone advise why shrinking the transaction log is failing? Any other recommendations on how to resolve the problem?
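
    A few hedged diagnostics worth running first (database and file names hypothetical); with replication in the picture, the log often cannot truncate until the replicated transactions have been delivered:

        -- What is SQL Server waiting on before it can reuse the log?
        SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDB';

        -- How full is each database's log, really?
        DBCC SQLPERF(LOGSPACE);

        -- Only once the wait reason clears (e.g. REPLICATION) will this succeed:
        USE MyDB;
        DBCC SHRINKFILE (MyDB_log, 1024);   -- target size in MB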

    Read the article

  • Windows Load Balancing Services and File Shares

    - by cbkadel
    We are using Windows Load Balancing Services (WLBS). One of the things I notice is that if I create a file share on one of the physical hosts, I am able to browse to that share using the clustered IP address. This might be an 'opinion' question, but I haven't been able to find much literature on file shares in particular with WLBS. Is this a recommended configuration? Are there any limitations? And what about when the share contains different sets of content on the two hosts? For instance, take three 'hostnames': host1 (physical1), host2 (physical2), and cluster. I create the following shares:

        \\physical1\myshare
        \\physical2\myshare

    What I notice is that I can also see:

        \\cluster\myshare

    I'm guessing that this is read-only, and that there's no file synchronization. But what happens if they are in fact out of sync; what would a network browser see then? Thanks for your time!

    Read the article

  • Which motherboard for Intel i7 and how to get RAID working?

    - by jasondavis
    I am wanting to build a new PC, and I have a couple of questions.

    1) I want to go with the Intel Core i7-920 processor. Can anyone recommend a good, reliable motherboard for it? Graphics card support does not matter (SLI/CrossFire). I would like support for a lot of RAM, so the more RAM slots the better. I have read so many bad reviews about certain boards not working well that I would love a recommendation from experience.

    2) I want to run a couple of SSDs in RAID 0. I have never done this; will I need to purchase anything in addition to the motherboard, CPU and drives to get RAID working?

    Read the article

  • Which technology or strategy should a new/inexperienced freelancer use to earn a profit? [closed]

    - by w3softdev
    I have re-posted this question in the following group; if you find it suitable to answer, please follow this link (or copy and paste it into your browser): http://answers.onstartups.com/questions/32767/which-technology-or-strategy-a-new-inexperienced-freelancer-should-use-to-earn

    Well, this is my very first question in this section, and I don't really know whether my query belongs here or not. Anyway, I have a somewhat awkward query. (It is a bit of a story, but I guess some background about me is necessary.) I am a fresh graduate who has just started freelancing. In due course I got a project to develop a website for a mid-sized business (a travel agent). As I don't trust the client to pay me in full after the completion of the website, I want to use cheap and efficient technology so that whatever he does pay, I keep at least a few percent profit. I have learned ASP.NET, but when I inquired about the expense of hosting my website, I got the recommendation to develop the web app using PHP and MySQL instead of ASP.NET and MS SQL. The problem is that I don't know PHP. Should I learn PHP, or should I work with what I am comfortable with and try to recover the full fee I have earned? (As this is my first project, I have been advised that I may take a loss starting out; on the contrary, I don't want a loss, I want to earn a reasonable profit.)

    Read the article

  • Hardware recommendations for building an Ubuntu encrypted file server

    - by Robert Mashlan
    I would like to build a file server for my home network using Ubuntu. It will serve files from RAID 1-configured disks, mirrored either in the OS or in hardware, and will be connected to a Gigabit Ethernet LAN. The disks will use an encrypted file system, and it will serve Samba shares. I would like a recommendation on what kind of processing power/memory I would need to build a box that can sustain the full capacity of the Gigabit Ethernet connection for a single-connection file transfer, with the overhead of serving from an encrypted disk. I'm not looking to build a dream server; I just want enough processing capacity for high-performance (and reliable) file sharing, spending as little as possible for it. This may be tangential, but what kind of hardware would I need for the server to reliably drop into a low-power mode when no requests are being made of it?
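
    For what it's worth, one quick way to sanity-check a candidate CPU from a live USB session: recent versions of cryptsetup ship a built-in cipher benchmark, whose aes-xts figures can be compared against the roughly 110 MB/s a saturated Gigabit link demands:

        # Measures in-memory cipher throughput on this machine (no disk I/O)
        cryptsetup benchmark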

    Read the article

  • Relating ping to perceived browser GUI response

    - by cvsdave
    We periodically get complaints of poor GUI (browser page) response that we need to explore. I am looking for a quick and cheap first check to see whether the issue is network latency or server performance. Has anyone encountered any discussion relating ping time to perceived GUI response? I understand that GUI response is complicated, but it would be nice if we could find or develop a rule of thumb along the lines of "Hmmm, ping is over 200 ms; it might be a network problem". Ideally, this lives in a script on the user's machine so that we can see the latency they are seeing (BASH, Linux). A reference to a good discussion page would be a fine answer, as would any recommendation of other source material.
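
    As a starting point, a minimal sketch of such a script (host name and threshold are placeholders to tune):

        #!/bin/bash
        # Rough first check: compare average ping RTT against a threshold (ms).
        HOST=appserver.example.com
        THRESHOLD=200
        # The summary line looks like: rtt min/avg/max/mdev = 20.1/22.3/25.0/1.2 ms
        avg=$(ping -c 5 -q "$HOST" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
        if [ -z "$avg" ]; then
            echo "no reply from $HOST - network is the first suspect"
            exit 1
        fi
        if [ "${avg%.*}" -ge "$THRESHOLD" ]; then
            echo "avg RTT ${avg} ms - suspect the network first"
        else
            echo "avg RTT ${avg} ms - look at the server/application instead"
        fi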

    Read the article

  • Enable group policy for everything but the SBS?

    - by Jerry Dodge
    I have created a new group policy to disable IPv6 on all machines. There is only the one default OU, no special configuration. However, this policy must not apply to the SBS itself (nor to the other DC at another location, on a different subnet), because those machines depend on IPv6; all the rest do not. I did see a recommendation to create a new OU and put those machines in it, but many other comments say that is extremely messy and not recommended, since it becomes high-maintenance when it comes to changing other group policies. How can I apply this single group policy to every machine except the domain controllers? PS - Yes, I understand IPv6 will soon be the new standard, but until then we have no intention of implementing it, and it is in fact causing us many issues when enabled.
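
    Incidentally, the payload such a policy typically pushes is a single registry value (per Microsoft KB929852; 0xFF disables all IPv6 components except the loopback interface), shown here only to make concrete what must reach, or not reach, each machine:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters]
        "DisabledComponents"=dword:000000ff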

    Read the article

  • Creating dynamic plots

    - by geoff92
    I'm completely new to web programming, but I do have a programming background. I'd like to create a site that allows users to visit, enter some specifications into a form, submit the form, and then receive a graph. I have a few questions about this, only because I'm pretty ignorant:

    1. Is there a good framework I should start in? I know a lot of Java, I'm OK with Python, and I learned Ruby in the past. I figure I might use Ruby on Rails, only because I hear of it so often and I've also heard it's easy. If anyone has another recommendation, please suggest it.

    2. The user will be entering data into a form. I'm guessing the request they'll be making should be a GET request, right? Because I don't intend for any of the data they enter to modify my server (in fact, I don't intend to have a database).

    3. The data the user inputs will be used to perform calculations involving lots of matrices. I've written this functionality in Python. If I use Ruby on Rails, should it instead be written in Ruby?

    4. I've heard that you can place the load of the work either on your server or on the client's computer. Since the code performs heavy math, which option is preferable, and how do I alter the setup to make either the client or my server do the work? Should I be using a "cgi-bin"?

    5. In the code that I have now, I use matplotlib, a Python library, and then "show" the plot in order to see the graph. I specify the x and y limits, but I am able to "drag" the graph to see more data within the plot window. Ultimately, I want a graph shown on my site with that drag functionality. Is this possible? What if the client drags the graph so far that more computations must be made?

    Thanks!

    Read the article

  • Is it better to return NULL or empty values from functions/methods where the return value is not present?

    - by P B
    I am looking for a recommendation here. I am struggling with whether it is better to return NULL or an empty value from a method when the return value is not present or cannot be determined. Take the following two methods as examples:

        string ReverseString(string stringToReverse)  // takes a string and reverses it
        Person FindPerson(int personID)               // finds a Person with a matching personID

    In ReverseString(), I would say return an empty string, because the return type is string, so the caller is expecting that. Also, this way the caller does not have to check whether a NULL was returned. In FindPerson(), returning NULL seems like a better fit. Regardless of whether NULL or an empty Person object (new Person()) is returned, the caller is going to have to check whether the Person object is NULL or empty before doing anything with it (like calling UpdateName()). So why not just return NULL here, so the caller only has to check for NULL. Does anyone else struggle with this? Any help or insight is appreciated.
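
    A minimal C# sketch of a third option, the Try pattern, which makes absence explicit instead of overloading NULL or an empty object (the dictionary-backed repository is hypothetical; Person and UpdateName follow the examples above):

        using System.Collections.Generic;

        public class PersonRepository
        {
            private readonly Dictionary<int, Person> store = new Dictionary<int, Person>();

            // Returns false instead of NULL/empty when no match exists.
            public bool TryFindPerson(int personID, out Person person)
            {
                return store.TryGetValue(personID, out person);
            }
        }

        // Caller - no NULL-vs-empty convention to remember:
        Person person;
        if (repo.TryFindPerson(42, out person))
        {
            person.UpdateName("New Name");
        }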

    Read the article

  • Cookie Settings Storage Method

    - by Paul
    I've got a web app that needs to store some non-sensitive preferences for the user. Right now I'm storing their language preference, and the mode they want a window opened in by default, in two cookies:

    - "lang" can be "en" or "de"
    - "mode" can be "design" or "view"

    I might add a few more in the future. I'm not sure how many, but probably never more than a dozen. Language is parsed on every request, whereas the mode cookie is only used occasionally. I saw a recommendation, which made sense, that I shouldn't do what I was originally planning (strongly typing a user settings class deserialized on each request) because of the overhead involved. I see three options here and I'm not sure which is the best overall:

    1. Keep things as they are, and add a new cookie for each new setting.
    2. Combine the cookies into a single settings cookie, and add future values to it.
    3. Change the mode cookie to a settings cookie (leaving language alone), and add new user settings values to it.

    All would work, obviously. I'm leaning toward option three, but I'm not sure if there's a best practice for this.
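
    For what it's worth, a hedged sketch of option three, assuming ASP.NET for illustration: one cookie with subkeys via HttpCookie.Values (the "theme" value is a hypothetical future setting):

        // Writing: one "settings" cookie, values stored as subkeys
        HttpCookie settings = new HttpCookie("settings");
        settings.Values["mode"] = "design";
        settings.Values["theme"] = "dark";
        settings.Expires = DateTime.Now.AddYears(1);
        Response.Cookies.Add(settings);

        // Reading, with a default when the cookie or subkey is missing
        HttpCookie c = Request.Cookies["settings"];
        string mode = (c != null && c["mode"] != null) ? c["mode"] : "view";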

    Read the article

  • Good free way to clone a hard drive or a partition and send the image over the network (through FTP, Windows file sharing, "anything")?

    - by Deleted
    What I would ideally like is a free software solution which can:

    - boot from a CD/DVD/USB stick, and
    - clone a complete hard drive or a partition, and
    - send the resulting image file over the network through Windows file sharing (SMB; I could use Samba on my server to receive the image), FTP, SFTP, or SCP.

    It should work with Linux and Windows file systems (where specific file-system support is necessary). Is there anything good out there like this? I know Wikipedia lists a lot of cloning software, but I'm looking for a personal recommendation you have used yourself, as I find that more credible (I'll see from the upvotes whether a lot of visitors like the answer).
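
    Short of a packaged tool, the same result can be had from almost any Linux live CD with a hedged one-liner over SSH transport (device, block size and host illustrative):

        # Clone /dev/sda, compress on the fly, and stream it to a server over SSH
        dd if=/dev/sda bs=4M | gzip -c | ssh user@server 'cat > /backups/sda.img.gz'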

    Read the article

  • SharePoint 2007 Object Model: How can I make a new site collection, move the original main site to be a subsite of the new site collection?

    - by program247365
    Here's my current setup:

    - one site collection on a SharePoint 2007 (MOSS Enterprise) box (32 GB total in size)
    - one main site with many subsites (mostly created from the team site template, if that matters) that is part of the one site collection on the box

    What I'm trying to do (if there is a better order or method for the following, I'm open to changing it):

    1. Create a new site collection, with a main default site, on the same SP instance (this is done; easy to do in the SP Object Model)
    2. Move rootweb (a) to be a subsite in the new location, under the main site

    Current structure:

        rootweb (a)
        \ many sub sites (sub a)

    What the new structure should look like:

        newrootweb (b)
        \ oldrootweb (a)
          \ old many sub sites (sub a)

    Here's my code for step #2. Notes:

    - SPImport (in the object model under Microsoft.SharePoint.Deployment) is what is being used here
    - This code currently errors out with "Object reference not set to an instance of an object" when it fires the error event handler

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Deployment;

        public static bool FullImport(string baseFilename, bool CommandLineVerbose,
            bool bfileCompression, string fileLocation, bool HaltOnNonfatalError,
            bool HaltOnWarning, bool IgnoreWebParts, string LogFilePath,
            string destinationUrl)
        {
            #region my try at import
            string message = string.Empty;
            bool bSuccess = false;
            try
            {
                SPImportSettings settings = new SPImportSettings();
                settings.BaseFileName = baseFilename;
                settings.CommandLineVerbose = CommandLineVerbose;
                settings.FileCompression = bfileCompression;
                settings.FileLocation = fileLocation;
                settings.HaltOnNonfatalError = HaltOnNonfatalError;
                settings.HaltOnWarning = HaltOnWarning;
                settings.IgnoreWebParts = IgnoreWebParts;
                settings.IncludeSecurity = SPIncludeSecurity.All;
                settings.LogFilePath = fileLocation;
                settings.WebUrl = destinationUrl;
                settings.SuppressAfterEvents = true;
                settings.UpdateVersions = SPUpdateVersions.Append;
                settings.UserInfoDateTime = SPImportUserInfoDateTimeOption.ImportAll;

                SPImport import = new SPImport(settings);

                import.Started += delegate(System.Object o, SPDeploymentEventArgs e)
                {
                    // started
                    message = "Current Status: " + e.Status.ToString() + " "
                        + e.ObjectsProcessed.ToString() + " of "
                        + e.ObjectsTotal + " objects processed thus far.";
                    message = e.Status.ToString();
                };

                import.Completed += delegate(System.Object o, SPDeploymentEventArgs e)
                {
                    // done
                    message = "Current Status: " + e.Status.ToString() + " "
                        + e.ObjectsProcessed.ToString() + " of "
                        + e.ObjectsTotal + " objects processed.";
                };

                import.Error += delegate(System.Object o, SPDeploymentErrorEventArgs e)
                {
                    // broken
                    message = "Error Message: " + e.ErrorMessage.ToString()
                        + " Error Type: " + e.ErrorType
                        + " Error Recommendation: " + e.Recommendation
                        + " Deployment Object: " + e.DeploymentObject.ToString();
                    System.Console.WriteLine("Error");
                };

                import.ProgressUpdated += delegate(System.Object o, SPDeploymentEventArgs e)
                {
                    // something happened
                    message = "Current Status: " + e.Status.ToString() + " "
                        + e.ObjectsProcessed.ToString() + " of "
                        + e.ObjectsTotal + " objects processed thus far.";
                };

                import.Run();
                bSuccess = true;
            }
            catch (Exception ex)
            {
                bSuccess = false;
                message = string.Format("Error: The site collection '{0}' could not be imported. The message was '{1}'.\nAnd the stacktrace was '{2}'",
                    destinationUrl, ex.Message, ex.StackTrace);
            }
            #endregion
            return bSuccess;
        }

    Here is the code calling the above method:

        [TestMethod]
        public void MOSS07_ObjectModel_ImportSiteCollection()
        {
            bool bSuccess = ObjectModelManager.MOSS07.Deployment.SiteCollection.FullImport(
                "SiteCollBAckup.cmp", true, true, @"C:\SPBACKUP\SPExports",
                false, false, false, @"C:\SPBACKUP\SPExports",
                "http://spinstancename/TestImport");
            Assert.IsTrue(bSuccess);
        }

    Read the article
