Search Results

Search found 20275 results on 811 pages for 'general performance'.


  • Is it safe/wise to run Drupal alongside bespoke business web apps in production?

    - by Vaze
    I'm interested to know the general community feeling about the safety of running Drupal alongside bespoke, business-critical ASP.NET MVC apps on a production server. Previously, my employer's Drupal-based 'visitor website' was hosted as a managed service with a third party, while the LoB sites were hosted in-house. That third party is no longer available, so I'm considering my options: bring Drupal in-house, or find another third party. My concern is that I have little experience with Drupal administration (and none securing it), and that adding PHP to my IIS server poses a security risk. Is there a best practice I can follow in this situation?
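
    One common mitigation is to isolate the PHP site in its own unmanaged IIS application pool, so a compromise of Drupal runs under a different identity than the ASP.NET apps. A minimal sketch using appcmd, assuming IIS 7+ with PHP via FastCGI; the site and pool names are hypothetical:

        rem create a dedicated application pool with no managed (.NET) runtime
        %windir%\system32\inetsrv\appcmd add apppool /name:"DrupalPool" /managedRuntimeVersion:""
        rem move the Drupal application into that pool, away from the LoB apps
        %windir%\system32\inetsrv\appcmd set app "Default Web Site/drupal" /applicationPool:"DrupalPool"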

    Read the article

  • Dual Monitors don't turn off/sleep when they should

    - by Mario
    Details: dual 25" monitors connected to an HP Envy, one via HDMI and one via DisplayPort through a DVI adapter. The power scheme is set to High Performance (Dim display: Never; Turn off display: 15 mins; Computer sleep: Never). The screensaver is set to kick in after 10 minutes of idle (which happens). Five minutes later, the screensaver stops, the "Monitor going to sleep" notice appears, and the monitors go to sleep briefly. All is well thus far. Then, suddenly, the Windows 7 device-unplugged alert sound plays and the monitors turn back on: screens black, only the mouse cursor displayed, backlighting on. This only started happening after I connected the second 25" monitor a few days ago; I had a 24" in its place before, and this wasn't happening. Why is this happening and how do I correct this behavior? Thanks in advance.
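
    Windows 7 can report what is blocking or triggering display power transitions; a hedged first diagnostic pass, run from an elevated command prompt:

        rem list processes and drivers currently holding the display awake
        powercfg /requests
        rem show what woke the machine most recently
        powercfg /lastwake
        rem list devices armed to wake the system (a flaky HID or display device is a common culprit)
        powercfg /devicequery wake_armed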

    Read the article

  • If I double my RAM on an x86 processor, does that double the RAM I can use for each individual process?

    - by Derek Reitz
    I don't understand how 32-bit OSes use RAM on a per-process basis. I've read that the maximum RAM an x86 processor running a 32-bit OS can address is 2^32 bytes = 4 GB, but that's just for one process, right? 3ds Max keeps crashing, and it typically never gets above 2 GB of RAM use before it crashes. If I increase my RAM from 4 GB to 8 GB, will that double how much RAM each individual process can use, or make no difference to my performance? Also, would increasing my VRAM or getting a better graphics card improve the performance of individual programs? Lastly, is there any way to upgrade an x86 processor to be able to run a 64-bit OS? It seems ridiculous to sell modern processors capped at 4 GB of RAM. Thanks. Quad-Core Intel i7 Q 720 @ 1.6GHz
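
    For context: on 32-bit Windows each process gets a 4 GB virtual address space, of which only 2 GB is user-mode by default, which matches the ~2 GB crash point; adding physical RAM does not raise that per-process ceiling. Whether a given executable can use more (under a /3GB boot option, or under a 64-bit OS via WoW64) depends on its LARGEADDRESSAWARE flag. A hedged way to inspect and set it with the Visual Studio tools, purely illustrative, and patching a vendor binary may violate its support terms:

        rem check whether the PE header opts in to >2 GB addresses
        dumpbin /headers 3dsmax.exe | findstr /i "large"
        rem set the flag on a copy, at your own risk
        editbin /LARGEADDRESSAWARE 3dsmax.exe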

    Read the article

  • VirtualBox: slow upload speed using NAT

    - by user1622094
    I'm running VirtualBox on an Ubuntu 12.04 server (host) with Windows 7 as the guest OS, using the (virtual) Intel PRO/1000 MT network card. I get good download performance with both NAT and bridged network settings, but upload speed is really slow using NAT. I have tried this on two different servers, one brand new and one several years old; both gave the same result. If you can explain this behavior or have ideas for further tests I can perform, please let me know.
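
    VirtualBox's NAT engine is a userspace proxy and is a known bottleneck for guest uploads. Two hedged things to try before settling on bridged mode; the VM name "Win7" is a placeholder:

        # enlarge the NAT engine's buffers (fields: mtu,socksnd,sockrcv,tcpsnd,tcprcv)
        VBoxManage modifyvm "Win7" --natsettings1 1500,128,128,1024,1024
        # or try the paravirtualized NIC (requires virtio drivers inside the Windows guest)
        VBoxManage modifyvm "Win7" --nictype1 virtio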

    Read the article

  • Should we install the OS on an SSD or not when running virtual machines?

    - by Raghu Dodda
    I have a new Dell Mobile Precision M6500 laptop with 8 GB RAM. It has two hard drives: a 500 GB drive at 7200 RPM and a 128 GB SSD. The main purpose of this laptop is software development in virtual machines. The plan is to install the base OS (Windows 7) and all programs on the 500 GB drive, and let the SSD contain only the virtual machine images. My understanding is that we get the best performance from the virtual machines if the images are on a separate hard drive from the base OS. Is this the way to go, or should I install the OS on the SSD as well? What are the pros and cons? The virtual machine images would be 20-30 GB each, and I might run one or two at a time.
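
    The question doesn't name the hypervisor; if it happens to be VirtualBox, one small convenience is pointing the default machine folder at the SSD so new images land there regardless of where the product itself is installed. A sketch; the drive letter and path are hypothetical:

        rem store all new VM images on the SSD rather than the system drive
        VBoxManage setproperty machinefolder "S:\VMs"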

    Read the article

  • Improve efficiency when using parallel to read from a compressed stream

    - by Yoga
    This is a follow-up to a previous question [1]. I have a compressed file and stream it into a Python program, e.g. bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt where parse.py reads from stdin continuously and prints to stdout. My EC2 instance has 16 cores, but top shows a load average of only 3 to 4. In ps, I see a lot of entries like sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null'. I know I could improve performance with -a in.txt, but in my case I am streaming from bz2 (I cannot extract it, since I don't have enough disk space). How can I improve efficiency in my case? [1] Gnu parallel not utilizing all the CPU
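
    With --pipe, GNU parallel itself often becomes the bottleneck at the default block size (the dd processes visible in ps appear to be part of how it chops the stream into records). Two hedged options that usually raise CPU utilization without extracting the file:

        # hand each worker much larger chunks, so parallel spends less time splitting
        bzcat data.bz2 | parallel --no-notice -j16 --pipe --block 10M python parse.py > result.txt
        # or deal chunks round-robin to 16 long-lived workers instead of spawning per block
        bzcat data.bz2 | parallel --no-notice -j16 --pipe --round-robin --block 10M python parse.py > result.txt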

    Read the article

  • Multiple iSCSI Targets or 1 that's shared?

    - by Joost Verdaasdonk
    On my network I have several types of files I want to store on a SAN: SQL databases and logs, Exchange data, and random files. Now I'm wondering whether I should create one iSCSI target with a large volume, initiate it from one of the servers, and share it so other servers can use it too, or create separate targets so each server uses its own storage. For the record, the storage could be separated, since the servers don't use each other's data. One reason I was considering a single target is ease of backup (but perhaps performance would be a problem?). What would be an advisable configuration for these types of data?

    Read the article

  • Type 1 Hypervisor on the desktop

    - by Blazemore
    I have a powerful home PC, and I've used VirtualBox to run Linux distros in Windows (and vice versa). I'm interested in trying out a lightweight type 1 hypervisor to run all my operating systems (Windows 7, Debian, Arch) and was looking for suggestions on which to pick and how to implement this. From what I gather, a type 1 hypervisor is a lightweight OS which simply provides VM management functionality. Will I get reasonable performance under each guest OS? Can all the guest OSes have access to a shared data drive, or is it best to have a storage server in another guest OS and mount it over the virtual network? What about gaming: is this feasible, or will I realistically need to run Win7 on bare metal? I'd appreciate any input.

    Read the article

  • Good Choice of Memory for Asus K52F-BBR5

    - by Christopher Painter
    I recently purchased an Asus K52F-BBR5 notebook, a basic laptop with an Intel P6100 CPU and the Mobile Intel® HM55 Express chipset. It came with 3 GB of DDR3 SODIMM memory and I'd like to expand it to 8 GB. I'm a little confused by DDR3 nomenclature and not up to date on chipsets, and I'd like to make a good choice when selecting memory. Crucial's database suggests either PC3-8500 with CAS 7 or PC3-10600 with CAS 9. Is the 8500 better because of its CAS 7, or will my chipset run the memory asynchronously at the higher speed and get better performance? Which is a better choice for my chipset and CPU? The price difference is negligible.
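
    As a rough back-of-envelope comparison (illustrative arithmetic only): first-word latency is roughly CAS cycles times the I/O clock period, so the two modules land almost on top of each other on latency, and the faster module wins on bandwidth:

        PC3-8500  = DDR3-1066, I/O clock 533 MHz: 7 cycles x 1.88 ns = ~13.1 ns
        PC3-10600 = DDR3-1333, I/O clock 667 MHz: 9 cycles x 1.50 ns = ~13.5 ns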

    Read the article

  • Video streaming and internet browsing on different bands/frequency

    - by user47207
    I have a Netgear WNDR3700, which lets clients on either the 2.4 GHz or the 5 GHz band access the internet and see every client and device on the network. I have a computer with two NICs, one on the 2.4 GHz band and the other on the 5 GHz band. My specific problem is that I would like to serve my video streams (Hulu, PS3 Media Server, PlayOn) to my PS3 over the 5 GHz band while internet browsing is routed over the 2.4 GHz band, so that the video streams aren't affected by general internet use. While the easiest solution would be to disable internet access on the 5 GHz AP, I would like to know of a solution that doesn't require that.
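
    One hedged way to do this on the Windows box is a persistent host route that pins the PS3's address to the 5 GHz NIC while the default route stays on the 2.4 GHz NIC. All addresses and the interface index below are placeholders; the real index comes from route print:

        rem pin traffic to the PS3 (192.168.1.50) to the 5 GHz interface (index 12)
        route -p add 192.168.1.50 mask 255.255.255.255 192.168.1.1 metric 1 if 12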

    Read the article

  • Minimize writes to SSD disks with Windows 7

    - by mark
    Most people use their SSD as the primary system installation disk with Windows 7. W7 already has a lot of optimizations for SSDs, both in terms of performance and lifetime. Minimizing writes increases the lifetime of an SSD, so post each suggestion as an answer and let others vote on them. Update: I'm not sure anymore that minimizing writes is a good thing [tm]; hard facts showing that SSDs degrade within a noticeable time are missing, and it seems this question can create a bit of FUD about the functionality of SSDs. In other words, I question the usefulness of my wiki question.
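
    For reference, a few of the commonly suggested write-reducing tweaks; each trades functionality for fewer writes, so these are options to weigh, not endorsements:

        rem drop the hibernation file (saves RAM-sized writes on every hibernate)
        powercfg -h off
        rem disable Superfetch, which continuously writes its prefetch cache
        sc config SysMain start= disabled
        rem disable the search indexer's on-disk index
        sc config WSearch start= disabled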

    Read the article

  • Is there a way to catch cmd.exe windows into tabs?

    - by user55542
    I use an editor that allows me to type in a command to run. To see the output without redirecting it to a file, I precede the command with "cmd /k", which leaves an open cmd.exe window. So I'd like to find a way to catch the call to cmd.exe and hand it to an application that tabifies cmd.exe windows (a terminal emulator, as it's called). The desired result would be similar to a tabbed editor: when it opens a file, it does so in another tab, not another window. While in a given situation it may be easier to modify the command to redirect output into the editor itself, in general it would be more helpful to catch all such calls into one window.

    Read the article

  • JBoss 4.2.3 Won't Start

    - by Thody
    Hi, I'm trying to start a new installation of JBoss 4.2.3. It gets as far as "INFO [Server] Core system initialized", then hangs for several minutes. There is a Java process running, but only at ~35% CPU, and the boot.log has no entries after about a second into the boot. Any ideas what might be up? Update: after about 10 minutes, I got a handful of garbage collection warnings: GC Warning: Repeated allocation of very large block (appr. size 512000): May lead to memory leak and poor performance.
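
    Those "GC Warning" lines look like they come from the Boehm collector used by GCJ, which suggests the box may be resolving java to GNU's gij rather than a Sun JDK, a classic cause of JBoss taking forever to boot. A hedged check and fix; the JDK path is hypothetical:

        # confirm which Java JBoss is actually using
        java -version
        # if it reports gij/GCJ, point JBoss at a real JDK in bin/run.conf:
        JAVA_HOME="/usr/java/jdk1.6.0_45"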

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums: users create and update content, and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that is later rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but if I want to use replication in the future (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
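
    For reference, the level can be trialed per connection before committing to it server-wide; whether doing so is wise is exactly the question above:

        -- per-session, for testing:
        SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        -- server-wide equivalent: put  transaction-isolation = READ-UNCOMMITTED
        -- under [mysqld] in my.cnf and restart mysqld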

    Read the article

  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation. I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home). The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server? A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client information and personal data such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox, Amazon AWS, or similar services; I couldn't justify the cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution, or other commercial services? I've heard about PogoPlug, but I don't know if there is a similar service that might address my security concerns. I could copy all my files to my laptop, because it has the space, but then I'd have to sync between my home computer and my laptop, and I've found in the past that I'm not very good about doing this. And if my laptop were lost or stolen, my data would be on it; the laptop drive is an SSD, and encryption solutions for SSDs are not good. Therefore, it seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risk of losing a laptop as higher; I am not an obvious hacking target online. My home broadband is cable internet, and it seems very reliable. So I want to know the best (reasonable) way to securely access my data from my laptop while on the road. I only need to access it from this one computer, although I may connect from my phone's 3G/4G, WiFi, a client's broadband, etc., so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar), which would provide about all the functionality I anticipate needing: SFTP and Dolphin to browse and download files, and SSH and the terminal for anything else. My Linux file server is set up with OpenSSH, and I think I have SSH relatively secured; I'm using DenyHosts too. But I want to go several steps further: I want to get the chance that anyone can get into my server as close to zero as possible while still allowing me access from the road. I'm not a sysadmin or programmer or real "superuser"; I have to spend most of my time doing other things. I've heard about "port knocking" but have never used it and don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as: Top 20 OpenSSH Server Best Security Practices; 20 Linux Server Hardening Security Tips; Debian Linux Stop SSH User Hacking / Cracking Attacks with DenyHosts Software; and more. I have not implemented every single thing I've read about, and probably can't.
    But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping for suggestions that are within my capability to implement and that leverage these facts to create much better security than the general-purpose suggestions in the articles above.
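
    Given a single known client, the biggest single win is usually disabling password logins entirely and whitelisting the one account. A minimal sshd_config sketch; the user name and port are placeholders:

        # /etc/ssh/sshd_config -- hedged single-user lockdown
        # non-default port just cuts scanner log noise; it is not real security
        Port 2222
        Protocol 2
        PermitRootLogin no
        # keys only; password brute-forcing becomes moot
        PasswordAuthentication no
        # exactly one permitted account
        AllowUsers ace

    Generate the keypair on the laptop with ssh-keygen -t rsa -b 4096, append the public key to ~/.ssh/authorized_keys on the server, then restart sshd and confirm key login works before closing password access.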

    Read the article

  • ZFS on top of iSCSI

    - by Solipsism
    I'm planning to build a file server using ZFS and BSD, and I was hoping to make it more expandable by attaching drives stored in other machines in the same rack via iSCSI (e.g., one machine runs ZFS, and others expose iSCSI targets that the ZFS box connects to and adds to zpools). Searching for other people who have tried this has pretty much led me to resources about exposing iSCSI shares on top of ZFS, but nothing about the reverse. Primarily I have the following questions: Is iSCSI over gigabit Ethernet fast enough for this purpose, or would I have to switch to 10GbE to get decent performance? What happens when one of the machines running an iSCSI target disconnects from the network? Is there a better way to do this that I'm just not clever enough to have realized? Thanks for any help.
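
    On the disconnect question: ZFS treats a vanished iSCSI LUN like any failed disk, so the usual hedged answer is to lay out redundancy across machines rather than within them. A sketch; da0-da3 are hypothetical FreeBSD device names as they appear after the initiator logs in:

        # mirror each local disk against a LUN from a different box,
        # so losing one machine degrades the pool instead of faulting it
        zpool create tank mirror da0 da2 mirror da1 da3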

    Read the article

  • Recommendations for SSD for server and database use?

    - by Tony_Henrich
    SSDs are a new technology and they are constantly improving. A lot of the posts here were written in 2009, when SSDs were less mature and not as fast; what was recommended back then is probably out of date today because of better options. The SSD will hold SQL Server databases, probably around 128 GB in size. The database backs a CMS and web server, so web pages need to get their data and render as fast as possible. Which modern SSD is recommended for such a use? Is there an SSD better than the Intel X25-E/X25-M in terms of performance/cost? (I am also weighing the cost of RAM + UPS (semi-persistent) versus an SSD of the same capacity. No RAID is involved.)

    Read the article

  • Web Server Setup

    - by gustyaquino
    Hello, in my workplace we want to run our own web server for at least 100 Apache/PHP/MySQL websites. My boss is opposed to hiring skilled personnel; he thinks we can do it ourselves. Currently we are working with a HostGator reseller account. I chose CentOS as the operating system, but I don't know the best hardware solution: HP? Dell? What about the setup on these platforms? Thanks. PS: sorry for my bad English. Edit: the purpose of this migration isn't related to performance issues, but independence.

    Read the article

  • An international mobile app - Should I set up EC2 instances in multiple regions?

    - by ashiina
    I am currently trying to launch a mobile app for users around the world. It is not a spectacular launch that will get millions of users in weeks, just another individual developer releasing an app. I know enough about the techniques of managing time zones, internationalizing strings, and whatnot (the application layer), but I cannot find any information on how I should manage my EC2 instances. Should I set up EC2 instances in different regions around the world? Is that a must-do, or is it overkill? I'm aware that it's the ideal solution in terms of performance, but managing servers in multiple regions gets very tough: DB issues, AMI management, etc. I'd much rather NOT do so. So I would like to know the general best practice when launching an international app/website. Note: for static content, I know it's better to use a CDN, so I'm planning on doing so.

    Read the article

  • Are Virtual-Desktop Managers good or bad for system resources?

    - by jasondavis
    I am looking at virtual desktop managers for Windows 7. Right now it seems that VirtuaWin is supposed to be about the best one available for Windows. I have never used anything like this, though, and I'm curious, from others' experience and knowledge: does something like this hog a lot of system resources? I do not NEED it, but it's a nice feature to have when I do want to use it, and my PC's performance is more important than using it. So are virtual desktop managers resource hogs, or probably not? Please share any tips/advice or comments on them, thank you =)

    Read the article

  • Using git with cgit for decentralized/centralized development

    - by polemon
    I plan to use git for hosting my projects on my server. I've read about cgit and git-daemon, and I have more or less decided to use those tools, but general use is still kind of confusing to me. What do I need to set up on the server to push my files onto it? And when the files on the server are newer than the files on my computer, how do I merge them? Also, say I develop on two computers: how do I merge from one computer to the other? And when two people are working on the same project, how do they merge their local repos with one another? As you can probably tell by now, I come from SVN, but I've worked with Mercurial and now I'd like to try git.
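
    The usual shape, as a hedged sketch with hypothetical paths and names: one bare repository on the server that every working machine pushes to and pulls from. "Merging from one computer to the other" then just means pushing from machine A and pulling on machine B; two people collaborating do the same through the shared repo.

        # on the server: create a bare repo for cgit/git-daemon to serve
        git init --bare /srv/git/project.git

        # on each workstation: clone it, work, then synchronize
        git clone ssh://user@server/srv/git/project.git
        git commit -am "work"
        git push origin master   # publish your changes
        git pull origin master   # merge in whatever the other machine pushed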

    Read the article

  • SSL on Apache seems to significantly affect WebDAV performance

    - by takesides
    I'm using Apache 2.2 running on Windows Server 2008 R2 as a WebDAV server for clients to upload large media files (roughly 100-2000 MB). I am finding that when I have SSL enabled (OpenSSL 0.9.8o) and use HTTPS for the uploads, throughput is around 13 Mbps, but when I disable it and just use HTTP I get around 80 Mbps. I can't understand why this is happening, as my understanding was that the heavy SSL work is done at the beginning of the connection. Does anyone have any idea why performance is so drastically affected by enabling SSL? Cheers.
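
    The handshake is indeed front-loaded, but every byte of the upload is still bulk-encrypted, so with a 2010-era OpenSSL and no AES-NI the cipher itself can genuinely cap throughput. Two hedged checks: benchmark what the box can encrypt, and steer Apache toward a cheaper cipher (RC4 was the classic period advice, though it is no longer considered secure by modern standards):

        # how fast can this box actually encrypt?
        openssl speed rc4 aes-128-cbc

        # httpd.conf: prefer the cheap stream cipher over heavier suites
        SSLCipherSuite RC4-SHA:AES128-SHA:HIGH:!ADH:!aNULL
        SSLHonorCipherOrder on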

    Read the article

  • How to delete previous revisions with svn?

    - by apache
    I want to clear all previous revisions and leave only the current one. Is there a way to do this? I can't find a likely command: [secret@vps303 ~]# svnadmin --help general usage: svnadmin SUBCOMMAND REPOS_PATH [ARGS & OPTIONS ...] Type 'svnadmin help <subcommand>' for help on a specific subcommand. Type 'svnadmin --version' to see the program version and FS modules. Available subcommands: crashtest create deltify dump help (?, h) hotcopy list-dblogs list-unused-dblogs load lslocks lstxns pack recover rmlocks rmtxns setlog setrevprop setuuid upgrade verify
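
    There is no in-place delete, but the standard hedged workaround is to dump only the revision range you want to keep and load it into a fresh repository. Paths and revision numbers below are placeholders, and the discarded history is gone for good, so back up first:

        # find the youngest revision
        svnlook youngest /var/svn/repos
        # keep only r1000 through the youngest (say 1234); the first dumped revision is written in full
        svnadmin dump /var/svn/repos -r 1000:1234 > tail.dump
        svnadmin create /var/svn/repos-new
        svnadmin load /var/svn/repos-new < tail.dump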

    Read the article

  • Can NFS be forced to refresh stale files/directories when not using noac on the mount?

    - by johnnycrash
    We mount without using noac. I have a file that I append to once every 20 minutes; it is then read with mmap about 5,000 times a minute, mapping only a couple of blocks per read. Needless to say, noac just kills the read performance, so we don't use it. I add data to the end of the file using a mount with noac and read from mounts without noac, and the reading mounts are not seeing the new data. I want to know if there is a function I can call from C to refresh the attributes of a path and all its files. EDIT: I should add that we cannot mount and unmount, since there are 16 servers running on each system and they are constantly accessing the files. Well... maybe we could mount and unmount if each server used its own mount; I'd like to avoid that if possible. Thanks!

    Read the article

  • What TLDs should I use for my NS records for redundancy? (DNSSEC support required)

    - by makerofthings7
    Question: as a general practice, is it a good idea to use multiple TLDs for the name servers? How should I choose which TLD would be a good candidate to host my NS names? More info: I am switching over 800 DNS zones to an outsourced DNS provider. I originally planned on naming the name servers nsX.company.com, but think it would be best to spread them across multiple TLDs such as .net, .org, and .info. Since I plan on supporting DNSSEC at company.com, I think all the first-tier name servers must support it as well. Part of the inspiration for this question came from our provider, UltraDNS: in the configuration screen for our domains, they actively verify and alert us if our name servers aren't exactly: pdns1.ultradns.net pdns2.ultradns.net pdns3.ultradns.org pdns4.ultradns.org pdns5.ultradns.info pdns6.ultradns.co.uk

    Read the article
