Search Results

Search found 1341 results on 54 pages for 'factor mystic'.


  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or preferably facts, on whether it's OK to install a server's OS on the RAID array or not. My guess is that installing it on separate drives is best, but I'm interested in the performance implications. The server in question will have 8 cores (2.4 GHz each), 24 GB RAM, and ~16 TB of usable space on server-class drives in RAID 10. There is also a subsystem of roughly equivalent size for backup. I will be running CPU/memory-intensive applications on this server in addition to using it as file storage for my work (research lab). If I install the OS (haven't decided which one; probably Ubuntu, Fedora, or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID 10? If it is better to have the OS on separate drives, should I go for 150 GB VelociRaptors in RAID 1 or smallish SSDs in RAID 1? Money is unfortunately a factor, as I think I'm close to maxing out my budget as it is. Thanks!

    Read the article

  • What is the fastest RAID in practice?

    - by Luke
    I'm going to be rebuilding my server, and I want much faster access to my data. I've used RAID 1 and 0 in the past, and decided on RAID 10 (dedicated RAID card). Then someone told me to use RAID 5+0, and someone else told me to use RAID 6+0. Assuming the hardware RAID card supports each level, what is currently the fastest RAID available, given x number of hard drives? Reliability is now another factor, and I am willing to spend money on new drives if a drive (or several) fails. I simply want to know what the fastest RAID level is, along with some reliability for recovering from a failure.
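    As a rough frame for the question, the sketch below (Python) models naive sequential throughput per RAID level from spindle counts alone. It ignores controller overhead, caching, and random-I/O penalties, and the 120 MB/s per-drive figure is only an assumption, so treat the output as relative, not absolute.

        # Naive streaming-throughput model for nested RAID levels.
        # Assumes n identical drives; real results depend heavily on the
        # controller, stripe size, and workload.
        def raid_throughput(level, drives, drive_mbps=120.0):
            """Return (read, write) MB/s under a simple spindle-count model."""
            if level == "10":   # striped mirrors: reads can hit every spindle,
                return drives * drive_mbps, drives / 2 * drive_mbps  # writes hit half
            if level == "50":   # two RAID 5 groups striped: n-2 data spindles
                return (drives - 2) * drive_mbps, (drives - 2) * drive_mbps
            if level == "60":   # two RAID 6 groups striped: n-4 data spindles
                return (drives - 4) * drive_mbps, (drives - 4) * drive_mbps
            raise ValueError(level)

        for level in ("10", "50", "60"):
            r, w = raid_throughput(level, drives=8)
            print("RAID %s with 8 drives: ~%d MB/s read, ~%d MB/s write" % (level, r, w))

    Under this model RAID 10 wins on reads, while the parity levels trade write speed (and rebuild time) for capacity; small random writes on RAID 5/6 suffer a further read-modify-write penalty the model does not capture.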

    Read the article

  • Excel 2010: How to color the area between charts?

    - by Quasdunk
    Hello, I already asked this question on Stack Overflow but it hasn't been answered yet. Instead I was advised to try it here, so here I go :) There's a simple XY line chart in Excel (2010). It is surrounded by two other graphs which are parallel to it but offset by the same factor in the positive and negative directions, something like this:

    ---------------- (positively offset parallel graph)
    ---------------- (main graph)
    ---------------- (negatively offset parallel graph)

    Now I want to color the space between the main graph and the offset ones, but I just can't seem to find a way! Is it maybe possible with VBA? Or is there maybe a solution for Excel 2007?

    Read the article

  • HTTP transfer speeds start fast, then slow to a crawl

    - by AnITAdmin
    We just got a new dedicated 1-gigabit server running IIS. The CPU is at 15% or less, and 3 GB of the 4 GB of RAM is unused. We are pushing 110 Mbit/s, yet speeds are really slow. In fact, here's how it happens: we connect, the speeds are really fast at first, and then they quickly decline to 40 kB/s or less. What's going on? It seems the server just won't go above 120 Mbit/s. The files are all very large, 50 MB to 500 MB. Could this be a factor? Again, CPU, RAM, and UI responsiveness when accessing the server remotely all seem fine.

    Read the article

  • Suggestions for hosted file sharing services

    - by Jon
    Before I pose my question, I will give some insight into my scenario: I work for a small business (cost is an important factor), our bandwidth is limited and would not support an in-house FTP server, and we need to share files (mostly PDF, InDesign, and Illustrator documents) with our clients. As we expand, we are finding that our current locally hosted FTP solution is too slow and is becoming a detriment to our sales team. What we need is a remotely hosted solution to share files with our clients, specifically with the following features:

    - More than 100 GB of secure storage
    - The ability to give clients unique login credentials granting access to a personalized directory or folder, while limiting access to other files on the server
    - A relatively simple web-based UI for clients with limited computer knowledge

    We have considered a dedicated remote server and web-based services (box.net, yousendit.com, onehub.com, filesanywhere.com), but I am unsure about the direction we should be taking - have I left another solution out? What would you suggest? Thanks in advance.

    Read the article

  • Is 50% download speed on a wireless G network normal?

    - by Bartlomiej Skwira
    I have a wired connection of about 36 Mb/s, but my wireless speed maxes out at about 18-19 Mb/s. I have a WRT54G-TM (T-Mobile, 802.11g) router with DD-WRT firmware - I've upgraded it to the latest build. I've made some settings changes:

    - Channel: 13
    - Wireless network mode: G-only
    - ACK Timing: 0
    - Fragmentation Threshold and RTS Threshold: 2304
    - Basic Rate: All

    Signal/noise ratio: -46/-94, signal quality ~50-60%. Is this normal with G networks? Edit: The AP is located about 2 meters from the laptop, with no walls or metal objects in between, but it's next to a TV. I've done a channel scan (I had problems locating it; go to "Status - Wireless - Site Survey" - lame naming) and everybody else is on channels 1 and 6. I switched to channel 11 but it didn't help. As for transmit power, I got the best results with the default 71 mW. The antenna might be a factor; I'm using the default two antennas.

    Read the article

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be very dormant most of the time but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time and then, on burst days, bring one more server up (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do:

    - Bring the node up and repair it immediately
    - After the burst time is over, decommission the powerful node
    - Use the always-on server as the seed node

    My main question is how to get the nodes to share all the data, since I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up 2 extra servers instead of just one?
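    For what it's worth, the keyspace change itself is small; a minimal sketch in Python using the DataStax cassandra-driver package is below. The keyspace name "app" and the contact point are placeholders, and the ALTER only updates metadata - existing rows are not copied to the new node until a repair runs.

        # Raise the replication factor to 2 once the second node has
        # joined, then repair so the new node receives the data.
        from cassandra.cluster import Cluster

        cluster = Cluster(["10.0.0.1"])   # the always-on seed node
        session = cluster.connect()

        session.execute(
            "ALTER KEYSPACE app WITH replication = "
            "{'class': 'SimpleStrategy', 'replication_factor': 2}"
        )
        # Then, on the new node (outside Python):  nodetool repair app

    Scaling back down is the mirror image: run nodetool decommission on the burst node, then drop the replication factor back to 1.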

    Read the article

  • Putting two physical hard drives in a single 2.5" bay?

    - by dgw
    My crazy brain came up with this ridiculous idea. "Why can't I have a single 2.5" drive device that actually contains two independent hard drives? I want RAID 1 data mirroring and the security of having redundant drives. This is a thing that must exist!" To be clear, I am not asking if I can somehow shoehorn two 2.5" drives into a single bay, replace an optical drive with an additional HDD, or anything like that. What I have in mind is a single 2.5"-form-factor device that houses two independent hard drives, with separate housings (likely) and distinct controllers. Probably all they'd need to share is the power & SATA connections. Probably no such thing exists, but because I know there exist hard drives the physical size of a stack of postage stamps (roughly), I have to ask.

    Read the article

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open table cache set to 1800 and I have a total of 1112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens due to concurrent connections all opening tables, and I think I should raise the cache limit. I understand that the cache size is limited by the file descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website yields mostly posts explaining the connection factor or giving indecisive answers. My question: can I safely increase the open table cache limit? Is there a maximum?
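    It can help to watch the raw counters rather than the tuning script's summary. A minimal sketch (Python with the mysql-connector-python package; credentials are placeholders, and the variable is table_open_cache on MySQL 5.1+, table_cache on older versions):

        # Compare how many tables are open now against how many have been
        # opened since startup; a fast-growing Opened_tables after warm-up
        # suggests the cache is too small.
        import mysql.connector

        cnx = mysql.connector.connect(host="localhost", user="root", password="secret")
        cur = cnx.cursor()

        cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN "
                    "('Open_tables', 'Opened_tables')")
        status = dict(cur.fetchall())

        cur.execute("SHOW GLOBAL VARIABLES LIKE 'table_open_cache'")
        cache_size = cur.fetchone()[1]

        print("cache size:", cache_size)
        print("open now:", status["Open_tables"])
        print("opened since startup:", status["Opened_tables"])

    Beyond file descriptors, the main practical cost of a larger cache is memory for each open table, so raising the limit in steps while watching Opened_tables is the safe route.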

    Read the article

  • OpenVPN client-to-client connection re-encrypted at server?

    - by user1684411
    Currently I'm using a site-to-site OpenVPN setup. The routers encrypt/decrypt all traffic that goes from one net to another; one of them is the OpenVPN server. This works, but performance is not as good as it could be. I think the limiting factor is the CPU power of the router. Would it be better to use client-to-client connections and access the file server in one net from a PC in the other, so that the OpenVPN server does not have to decrypt the (whole) packets?

    Read the article

  • How can I disable write protection in my USB flash drive?

    - by 97847658
    My USB flash drive is currently unusable because it somehow (quite suddenly!) became write-protected. I have googled around and tried many solutions to this problem, but none of them have worked so far. Here are some of the solutions I've tried:

    - The drive has no physical switch or button.
    - Formatting the drive won't work, even from the command line, even "low-level formatting", because the drive is (after all) write-protected.
    - Changing certain registry keys to 0 doesn't seem to work.
    - Repair_Neo2.9.exe says "USB Flash Disk not found!"

    One factor that may make it more difficult to find a solution: I have no idea what the make or model is, because I received the USB flash drive from my university as a gift. So if anyone knows how to find the make and model, that alone might be helpful. Any ideas? Thanks.

    Read the article

  • Why does heat production increase as the clock rate of a CPU increases?

    - by Nils
    This is probably a bit off-topic, but the whole multi-core debate got me thinking. It's much easier to produce two cores (in one package) than to speed up one core by a factor of two. Why exactly is this? I googled a bit, but found mostly very imprecise answers from overclocking boards which do not explain the underlying physics. The voltage seems to have the most impact (quadratic), but do I need to run a CPU at a higher voltage if I want a faster clock rate? Also, I'd like to know exactly why (and how much) heat a semiconductor circuit produces when it runs at a certain clock speed.
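    The usual first-order answer is the CMOS dynamic power equation, P = a * C * V^2 * f (activity factor, switched capacitance, voltage, frequency). Frequency enters linearly, but higher frequencies generally require higher voltage to keep the transistors switching reliably, and voltage enters squared. A small illustrative calculation (the 30% voltage bump for a doubled clock is an assumption; the real voltage/frequency curve is chip-specific):

        # First-order CMOS dynamic power: P = a * C * V^2 * f.
        # Constants are illustrative, not measurements of a real CPU.
        def dynamic_power(voltage, freq_ghz, capacitance=1.0, activity=1.0):
            return activity * capacitance * voltage ** 2 * freq_ghz

        base = dynamic_power(voltage=1.0, freq_ghz=2.0)

        two_cores = 2 * base                                  # two cores at base clock
        one_fast = dynamic_power(voltage=1.3, freq_ghz=4.0)   # one core, doubled clock

        print("two cores @ 2 GHz: %.1fx the power" % (two_cores / base))  # 2.0x
        print("one core  @ 4 GHz: %.1fx the power" % (one_fast / base))   # ~3.4x

    This is why two slower cores can be cheaper, thermally, than one core at twice the clock; leakage power, which also grows with voltage and temperature, widens the gap further in practice.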

    Read the article

  • Recommendation for hardware upgrade: thin clients? Or...?

    - by Alex C.
    I work for an animal shelter in Upstate New York. We have about 50 machines running XP Pro, connected to a Windows network with a domain. About half of these computers are used for nothing more than running two web-based apps - one to keep track of our animals, the other to process credit cards. Having a full-blown desktop PC seems like overkill for this purpose. The PCs are three to five years old, and I'd like to come up with a plan to upgrade the hardware. Our donations are down (not surprising, given the economy), so cost is a big factor. Can people recommend some options? Some sort of thin client, maybe?

    Read the article

  • Do multi-platter hard drives use all of their heads to read simultaneously?

    - by WiSaGaN
    Suppose we have a hard disk with 2 platters and the characteristics below:

    - Rotational rate: 10,000 RPM
    - Avg. sectors/track: 1,000
    - Surfaces: 4
    - Sector size: 512 bytes

    I was reading "Computer Systems: A Programmer's Perspective, 2nd ed." when I found that it calculates transfer time as if the disk only uses ONE head to read a sector. If that's the case, why not use 4 heads to write (read) on 4 surfaces? Then when I write a 2 KB file, each head should only need to wait for the platters to rotate one sector length instead of four, reducing the transfer time by a factor of 4. Or even redesign the sector layout so that each 512-byte sector lies on one cylinder, spread across the 4 tracks at the same position on the 4 surfaces, (512/4) bytes on each. Then when the drive needs to read a sector of 512 bytes, the disk only has to rotate roughly 1/4 as far as in the original scheme. The idea looks like RAID 0.
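    Plugging the question's numbers in makes the hoped-for gain concrete; this is the book's single-head transfer-time calculation plus the hypothetical 4-head variant:

        # Transfer time for one sector, from the drive geometry above.
        RPM = 10_000
        SECTORS_PER_TRACK = 1_000
        SURFACES = 4

        seconds_per_rev = 60.0 / RPM                      # 6 ms per revolution
        t_sector = seconds_per_rev / SECTORS_PER_TRACK    # one 512 B sector under one head

        print("single head: %.1f us" % (t_sector * 1e6))              # 6.0 us
        print("4 heads:     %.1f us" % (t_sector / SURFACES * 1e6))   # 1.5 us

    Note this only shrinks the transfer component; the ~3 ms average rotational latency and the seek time dominate either way, which is part of why real drives don't do this (that, and the difficulty of keeping every head on-track simultaneously at modern track densities).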

    Read the article

  • Can time skew on Windows be reduced to +/- 5ms?

    - by mbac32768
    A number of our Windows workstations, running ntpd, simply cannot keep time. Our Linux workstations and servers running the same ntpd config don't have this problem, they can stay within +/- 5ms of skew. The Windows hosts easily drift to seconds and sometimes minutes apart. This is a problem for us. The only common factor we have been able to isolate is that the hosts that can't keep time are running Windows. Is there something fundamentally impossible with what we're trying to do?
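    One way to put numbers on the drift before digging further is to poll the offset from each host. A minimal sketch (Python with the third-party ntplib package; the server name is a placeholder for your local NTP source):

        # Log the local clock's offset from an NTP server once a minute.
        import time
        import ntplib

        client = ntplib.NTPClient()
        for _ in range(60):
            response = client.request("ntp.example.com", version=3)
            print("offset: %+.1f ms" % (response.offset * 1000))
            time.sleep(60)

    Run side by side on a Windows and a Linux host, this makes the difference in drift easy to graph and to correlate with load or power-management events.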

    Read the article

  • What version-control system is most trivial to set up and use for toy projects?

    - by Norman Ramsey
    I teach the third required intro course in a CS department. One of my homework assignments asks students to speed up code they have written for a previous assignment. Factor-of-ten speedups are routine; factors of 100 or 1000 are not unheard of. (For a factor-of-1000 speedup you have to have made rookie mistakes with malloc().) Programs are improved by a sequence of small changes. I ask students to record and describe each change and the resulting improvement. While you're improving a program it is also possible to break it. Wouldn't it be nice to back out? You can see where I'm going with this: my students would benefit enormously from version control. But there are some caveats:

    - Our computing environment is locked down. Anything that depends on a central repository is suspect.
    - Our students are incredibly overloaded. Not just classes but jobs, sports, music, you name it. For them to use a new tool it has to be incredibly easy and have obvious benefits.
    - Our students do most work in pairs. Getting bits back and forth between accounts is problematic. Could this problem also be solved by distributed version control?
    - Complexity is the enemy. I know setting up a CVS repository is too baffling - I myself still have trouble because I only do it once a year. I'm told SVN is even harder.

    Here are my comments on existing systems:

    - I think central version control (CVS or SVN) is ruled out because our students don't have the administrative privileges needed to make a repository that they can share with one other student. (We are stuck with Unix file permissions.) Also, setup on CVS or SVN is too hard.
    - darcs is very easy to set up, but it's not obvious how you share things. darcs send (to send patches by email) seems promising, but it's not clear how to set it up.
    - The introductory documentation for git is not for beginners. Like CVS setup, it's something I myself have trouble with.

    I'm soliciting suggestions for what source control to use with beginning students. I suspect we can find resources to put a thin veneer over an existing system and to simplify existing documentation. We probably don't have resources to write new documentation. So, what's really easy to set up, commit, revert, and share changes with a partner, but does not have to be easy to merge or to work at scale? A key constraint is that programming pairs have to be able to share work with each other and only each other, and pairs change every week. Our infrastructure is Linux, Solaris, and Windows with a NetApp filer. I doubt my IT staff wants to create a Unix group for each pair of students. Is there an easier solution I've overlooked? (Thanks for the accepted answer, which beats the others on account of its excellent reference to Git Magic as well as the helpful comments.)
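    For scale, the per-assignment workflow the question asks for is only a handful of git commands. A minimal, runnable sketch driving them from Python (typing the same commands in a shell works identically; file names and commit messages are invented for illustration):

        # init, commit, change, commit, back out - the whole student loop.
        import subprocess
        from pathlib import Path

        def git(*args):
            subprocess.run(["git", *args], check=True)

        git("init")                                       # one-time setup, no server needed
        git("config", "user.name", "Pair 7")              # so commits work on a bare account
        git("config", "user.email", "pair7@example.edu")

        Path("speedup.c").write_text("/* baseline version */\n")
        git("add", "speedup.c")
        git("commit", "-m", "baseline")

        Path("speedup.c").write_text("/* attempted optimization */\n")
        git("commit", "-am", "optimization attempt")

        git("revert", "--no-edit", "HEAD")                # back out a change that broke it

    On the sharing constraint: git can clone and pull over a plain filesystem path (git clone /path/to/partner/repo), so ordinary Unix file permissions between two accounts are enough - no server or central repository required.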

    Read the article

  • Generate a Strong Password using Mac OS X Lion’s Built-in Utility

    - by Usman
    You might’ve heard of the LinkedIn and Last.fm security breaches that took place recently, not to mention the thousands of websites that have been hacked to date. Nothing is invulnerable to hacking, and when something like that happens, passwords are leaked. Choosing a good password is essential, and a good password generator can give you the best blend of alphanumeric and symbolic characters, making up a strong password. There are a variety of password generators out there, but not many people know that there’s one built right into Mac OS X Lion. Read on to see how you can generate a strong password without any third-party application. To do this, open System Preferences and click “Users & Groups”.

    Read the article

  • Password Security: Short and Complex versus ‘Short or Lengthy’ and Less Complex

    - by Akemi Iwaya
    Creating secure passwords for our online accounts is a necessary evil given the huge increase in database and account hacking these days. The problem, though, is that no two companies have a similar policy for complex and secure password creation; factor in the continued creation of insecure passwords, or reuse of the same password across multiple sites, and trouble is just waiting to happen. In its latest study, Ars Technica took a look at multiple password types, how users fared with them, and how well those password types held up to cracking attempts. The password types that Ars Technica looked at were comprehensive8, basic8, and basic16. The comprehensive type required a variety of upper-case letters, lower-case letters, digits, and symbols, with no dictionary words allowed. The only restriction on the two basic types was the number of characters used. Which type do you think was easier for users to adopt, and which did better in the two password-cracking tests? You can learn more about how well users did with the three password types, and the results of the tests, by visiting the article linked below. What are your thoughts on the matter? Are shorter, more complex passwords better or worse than short or long, but less complex, passwords? What methods do you feel work best, since most passwords are limited to approximately 16 characters in length? Perhaps you use a service like LastPass or keep a dedicated list/notebook to manage your passwords. Let us know in the comments!
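    The raw search-space arithmetic behind the comparison is easy to check. A minimal sketch (assuming every character is chosen uniformly at random, which real users rarely do):

        # Search-space size in bits: length * log2(alphabet size).
        import math

        def entropy_bits(alphabet_size, length):
            return length * math.log2(alphabet_size)

        print("basic8  (26 lowercase, 8 chars):  %.1f bits" % entropy_bits(26, 8))   # 37.6
        print("comp8   (~95 printable, 8 chars): %.1f bits" % entropy_bits(95, 8))   # 52.6
        print("basic16 (26 lowercase, 16 chars): %.1f bits" % entropy_bits(26, 16))  # 75.2

    By this measure a long, simple password beats a short, complex one, though the advantage evaporates if the long password is a predictable phrase.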

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server, because you can continue to log ship from 2005 to 2008 R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

        restore database [dw] with norecovery;

    and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it, so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database, so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!

    General database continuous integration:
    - What is Database Continuous Integration? (David Atkinson)
    - Continuous Integration for SQL Server Databases (Troy Hunt)
    - Installing NAnt to drive database continuous integration (David Atkinson)
    - Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    - How the "migrations" approach makes database continuous integration possible (David Atkinson)
    - Continuous Integration for the Database (Keith Bloom)

    Setting up continuous integration with Red Gate tools:
    - Continuous integration for databases using Red Gate tools - A technical overview (white paper, Roger Hart and David Atkinson)
    - Continuous integration for databases using Red Gate SQL tools (product pages)
    - Database continuous integration step by step (David Atkinson)
    - Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    - Database schema synchronisation with RedGate (Vincent Brouillet)
    - Database continuous integration and deployment with Red Gate tools (David Duffett)
    - Automated database releases with TeamCity and Red Gate (Troy Hunt)
    - How to build a database from source control (David Atkinson)
    - Continuous Integration Automated Database Update Process (Lance Lyons)

    Other:
    - Evolutionary Database Design (Martin Fowler)
    - Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J. Sadalage)
    - The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    - Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    - Continuous Database Integration (covers MySQL; Pearson Education)

    Read the article

  • The Most Effective Learning Methods – The Results

    - by BuckWoody
    Yesterday I posted a blank graph and asked where you thought the labels should go for the most effective learning methods, according to a study presented to me and other teachers here at the University of Washington. Here are the labels in the correct order according to that study - and remember, "Teaching" here means one student explaining something to another. It isn't really that surprising to learn that we comprehend best when we have to teach a subject to someone else, and you can see that the "participation factor" is the key in the learning methods. The real shocker was the retention level of the various learning modes - lecture was down near the single digits! What does this have to do with databases or the DBA? Well, we all need to learn new things - and many of us are asked to teach others a new task. To be a good teacher, we have to know how a student learns best - and of course that makes us better students as well. So next time you're asked to transfer some knowledge to someone else, take a look at this chart first - and let me know how it affected your knowledge transfer.

    Read the article

  • Rendering shadow sprites in cocos2d-x

    - by lukeluke
    I am writing a 2D game with cocos2d-x. I want to put a "shadow" sprite on a background sprite using the equation MAX(0, Cd*1 - Cs*S), where Cd is the destination color (that is, a background pixel), Cs is the source color (the shadow pixel), and S is a scale factor (between 0 and 1). The MAX() function is used to avoid negative results. This is a lighting effect: when the shadow sprite pixel is 0, there is no effect on the background pixel; otherwise, the background pixel becomes darker. Now, the only way that comes to my mind is to change the blending equation to GL_FUNC_SUBTRACT, but it doesn't compile with cocos2d-x (it can't be found)... I would subclass CCSprite and override the draw() method to change the blending equation when needed, call the original draw(), and restore the blending equation to its previous state at the end of the method. So I have two questions:

    - How do I use glBlendEquation() with cocos2d-x? Keep in mind that I am writing a game for iPhone/Android/Windows.
    - Are shadows handled this way in 2D games?

    Thanks
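    As an aside, the equation as written - max(0, Cd*1 - Cs*S) - is what OpenGL calls a reverse subtract (destination minus source), i.e. glBlendEquation(GL_FUNC_REVERSE_SUBTRACT), not GL_FUNC_SUBTRACT, which computes source minus destination. A quick NumPy sketch of the per-pixel math, handy for checking the expected result before touching the GL state:

        # Simulate max(0, dst - src*s) on RGB pixels in [0, 1].
        import numpy as np

        def shadow_blend(dst, src, s):
            """dst, src: float arrays in [0, 1]; s: shadow strength in [0, 1]."""
            return np.maximum(0.0, dst - src * s)

        background = np.array([0.8, 0.6, 0.4])   # a background pixel (RGB)
        shadow     = np.array([1.0, 1.0, 1.0])   # a fully opaque shadow pixel
        print(shadow_blend(background, shadow, s=0.5))   # darkened: [0.3 0.1 0. ]

    In GL terms the scale factor S maps onto the source blend factor (for example GL_SRC_ALPHA with the shadow's alpha set to S), with GL_ONE as the destination factor.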

    Read the article

  • Watch Google’s I/O 2012 Developer Conference Live (Online) Starting June 27

    - by Asian Angel
    Google’s annual I/O conference begins on Wednesday this week and will be filled with exciting sessions about Android, Chrome, Google+, and more. To help you keep up with all the fun, we have the links you need to tune in via live streaming! Photo courtesy of the Google I/O website. The keynote for Day 1 will begin at 9:30 a.m. PDT (U.S. time) and the keynote for the second day will begin at 10:00 a.m. PDT (U.S. time), so make sure to mark them on your schedule! Visit the blog post linked below for more details about signing up for Extended Events, the I/O mobile app, the liveblogging gadget, and more. SPECIAL NOTE: The Google blog post linked below was slightly ambiguous and listed both of the I/O URLs we have shown here, so make sure to keep a watch on both.

    Read the article

  • More Mobile Payments

    - by David Dorf
    In the previous post I discussed Bump payments from PayPal, but that's not the only innovative way to make purchases using your phone. Verizon recently announced a partnership with Danal that allows shoppers to charge online purchases to their Verizon bill. For e-commerce sites that accept this type of payment, it's a two-step process. At checkout, the shopper enters their mobile number and billing ZIP code. Then an SMS message is sent to the mobile phone containing a one-time code that must be entered on the e-commerce site. This two-factor authentication seems pretty secure, and no pre-registration or credit card is necessary. There's a $25-a-month maximum, but I bet the limit gets raised as Verizon gets more comfortable with the security. Merchants are charged a fee similar to credit card fees. Another example of mobile payments is offered by BlingNation. Customers attach a small NFC sticker to their phones that allows them to "tap" the POS device to make a payment. The NFC chip is connected to their checking account, so the transaction is treated as a debit payment. Text messages are sent to the phone to confirm the payments, so shoppers can easily verify their purchases. BlingNation is working with banks like Adirondack Trust Company and The State Bank of La Junta in Colorado. Heck, you can even send money to inmates in the Arkansas prison system using your mobile phone, now that the state of Arkansas supports payments via its mobile website. Everyone is getting into the act now.
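    For the curious, the SMS one-time-code flow described above is simple to sketch. A minimal, hypothetical outline in Python (send_sms() is a stand-in for whatever SMS gateway the carrier uses; a real implementation would also expire codes and limit attempts):

        # Issue and verify a one-time SMS payment code.
        import secrets

        _pending = {}

        def send_sms(phone, text):
            print("SMS to %s: %s" % (phone, text))   # stub for a real gateway

        def start_verification(phone):
            code = "%06d" % secrets.randbelow(10**6)   # 6-digit one-time code
            _pending[phone] = code
            send_sms(phone, "Your payment code is " + code)

        def confirm(phone, code):
            return _pending.pop(phone, None) == code

    The second factor here is possession of the phone: knowing the number and ZIP code alone is not enough without the code delivered to the handset.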

    Read the article

  • Ground Control by David Baum

    - by JuergenKress
    As cloud computing moves out of the early-adopter phase, organizations are carefully evaluating how to get to the cloud. They are examining standard methods for developing, integrating, deploying, and scaling their cloud applications, and after weighing their choices, they are choosing to develop and deploy cloud applications based on Oracle Cloud Application Foundation, part of Oracle Fusion Middleware. Oracle WebLogic Server is the flagship software product of Oracle Cloud Application Foundation. Oracle WebLogic Server is optimized to run on Oracle Exalogic Elastic Cloud, the integrated hardware and software platform for the Oracle Cloud Application Foundation family. Many companies, including Reliance Commercial Finance, are adopting this middleware infrastructure to enable private cloud computing and its convenient, on-demand access to a shared pool of configurable computing resources. “Cloud computing has become an extremely critical design factor for us,” says Shashi Kumar Ravulapaty, senior vice president and chief technology officer at Reliance Commercial Finance. “It’s one of our main focus areas. Oracle Exalogic, especially in combination with Oracle WebLogic, is a perfect fit for rapidly provisioning capacity in a private cloud infrastructure.” Reliance Commercial Finance provides loans to tens of thousands of customers throughout India. With more than 1,500 employees accessing the company’s core business applications every day, the company was having trouble processing more than 6,000 daily transactions with its legacy infrastructure, especially at the end of each month when hundreds of concurrent users need to access the company’s loan processing and approval applications. Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community by visiting http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article
