Search Results

Search found 22532 results on 902 pages for 'computer hardware'.


  • ESXi with non-standard hardware: HDD issues

    - by Hurricanepkt
    I have three very underutilized servers that I am condensing onto one of those Shuttle PCs running VMware ESXi. The HDD seems to be the bottleneck right now (the activity light is almost always solid). Currently I have a single 1 TB Seagate 7200.11 connected via SATA. VMware ESXi cannot detect it when running in AHCI mode, but does when running in IDE mode. I have read that IDE mode can carry a 5% performance hit, which might still leave me enough breathing room. However, I am open to setting up an external eSATA drive or some sort of RAID to gain more than just that 5%. I am just wary of sinking money into hardware without knowing whether it will work. Does anyone know of resources or procedures for getting this working?

    Read the article

  • Graphics drivers could not find compatible graphics hardware

    - by selak
    I have a desktop with an Asus P5KPS/EPU motherboard and an Nvidia GF 9600 GT graphics adapter. I have just set up a fresh Windows 7 install. My problem is that I cannot find a working driver for my graphics adapter. I downloaded a driver from Nvidia.com that claims to support my GF 9600 GT, but when I installed it, it said "This graphics driver could not find compatible graphics hardware." I went into Safe Mode, uninstalled the Standard Graphics Adapter that was auto-installed, then installed my Nvidia driver; it still does not work. Can anyone show me how to solve this problem?

    Read the article

  • Mavericks Server Hardware suggestion [on hold]

    - by crystalWing
    I am an application developer at a small company. Recently, my boss asked me to set up a server for another company he owns. He has two of the latest Mac Pros and can provide any hardware I want. He listed the following requirements:

    - Failover is a must
    - Capable of handling 20 VPN connections at the same time
    - RAID 5
    - Remote copy of backup data to a different location

    I know this is a generic question that I shouldn't ask here, but I really need help because, compared to Linux and MS servers, there are not many resources available online. I read the Apple Pro Training book, but it says nothing about the above requirements.

    Read the article

  • My new laptop - with a really nice battery option

    - by Rob Farley
    It was about time I got a new laptop, so I made a phone call to Dell to discuss my options. I decided not to get an SSD from them, because I'd rather choose one myself; the sales guy tells me that changing the HD doesn't void my warranty, so that's good (incidentally, I'd love to hear people's recommendations for which SSD to get for my laptop). Unfortunately this machine only has one HD slot, but I figure I'll put lots of stuff onto external disks anyway.

    The machine I got was a Dell Studio XPS 16. It's red (which suits my company), but also has the Intel® Core™ i7-820QM processor, which is 4 cores/8 threads. Makes for a pretty Task Manager, but nothing like the one I saw at SQLBits last year (at 96 cores), or the one that my good friend James Rowland-Jones writes about here.

    But the reason for this post is actually something in the software that comes with the machine; you know, the stuff that most people uninstall at the earliest opportunity. I had just reinstalled the operating system and was going through the utilities to get the drivers up to date when I noticed that one of the Dell applications included an option to disable battery charging. So I installed it. And sure enough, I can tell the battery not to charge now.

    Clearly Dell sees it as a temporary option, one designed for when you're on a plane. But I most often use my laptop with the power plugged in, which means I don't need my battery continually topping itself up. So I really love this option, but I feel it could go a little further. I'd like "Not Charging" to be the default, and let me set it when I want to charge (which should, in theory, make my battery last longer). I also intend to work out how this option works, so that I can script it and put it into my startup options (so it can be the default setting). Actually, if someone has already worked this out and can tell me what it does, please feel free to let me know.

    Even better would be an external switch. I had a switch on my old laptop (a Dell Latitude) for WiFi, so that I could turn it off before I turned on the computer (this laptop doesn't give me that option; there's no physical switch for flight mode). I guess it just means I'll get used to leaving the WiFi off by default and turning it on when I want it; might save myself some battery power that way too.

    Soon I'll need to take the plunge and sync my iPhone with the new laptop. I'm a little worried that I might lose something; Apple's messages about how my stuff will be wiped and replaced with what's on the PC don't fill me with confidence, as it's a new PC that doesn't have stuff on it. But having a new machine is definitely a nice experience, and one that I can recommend. I'm sure when I get around to buying an SSD I'll feel like it's shiny and new all over again!

    Read the article

  • GDB hardware watchpoint very slow - why?

    - by Laurynas Biveinis
    On a large C application, I have set a hardware watchpoint on a memory address as follows:

        (gdb) watch *((int*)0x12F5D58)
        Hardware watchpoint 3: *((int*)0x12F5D58)

    As you can see, it is a hardware watchpoint, not a software one (which would have explained the slowness). Yet the application's running time under the debugger has gone from less than ten seconds to an hour and counting. The watchpoint has triggered three times so far, the first time after 15 minutes, when the memory page containing the address was made readable by sbrk. Surely during those 15 minutes the watchpoint should have been efficient, since the memory page was inaccessible? And that still does not explain why it is so slow afterwards. The GDB version is:

        $ gdb --version
        GNU gdb (GDB) 7.0-ubuntu
        [...]

    Thanks in advance for any ideas as to what might be the cause or how to fix or work around it.
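
    For anyone reproducing this, here is a minimal sketch for checking whether GDB really gets a hardware watchpoint; the program and the watched variable are illustrative stand-ins, not the original application:

        // watch_demo.cpp -- build with: g++ -g watch_demo.cpp -o watch_demo
        #include <cstdio>

        int g_value = 0;  // the variable to watch

        int main() {
            for (int i = 0; i < 3; ++i) {
                g_value = i;  // each write should trip the watchpoint
                std::printf("g_value = %d\n", g_value);
            }
            return 0;
        }

        // In GDB:
        //   (gdb) show can-use-hw-watchpoints   # non-zero means hardware watchpoints are allowed
        //   (gdb) watch g_value                 # "Hardware watchpoint N" confirms it is in hardware
        //   (gdb) run
        //   (gdb) info watchpoints              # lists each watchpoint and its hit count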

    Read the article

  • Is the recent trend toward widescreen (16:9) computer monitors a plus or minus for programmers?

    - by DanM
    It's almost gotten to the point where you can't buy a conventional (4:3) monitor anymore. Pretty much everything is widescreen. This is fine for watching movies or TV, but is it good or bad for programming? My initial thought is that widescreens are a net negative for programmers. Here are the disadvantages I see.

    Poor space utilization. One disadvantage of widescreens you can't argue with is that they offer poor space utilization for the total number of pixels you get. For example, my ThinkPad, which I bought just before the widescreen craze, has a 15" monitor with a native resolution of 1600 x 1200. The newer 15.4" ThinkPads run at most 1680 x 1050. So, if you do the math, you get fewer pixels in a wider (but not shorter) package. With desktop monitors, you also pay a price in desk space: two 1680 x 1050 monitors will simply take up more of your desk than two 1600 x 1200 monitors (assuming equal dot pitch).

    More scrolling. If you compare a 1680 x 1050 monitor to a 1600 x 1200 monitor, you get 80 extra pixels of width but 150 fewer pixels of height. The height reduction means you lose approximately 11 lines of code. That's less you can see on the screen at one time and more scrolling you have to do. This harms productivity; maybe not dramatically, but insidiously.

    Less room for wide panels. Widescreens also mean you lose space for the wide but short panels common in programming environments. If you use Visual Studio, for example, your code window will be that much shorter when viewing the Find Results, Task List, or Error List (all of which I use frequently).

    This isn't to say the 80 pixels of extra width you get with widescreen would never be useful, but I tend to keep my lines of code short, so seeing more lines would be more valuable to me than seeing fewer, longer lines.

    What do you think? Do you agree or disagree? Are you now using one or more widescreen monitors for development? What resolution are you running on each? Do you ever miss the height of the traditional 4:3 monitor? Would you complain if your monitors were one inch narrower but two inches taller?
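
    For reference, the arithmetic behind the space-utilization and scrolling claims, as a quick check (the figure of roughly 14 pixels per text line is an assumption for a typical editor font):

        #include <cstdio>

        int main() {
            const long conventional = 1600L * 1200;  // 1,920,000 pixels
            const long widescreen   = 1680L * 1050;  // 1,764,000 pixels
            std::printf("4:3 panel:        %ld pixels\n", conventional);
            std::printf("widescreen panel: %ld pixels\n", widescreen);
            std::printf("difference:       %ld pixels (%.1f%% fewer)\n",
                        conventional - widescreen,
                        100.0 * (conventional - widescreen) / conventional);
            // 150 fewer pixels of height at ~14 px per line of code
            // is about 150 / 14 ~= 11 fewer visible lines.
            return 0;
        }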

    Read the article

  • Google Chrome hardware acceleration making game run slow

    - by powerc9000
    I have been working on a game in HTML5 canvas and noticed that the game lags and performs much more slowly when hardware acceleration is turned on in Google Chrome than when it is turned off. You can try it for yourself here. From doing some profiling, I see that the problem lies in drawImage; more specifically, in drawing one canvas onto another, which I do a lot of (I captured profiles with hardware acceleration on and with it off). Is there something fundamental I am missing about drawing one canvas onto another? Why would the difference be that profound?

    Read the article

  • HTG Explains: Why is Printer Ink So Expensive?

    - by Chris Hoffman
    Printer ink is expensive; more expensive per drop than fine champagne or even human blood. If you haven't gone paperless, you'll notice that you're paying a lot for new ink cartridges, more than seems reasonable. Purchasing the cheapest inkjet printer and buying official ink cartridge replacements is the most expensive thing you can do. There are ways to save money on ink if you must continue to print documents.

    Cheap Printers, Expensive Ink

    Inkjet printers are often very cheap. That's because they're sold at cost, or even at a loss; the manufacturer either makes no profit from the printer itself or loses money. The manufacturer will make most of its money from the printer cartridges you buy later. Even if the company does make a bit of money from each printer sold, it makes a much larger profit margin on ink. Rather than selling you a printer that may be rather expensive, they want to sell you a cheap printer and make money on an ongoing basis by providing expensive printer ink. It's been compared to the razor model: sell a razor cheaply and mark up the blades. Rather than making a one-time profit on the razor, you make a continuing profit as the customer keeps buying replacement blades, or ink, in this case.

    Many printer manufacturers go out of their way to make it difficult for you to use unofficial ink cartridges, building microchips into their official cartridges. If you use an unofficial cartridge or refill an official one, the printer may refuse to use it. Lexmark once argued in court that creating an unofficial microchip to bypass this restriction on third-party ink would violate its copyright and be illegal under the US DMCA. Luckily, it lost this argument.

    What Printer Companies Say

    Printer companies have put forth their own arguments in the past, attempting to justify the high cost of official ink cartridges and the microchips that block any competition. In a Computerworld story from 2010, HP argued that they spend a billion dollars each year on "ink research and development." They point out that printer ink "must be formulated to withstand heating to 300 degrees, vaporization, and being squirted at 30 miles per hour, at a rate of 36,000 drops per second, through a nozzle one third the size of a human hair. After all that it must dry almost instantly on the paper." They also argue that printers have become more efficient and use less ink to print, while third-party cartridges are less reliable. Companies that use microchips in their ink cartridges argue that only the microchip has the ability to enforce an expiration date, preventing consumers from using old ink cartridges. There's something to all these arguments, sure, but they don't seem to justify the sky-high cost of printer ink or the restrictions on using third-party or refilled cartridges.

    Saving Money on Printing

    Ultimately, the price of something is what people are willing to pay, and printer companies have found that most consumers are willing to pay this much for ink cartridge replacements. Try not to fall for it: don't buy the cheapest inkjet printer. Consider your needs when buying a printer and do some research; you'll save more money in the long run. Consider these basic tips to save money on printing:

    - Buy refilled cartridges: Refilled cartridges from third parties are generally much cheaper. Printer companies warn us away from these, but they often work very well.
    - Refill your own cartridges: You can get do-it-yourself kits for refilling your own printer ink cartridges, but this can be messy. Your printer may refuse to accept a refilled cartridge if the cartridge contains a microchip.
    - Switch to a laser printer: Laser printers use toner, not ink cartridges. If you print a lot of black-and-white documents, a laser printer can be cheaper.
    - Buy XL cartridges: If you are buying official printer ink cartridges, spend more money each time. The cheapest ink cartridges won't contain much ink at all, while larger "XL" cartridges contain much more ink for only a bit more money. It's often cheaper to buy in bulk.
    - Avoid printers with tri-color ink cartridges: If you're printing color documents, you'll want a printer that uses separate ink cartridges for all its colors. For example, say your printer has a "Color" cartridge that contains blue, green, and red ink. If you print a lot of blue documents and use up all your blue ink, the Color cartridge will refuse to function; now all you can do is throw away your cartridge and buy a new one, even if the green and red ink chambers are full. If you had a printer with separate color cartridges, you'd just have to replace the blue cartridge.

    If you'll be buying official ink cartridges, be sure to compare the cost of cartridges when buying a printer; the cheapest printer may be more expensive in the long run. Of course, you'll save the most money if you stop printing entirely and go paperless, keeping digital copies of your documents instead of paper ones.

    Image Credit: Cliva Darra on Flickr

    Read the article

  • Advice for a computer science sophomore in college?

    - by RDas
    Hi everyone! I'm a sophomore in college majoring in computer science and math. I have always loved programming. I started programming in C when I was nine years old, and over the years I've picked up Visual Basic, C#, Java, C++, JavaScript, Objective-C, Python, Ruby, elementary Haskell, and elementary Erlang, and I learned Perl back in the day, which I've mostly forgotten. I have not done much network programming. I have done CGI programming, but that was about six or seven years ago. I've done some socket programming and written (school) programs that do interprocess communication, which I understood and liked. I'm taking a course on client/server programming and another on network security next semester, both of which I am really looking forward to. I'm seeking advice on how to proceed with future learning. I've mostly done application (mobile and desktop) development, not much web development, and I'd like to pick up some web development this coming semester. Since I know Ruby and Python, should I start by learning Django and/or Rails? Any other suggestions on starting web development? I have a good understanding of HTML and CSS. Also, I'd like to know how hard it is to pick up and become good (read: productive) in functional programming languages coming from a purely structured/object-oriented background. I've been reading up on Erlang and Haskell, and I'd like to know your opinions on whether they're worth my time to learn. What about Lisp, Scheme, and other functional languages? Any help or ideas would be really appreciated.

    Read the article

  • Nervous about the "real" world

    - by Randy
    I am currently majoring in computer science and minoring in mathematics (the minor is embedded in the major). The program has a strong C++ curriculum. We have done some UNIX and assembly language (not fun), and there is C and Java on the way in future classes that I must take. The program I am in did not use the STL, but rather an STL-ish design created from the ground up for the program. From what I have read, the STL and what I was taught are very similar, but what I used seemed more user-friendly. Some of the programs I had to write in C++ for assignments include: a password server that hashed passwords for security, a router simulator that used a hash table and maps, a maze solver that used depth-first search, a tree-traversal program (level order, postorder, inorder), selection sort, insertion sort, bit sort, radix sort, merge sort, heap sort, quick sort, topological sort, stacks, queues, priority queues, and, my least favorite, red-black trees. All of this was done in three semesters, which was just enough time to code them up and turn them in. That being said, if I were told to use a stack to convert an equation to infix notation or something, I would be lost for a few hours. My main concern in writing this is: when I graduate and land an interview, what are some of the questions posed to assess my skills? What are some of the most important areas of computer science that are prevalent in the field? I am currently trying to get ideas for programs I can write in C++ that interest and challenge me to keep learning the language. A Sudoku solver came to mind, but I am lost as to where to start. I apologize for the rant; I'm just a wee bit nervous about the future. Any tips are appreciated.
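
    On the Sudoku solver, here is a minimal sketch of the standard recursive backtracking approach, offered as a starting point rather than a polished solution (the flat 0-80 cell indexing is just one convenient representation):

        #include <cstdio>

        // board[r][c] holds 1-9, or 0 for an empty cell.
        bool valid(int board[9][9], int r, int c, int v) {
            for (int i = 0; i < 9; ++i)
                if (board[r][i] == v || board[i][c] == v) return false;
            int br = (r / 3) * 3, bc = (c / 3) * 3;  // top-left of the 3x3 box
            for (int i = br; i < br + 3; ++i)
                for (int j = bc; j < bc + 3; ++j)
                    if (board[i][j] == v) return false;
            return true;
        }

        bool solve(int board[9][9], int pos = 0) {
            if (pos == 81) return true;               // every cell filled
            int r = pos / 9, c = pos % 9;
            if (board[r][c] != 0) return solve(board, pos + 1);
            for (int v = 1; v <= 9; ++v) {
                if (valid(board, r, c, v)) {
                    board[r][c] = v;                  // tentative placement
                    if (solve(board, pos + 1)) return true;
                    board[r][c] = 0;                  // undo and try the next value
                }
            }
            return false;                             // dead end: backtrack
        }

        int main() {
            int board[9][9] = {{0}};  // empty puzzle: the solver fills in any valid grid
            if (solve(board))
                for (int r = 0; r < 9; ++r) {
                    for (int c = 0; c < 9; ++c) std::printf("%d ", board[r][c]);
                    std::printf("\n");
                }
            return 0;
        }

    Loading real puzzle clues into the board and picking the most constrained cell first, instead of scanning left to right, are the natural next steps.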

    Read the article

  • Running hardware with only 32-bit drivers in 64-bit Windows

    - by Howard
    Recently we had to upgrade a system to handle added HD IP cameras. The upgrade involved an entirely new computer build, with the exception of a rather pricey Geovision DVR card (PCI/GV1480 series). Apparently, while these cards do support Windows 7, they do not support Windows 7 x64. I'm stuck between a rock and a hard place trying to figure out how to remedy this: is there a virtualization solution that will allow devices with driver issues to pass through to the guest OS? I was thinking XP Mode may work here, but I am unsure whether it runs 32-bit or 64-bit and whether it will allow such devices to pass through to it. Any help would be greatly appreciated. Best regards, Howard

    Read the article

  • Has Little Endian won?

    - by espertus
    When teaching recently about the Big vs. Little Endian battle, a student asked whether it had been settled, and I realized I didn't know. Looking at the Wikipedia article, it seems that the most popular current OS/architecture pairs use Little Endian but that Internet Protocol specifies Big Endian for transferring numeric values in packet headers. Would that be a good summary of the current status? Do current network cards or CPUs provide hardware support for switching byte order?
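
    Both halves of the question can be checked in a few lines; a sketch (the byte-swap builtin shown is GCC/Clang-specific; MSVC's equivalent is _byteswap_ulong):

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        int main() {
            const std::uint32_t probe = 0x01020304;
            std::uint8_t first_byte;
            std::memcpy(&first_byte, &probe, 1);  // the lowest-addressed byte
            std::printf("this machine is %s-endian\n",
                        first_byte == 0x04 ? "little" : "big");

            // Network byte order is big-endian; on little-endian CPUs the
            // conversion compiles down to a single byte-swap instruction
            // (BSWAP on x86), which is hardware support of a sort:
            const std::uint32_t swapped = __builtin_bswap32(probe);
            std::printf("0x%08x byte-swapped is 0x%08x\n",
                        (unsigned)probe, (unsigned)swapped);
            return 0;
        }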

    Read the article

  • How to learn the math behind the code?

    - by Solomon Wise
    I am a 12 year old who has recently gotten into programming. (Although I know that the number of books you have read does not determine your programming competency or ability, just to paint a "map" of where I am in terms of the content I know...) I've finished the books: Python 3 For Absolute Beginners Pro Python Python Standard Library by Example Beautiful Code Agile Web Development With Rails and am about halfway into Programming Ruby. I have written many small programs (One that finds which files have been updated and deleted in a directory, one that compares multiple players' fantasy baseball value, and some text based games, and many more). Obviously, as I'm not some sort of child prodigy, I can't take a formal Computer Science course until high school. I really want to learn computer science to increase my knowledge about the code, and the how the code runs. I've really become interested in the math part after reading the source code for Python's random module. Is there a place where I can learn CS, or programming math online for free, at a level that would be at least partially understandable to a person my age?
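
    Python's random module implements the Mersenne Twister, which is heavy going as a first look at the math; the classic gentler starting point is the linear congruential generator. A tiny sketch (the constants are the well-known MINSTD values, not Python's):

        #include <cstdint>
        #include <cstdio>

        // Lehmer / MINSTD generator: state <- (a * state) mod m,
        // with a = 16807 and m = 2^31 - 1 (a Mersenne prime).
        std::uint32_t lcg_state = 1;  // any nonzero seed works

        std::uint32_t lcg_next() {
            const std::uint64_t a = 16807, m = 2147483647;
            lcg_state = static_cast<std::uint32_t>((a * lcg_state) % m);
            return lcg_state;
        }

        int main() {
            for (int i = 0; i < 5; ++i)
                std::printf("%u\n", lcg_next());  // the same seed always yields the same sequence
            return 0;
        }

    The number theory behind why these particular multiplier and modulus choices work well, and how bad choices fail, is exactly the kind of math-behind-the-code the question asks about.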

    Read the article

  • I need advice on laptop purchase for university [closed]

    - by Systemic33
    I'm currently at university studying computer science/IT/information technology. This first year I've managed with the laptop I had: an ASUS Eee PC 1000H with a 10.1" screen. But it is getting way too underpowered and small for programming anything more than quick introductory exercises, so I'm looking to buy a more suitable laptop. It's not meant to be a desktop replacement, since I've already got a pretty good desktop with a 24" monitor. The kind of laptop I want is one suited for university. If it bears any significance, I'm working in Java at the moment, but I will likely work with lots of other things, including web development. I'm looking to spend about $1700, plus or minus, and it should be powerful and big enough for working on programming projects as well as the usual university stuff like MATLAB, Maple, etc. out "in the field", and sometimes for maybe a week when visiting my parents. What I'm looking at right now is the ASUS Zenbook UX31A with a 1920 x 1080 resolution on a 13.3" IPS display, but I'm nervous that it will be too petite for programming. In essence, I'm looking for a powerful computer that has good enough battery life and looks good. I would love suggestions or any type of feedback, either a better choice or input on what it's like programming on 13" laptops. Many thanks in advance to anyone who even went through all that! PS. I don't want a Mac, or my inner karma would commit seppuku xD But experience working on the 13" MacBook Air would be roughly equivalent to the Zenbook I'm considering, so I would love to hear that too. tl;dr The quick brown fox jumps over the lazy dog ;)

    Read the article

  • Secure Your Wireless Router: 8 Things You Can Do Right Now

    - by Chris Hoffman
    A security researcher recently discovered a backdoor in many D-Link routers, allowing anyone to access a router without knowing the username or password. This isn't the first router security issue and won't be the last. To protect yourself, you should ensure that your router is configured securely. This is about more than just enabling Wi-Fi encryption and not hosting an open Wi-Fi network.

    Disable Remote Access

    Routers offer a web interface, allowing you to configure them through a browser. The router runs a web server and makes this web page available when you're on the router's local network. However, most routers offer a "remote access" feature that allows you to access this web interface from anywhere in the world. Even if you set a username and password, if you have a D-Link router affected by this vulnerability, anyone would be able to log in without any credentials. With remote access disabled, you're safe from people remotely accessing your router and tampering with it. To do this, open your router's web interface and look for the "Remote Access," "Remote Administration," or "Remote Management" feature. Ensure it's disabled; it should be disabled by default on most routers, but it's good to check.

    Update the Firmware

    Like our operating systems, web browsers, and every other piece of software we use, router software isn't perfect. The router's firmware (essentially the software running on the router) may have security flaws. Router manufacturers may release firmware updates that fix such security holes, although they quickly discontinue support for most routers and move on to the next models. Unfortunately, most routers don't have an auto-update feature like Windows and our web browsers do; you have to check your router manufacturer's website for a firmware update and install it manually via the router's web interface. Check to be sure your router has the latest available firmware installed.

    Change Default Login Credentials

    Many routers have default login credentials that are fairly obvious, such as the password "admin". If someone gained access to your router's web interface through some sort of vulnerability, or just by logging onto your Wi-Fi network, it would be easy to log in and tamper with the router's settings. To avoid this, change the router's password to a non-default password that an attacker couldn't easily guess. Some routers even allow you to change the username you use to log into your router.

    Lock Down Wi-Fi Access

    If someone gains access to your Wi-Fi network, they could attempt to tamper with your router, or just do other bad things like snoop on your local file shares or use your connection to download copyrighted content and get you in trouble. Running an open Wi-Fi network can be dangerous. To prevent this, ensure your router's Wi-Fi is secure. This is pretty simple: set it to use WPA2 encryption and use a reasonably secure passphrase. Don't use the weaker WEP encryption or set an obvious passphrase like "password".

    Disable UPnP

    A variety of UPnP flaws have been found in consumer routers. Tens of millions of consumer routers respond to UPnP requests from the Internet, allowing attackers on the Internet to remotely configure your router. Flash applets in your browser could use UPnP to open ports, making your computer more vulnerable. UPnP is fairly insecure for a variety of reasons. To avoid UPnP-based problems, disable UPnP on your router via its web interface. If you use software that needs ports forwarded, such as a BitTorrent client, game server, or communications program, you'll have to forward ports on your router without relying on UPnP.

    Log Out of the Router's Web Interface When You're Done Configuring It

    Cross-site scripting (XSS) flaws have been found in some routers. A router with such an XSS flaw could be controlled by a malicious web page, allowing the page to configure settings while you're logged in. If your router is using its default username and password, it would be easy for the malicious web page to gain access. Even if you changed your router's password, it would be theoretically possible for a website to use your logged-in session to access your router and modify its settings. To prevent this, just log out of your router when you're done configuring it; if you can't do that, you may want to clear your browser cookies. This isn't something to be too paranoid about, but logging out of your router when you're done using it is a quick and easy thing to do.

    Change the Router's Local IP Address

    If you're really paranoid, you may be able to change your router's local IP address. For example, if its default address is 192.168.0.1, you could change it to 192.168.0.150. Then, if the router itself were vulnerable and some sort of malicious script in your web browser attempted to exploit a cross-site scripting vulnerability by accessing known-vulnerable routers at their default local IP addresses and tampering with them, the attack would fail. This step isn't completely necessary, especially since it wouldn't protect against local attackers; if someone were on your network or software were running on your PC, they'd be able to determine your router's IP address and connect to it.

    Install Third-Party Firmware

    If you're really worried about security, you could also install a third-party firmware such as DD-WRT or OpenWRT. You won't find obscure backdoors added by the router's manufacturer in these alternative firmwares.

    Consumer routers are shaping up to be a perfect storm of security problems: they're not automatically updated with new security patches, they're connected directly to the Internet, manufacturers quickly stop supporting them, and many consumer routers seem to be full of bad code that leads to UPnP exploits and easy-to-exploit backdoors. It's smart to take some basic precautions.

    Image Credit: Nuscreen on Flickr

    Read the article

  • I need some career/education advice regarding computer science [on hold]

    - by user2521987
    I'm a senior mathematics major this fall, and I have only taken three CS classes (Java I, Java II, and C++). This summer, I am participating in a mathematics REU (Research Experience for Undergraduates), and I program in C++ about 8 hours a day... and I find that I absolutely love it. I love using programming to solve math problems in my research, and I think I want to pursue a career in programming. I have a few options:

    1. Stay at my university an extra 1-1.5 years (beyond the 4) and do a double major in Math/CS. This would put me around 7-10k in debt (currently I have no debt and am scheduled to graduate debt-free). Then apply to a master's in CS.
    2. Apply directly to a master's in CS from a math undergraduate degree. I don't like this idea because, with such a thin background, I likely won't get into a good program or be funded.
    3. Go to graduate school, funded, in applied mathematics, try to further my knowledge of computer science while there, and then apply to a master's in CS.

    I'm not sure whether 1 or 3 would be better. My end goal is to get into a top 20-30 CS graduate program and to get a cool, good job. What would you recommend?

    Read the article

  • Graduating soon with a computer science degree, but have unique circumstances [closed]

    - by Donnie
    I joined the Navy in 1998 and was admitted into Nuclear Power Training. I got my electrician's mate certificate, but was put on medical hold during the training. I was sent to the Naval Hospital and received a medical (honorable) discharge in the middle of 2000. I decided to stay at home and raise my son while my girlfriend worked. A few years ago, I decided that I wanted to work as a programmer, so I went to college, and I will soon be graduating with a degree in computer science. I hope to finish with a relatively high GPA, 3.8 or 3.9. My question is this: how much, if any, of my Navy experience should I put on my resume? And how do I explain my nine-year gap as a stay-at-home dad? Do I even try to explain it? I know recent college graduates typically have no experience, but obviously I'm not the typical college graduate. Will my long absence from work, or my relatively short stint in the Navy, hurt my chances? Should I just put the college on my resume and hope that HR thinks I'm younger than I am? Obviously my age would then show at the interview, and there would be questions. Any help is appreciated.

    Read the article

  • The Most Ridiculous Computer Cameos of All Time

    - by Jason Fitzpatrick
    For the last half century, computers have played all sorts of major and minor roles in movies; check out this collection to see some of the more quirky and out-of-place appearances. Wired magazine rounds up some of the more oddball appearances of computers in film. Like, for example, the scene shown above from Soylent Green:

    "Spoiler alert: Soylent Green is people! But that's not the only thing we're gonna spoil. Soylent Green is set in 2022, and at one point, you'll notice that a government facility is still using a remote calculator that plugs into the CDC 6600, a machine that was state-of-the-art in 1971. Come to think of it, we should scratch this from the list. This is pretty close to completely accurate."

    Hit up the link below to check out the full gallery, including a really interesting bit about how the U.S. government's largest computer project, once decommissioned and sold as surplus, ended up on the sets of dozens of movies and television shows.

    The Most Wonderfully Ridiculous Movie Computers of All Time [Wired]

    Read the article

  • Why is CS never a topic of conversation for the layman? [closed]

    - by hydroparadise
    Granted, every profession has its technicalities. If you are an MD, you had better know the anatomy of the human body, and if you are an astronomer, you had better know your calculus. Yet you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens, or that the moon revolves around the earth because of gravity (thank you, Discovery Channel). There's a sort of common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000- or 4000-level classes in a university setting, or between colleagues? Even back in my days before college, in my pursuit of knowledge about how computers work, these very important topics (IMHO) never seemed to see the light of day. Many sources and sites cover "What is a processor?", "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and the role they play in how applications are created, but nothing about the tools for creating the languages themselves. To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet he got very little press in comparison. So, the heart of my question: does the general public not care to hear about the computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public, leaving the knowledge gap open? Am I wrong to think the general public has the same thirst for knowledge of how things work as I do? Please consider the question carefully before answering or voting to close.

    Read the article

  • Why doesn't my computer resume after sleeping overnight?

    - by bamdad
    I'm having a weird, weird bug that has been haunting me since 11.10. If I listen to music or watch a video and my computer automatically goes to sleep at night, it won't properly resume in the morning. Otherwise, suspend and resume works just fine. What happens is that the Wi-Fi and Bluetooth indicator (which turns from white to orange when suspending) stays orange, the display doesn't turn on, and the only option I have is to hard-reset the machine. Here's what I've tried so far:

    - installing (and uninstalling and reinstalling) laptop-mode-tools
    - switching the proprietary wireless driver (broadcom-wl) to the open source one (brcmsmac & bcma) and back
    - unloading (and blacklisting) all Bluetooth modules (rfcomm, btusb, bnep, bluetooth), and stopping (# stop bluetooth) and disabling (# echo 'manual' > /etc/init/bluetooth.override) the bluetooth service
    - creating a custom pm sleep action as suggested here: http://ubuntuforums.org/showthread.php?p=11926504
    - not watching YouTube or anything else that uses Flash before going to sleep (I have Flashblock, and I checked with $ ps aux | grep flash), because I suspected Flash to be the culprit
    - trying out different versions of fglrx (the one from the repos, then the latest one from AMD's site via generated .deb files, then back to the official ones)

    None of these worked. I remember back in the days of 10.04 there was a gconf key called "network sleep"; I thought about disabling that, since re-enabling the wireless card seems to be the problem (according to the indicator LED), but the option appears to be missing from GNOME 3 (unity-2d, whatever). Does anyone have any ideas? Thanks, bamdad

    Read the article

  • Majoring in computer science, but I'm not too sure I'm in the right field [closed]

    - by user74340
    Throughout my high school years and my first year in college, I never thought of studying computer science. I studied biology and chemistry during my first year, and I didn't like the research or any of the medical professions. So I took an introductory CS course, loved the diverse roles this field can offer, and declared CS as my major. I have now finished the first- and second-year CS courses, and I'm doing my co-op (internship) as a web developer. During my first and second years, I was always just an average student; my grades are around a low B, but I put a lot of effort into understanding my course materials. I see many brilliant peers who not only excel at what they do but have real passion for it, so I always doubt whether I belong in this field. I'm not good at math; I usually get Cs in my math courses. My internship (a corporate developer job) is okay, but I don't want to work like this after graduation. One aspect of CS that I do like is HCI. In my experience with programming and group projects, I enjoyed designing user interfaces and thinking about user experience. I'm also thinking of taking some psychology courses. I would appreciate any criticism or advice.

    Read the article

  • Master's in Software Engineering vs. Master's in Computer Science: which degree is preferred by employers?

    - by dbarker
    I've been building software professionally for 7 years and am considering a master's degree. I understand the difference between these two degrees simply as: the MSCS is the theory, while the MSE is the practice. I'm equally interested in both and would be happy with either, but I'm curious how these degrees rank in the eyes of a potential employer. I can see two views a hiring manager might take: an MSCS is loftier and carries an implied knowledge of software engineering; an MSE is more practical and carries an implied knowledge of computer science. In my own experience I've seen MSCS holders who cannot program at all, while others are among the best programmers I've met, so of course actual ability depends on the individual. My question is about the "on paper" value of these two degrees when seeking a job. All things considered, is one degree more hirable or higher-paying than the other?

    Read the article
