Search Results

Search found 3925 results on 157 pages for 'smart guy'.

Page 119 of 157

  • CoreStorage Encryption Error on Mac OS X Lion

    - by Michael
    I am trying to encrypt an external drive using diskutil CoreStorage on Mac Lion 10.7.4. I thought the only requirements were that the drive have GUID partition scheme and Journaled HFS+ file system. I think my drive is configured accordingly but when I type the following command I get an error message back:

        Michaels-MacBook-Pro:~ Michael$ diskutil cs convert disk2 -passphrase TestPassword
        Error converting disk to CoreStorage: The given file system is not supported on Core Storage (-69756)

    Here are the details reported for the drive in question:

        Michaels-MacBook-Pro:~ Michael$ diskutil list disk2
        /dev/disk2
           #:                  TYPE NAME     SIZE       IDENTIFIER
           0: GUID_partition_scheme          *500.1 GB  disk2
           1:                   EFI          209.7 MB   disk2s1
           2:             Apple_HFS Test1    499.8 GB   disk2s2

        Michaels-MacBook-Pro:~ Michael$ diskutil info disk2s2
           Device Identifier:        disk2s2
           Device Node:              /dev/disk2s2
           Part of Whole:            disk2
           Device / Media Name:      Test1
           Volume Name:              Test1
           Escaped with Unicode:     Test1
           Mounted:                  Yes
           Mount Point:              /Volumes/Test1
           Escaped with Unicode:     /Volumes/Test1
           File System Personality:  Journaled HFS+
           Type (Bundle):            hfs
           Name (User Visible):      Mac OS Extended (Journaled)
           Journal:                  Journal size 40960 KB at offset 0xe8e000
           Owners:                   Disabled
           Partition Type:           Apple_HFS
           OS Can Be Installed:      Yes
           Media Type:               Generic
           Protocol:                 FireWire
           SMART Status:             Not Supported
           Volume UUID:              1024D0B8-1C45-3057-B040-AE5C3841DABF
           Total Size:               499.8 GB (499763888128 Bytes) (exactly 976101344 512-Byte-Blocks)
           Volume Free Space:        499.3 GB (499315826688 Bytes) (exactly 975226224 512-Byte-Blocks)
           Device Block Size:        512 Bytes
           Read-Only Media:          No
           Read-Only Volume:         No
           Ejectable:                Yes
           Whole:                    No
           Internal:                 No

    I'm a little concerned that the "Partition Type: Apple_HFS" entry is causing the problem, but I don't know how to change that. I only seem to be able to control the "File System Personality: Journaled HFS+" in Disk Utility. Can anyone shed some light on this for me?
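    A hedged sketch of two things commonly suggested for the -69756 error, using the identifiers from the question above. The second option recreates the partition map and erases the Test1 volume, so it assumes the data has been copied elsewhere first:

        # Point cs convert at the HFS+ volume slice rather than the whole disk
        diskutil cs convert disk2s2 -passphrase TestPassword

        # If that still fails, recreate the GUID layout from Lion and retry;
        # a partition map created by an older OS is a commonly reported cause
        diskutil unmountDisk disk2
        diskutil partitionDisk disk2 GPT JHFS+ Test1 100%
        diskutil cs convert disk2s2 -passphrase TestPassword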

    Read the article

  • How to diagnose disk errors when disk appears to be ok?

    - by Kylotan
    I have a six-month-old 1TB Seagate drive formatted into 2 NTFS partitions, and the disk appeared to be failing with Windows dropping down from UDMA to PIO mode, reporting Delayed Write Errors, and hanging Explorer when browsing directories. My initial suspicion was that the disk was dying. However, on further examination it appears that Ubuntu, which doesn't write to the volume frequently like Windows does, was able to read the disk properly and retrieve all the data intact, saving me from having to use an older backup. Finally, running the SeaTools DOS diagnostic reported that the disk has no problems, i.e. no SMART errors and no bad sectors, apparently. This, in combination with the relative youth of the disk, suggests that something else is broken. The cable? The PSU? The integrated disk controller? But what would be a good way to diagnose the problem without risking damaging the data? I intend to extract the disk and try it in an external eSATA enclosure and see if the write errors cease, but in the event of the disk appearing to be fine, I would like to be able to confirm what part of the hardware is actually broken here in order to know just what needs replacing. Are there any good ways to go about this?
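    A hedged sketch of some non-destructive checks from the Ubuntu side, assuming smartmontools is installed and the drive shows up as /dev/sdb (adjust the device name to suit):

        sudo smartctl -a /dev/sdb        # full SMART attribute dump; a rising UDMA_CRC_Error_Count often points at the cable or controller rather than the platters
        sudo smartctl -t short /dev/sdb  # run a short self-test, then re-read with -a after a few minutes
        sudo badblocks -sv /dev/sdb      # read-only surface scan; non-destructive in its default mode
        dmesg | grep -iE 'ata|sdb'       # look for link resets, downgraded link speeds, or CRC errors from the controller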

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is - roughly - how easy it is to think about something. Mere Exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot. Which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency. We react favourably to things that are easier to think about. I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise. Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well-constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that. Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri / Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places - surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab. But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans. I wonder, though. Are Google trying to subtly disrupt cognitive fluency? There's an interesting paper (pdf) about - among other things - the effects of typography on the way people answer survey questions. Participants given the slightly harder-to-read question gave more abstract answers. The paper references other work suggesting that generally speaking, less-fluent question framing elicits more considered answers. The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but they want here to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
    I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out." No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part. I am also a great believer in the related idea of Opportunistic Refactoring: Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes. Particularly note the following excerpt from the refactoring article: I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out, be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective. Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment" I'm being rebuked by my reviewers that I'm making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all. I claim that the benefits are worth it, and that 3 reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth: (1) Most of the time you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done. (2) Nobody is going to look favorably on you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to do, this issue is minimized. 
The issue is exacerbated by ReSharper, as each new file I add to the change (and I can't know in advance exactly which files would end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code will not only fail to improve my standing, but actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand looking at it, let alone call it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers'... one running FreeNAS off a USB stick using three 500GB hdds in a ZFS RAID-Z pool serving as storage for the LAN and one running Debian Lenny with an 80GB drive used as a general purpose 'tinker' box that I can ssh into, etc. Problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyways. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hopes of creating a RAID5 + LVM setup using those drives and backing the data up to that external drive enclosure. After the backup is done, replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then, I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is... most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box here shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
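    In general, Linux software RAID (md) does not use the enclosure's own RAID modes at all; what matters is whether the eSATA card and enclosure handle port multipliers well enough for all four disks to show up individually. A hedged sketch of the mdadm + LVM setup on Debian, assuming the drives appear as /dev/sdb through /dev/sde; the volume group, logical volume and mount point names are placeholders:

        sudo apt-get install mdadm lvm2
        ls /dev/sd?                          # confirm all four enclosure drives are visible
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        cat /proc/mdstat                     # watch the initial resync
        sudo pvcreate /dev/md0
        sudo vgcreate backupvg /dev/md0
        sudo lvcreate -l 100%FREE -n backuplv backupvg
        sudo mkfs.ext4 /dev/backupvg/backuplv
        sudo mkdir -p /mnt/backup && sudo mount /dev/backupvg/backuplv /mnt/backup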

    Read the article

  • How do you get Windows 7 to show time remaining in the battery meter?

    - by MrDaniel
    Running Microsoft Windows 7 Home Premium on an HP laptop. The power meter in the system tray never shows the time remaining; it only ever shows a percentage-remaining number, as pictured. The Windows help documentation on the "battery meter" seems to indicate that it should display a time remaining indicator, is this accurate? How accurate is the battery meter? The accuracy of what the battery meter reports—what percentage of a full charge remains and how long you can use your laptop before you must plug it in—depends on several factors. Most of these factors fall into the following two categories: What you use the laptop for. Because some activities drain the battery faster than others (for example, watching a DVD consumes more power than reading and writing e-mail), alternating between activities that have significantly different power requirements changes the rate at which your laptop uses battery power. This can vary the estimate of how much battery charge remains. Battery hardware and sensor circuitry. Newer, "smart" batteries are equipped with circuitry that calculates the measurements of charge remaining and reports the information to the battery meter. Older batteries use less sophisticated circuitry and might be less accurate.
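    For what it's worth, a hedged way to see whether the battery is reporting a runtime estimate to Windows at all is to query WMI from a command prompt; EstimatedRunTime is in minutes and is only meaningful while running on battery power:

        REM Battery-reported estimates from the Win32_Battery WMI class
        wmic path Win32_Battery get EstimatedChargeRemaining,EstimatedRunTime,Status

        REM Windows 7 energy report (written to energy-report.html), including battery details
        powercfg -energy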

    Read the article

  • My hard drive seems to be overheating... what should I do?

    - by George Edison
    After a cold boot, the hard drive in my notebook jumps to 56 °C within an hour or so of idling. Is 56 °C a cause for alarm? Notes: The notebook is on a flat desk and none of the vents are obstructed. The video card is currently at 55 °C and the CPU at 50 °C. It's a Western Digital 250GB hard drive. SMART reports the drive healthy, but does show a warning. Edit: this problem had a very surprising ending. I inverted the notebook and unscrewed some of the panels on the back (there was one covering the hard drive, and one that provided access to the memory). I couldn't see any dust, so I simply screwed everything back together and powered it on... and it worked! The temperature is now staying at 46 °C, and it feels notably cooler to the touch. So I can only assume that some internal fan was malfunctioning or something. Whatever the case, it's working now so I won't complain. Edit: I have an SSD now, so temperature isn't as big an issue as it was when I had a mechanical drive.
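    A hedged way to keep an eye on the drive's own temperature sensor is to read the SMART attributes directly, assuming smartmontools is installed and the disk is /dev/sda (the tool also runs on Windows; adjust the device name to suit):

        sudo smartctl -A /dev/sda | grep -iE 'temperature|airflow'   # on most drives, attributes 190/194 carry the drive temperature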

    Read the article

  • Mail.app doesn't detect sender in Address Book

    - by CoreSandello
    Hi there. I don't understand how 'smart addresses' in Mail.app work. Recently I noticed that for some emails I don't see the person's full name in the 'From' column. I started to dig into this behavior and found out that I have a few contacts in my Address Book that are not recognized by Mail.app. Here is how it looks: I have a person in Address Book with a filled email entry and a filled first/last name (localized). I have an incoming email from that person (from the email specified in Address Book), but the first/last name in the email itself doesn't match the one specified in Address Book (e.g. the 'From' field in the email looks like 'John [work] <[email protected]>' while the Address Book entry is 'John Smith' (localized, in Russian)). And Mail.app doesn't recognize that this mail is originating from that person in Address Book: if I click on the 'From' field, it suggests adding the sender to Address Book, while for other emails I get a 'Show in Address Book' menu entry (especially for ones with the full localized name in the 'From' field). I'm wondering, is that behavior correct or am I missing something? I'm using Snow Leopard and Mail 4.0; my system language is set to English, if that matters. I'd like some clarification on this Mail.app behavior: whether it is fixable or not (and if it's fixable, I'd like to see a fix). By the way, is it possible to match a sender's address against an Address Book entry in filter rules? That would be great, as I could then create rules like 'move all mail from that person to that folder' without specifying the exact source address. Thanks, Ivan.

    Read the article

  • Do not require SMTP authentication for a specific domain using hMailServer

    - by toryan
    One of my clients has a needlessly complex e-mail setup for a couple of domains, which is causing problems when they try to send e-mail between them. They have a couple of domains where mail follows a slightly weird path:
    - Users connect to an Exchange server to send e-mail
    - The Exchange server relays the message to an ISP-owned SMTP server as a smart host
    - The ISP's server delivers the mail to the mail exchanger specified in DNS
    - The mail exchanger is another server that runs hMailServer
    - The Exchange server connects to the hMail server via POP3 and retrieves the messages.
    The problem arises when they send mail between addresses in the same domain, or two addresses that are present on the hMail server. hMail requires SMTP authentication when sending from local to local addresses, so the messages don't arrive. Removing SMTP authentication isn't really an option, as the server has been the target of spam sent from spoofed local addresses; SMTP authentication prevents this. It is possible to add the ISP's mail server as an IP range with specific rules, but this seems inelegant. Bearing in mind I only have access to the hMail server and not the Exchange server, is there a better way of going about this?

    Read the article

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, no 3.5" drives. I had been using the hard drive as a storage drive until now. When I go to install Windows on the hard drive, I'm first prompted at the BIOS with: Hard Disk: S.M.A.R.T. Status BAD, Back up and replace. And then again in the Windows Setup, informing me that the hard drive is bad. So I did a full format of the drive and tried again. Same error. So I took it out and hooked it back up to my other computer via a SATA-USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scan it for errors by going right click -> Properties -> Tools -> Error checking, it reports that the hard drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plug my 1.5 TB 3.5" drive into the computer that gives me the S.M.A.R.T. error on the 2.5" drive, it is recognized with no problems. Any ideas on why this is happening and how I can fix it?
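    One thing worth knowing: the Windows error-checking tool only inspects the file system, not the drive's SMART health, so it can report "fine" while the BIOS is reading failing SMART attributes. A hedged sketch of reading the SMART data yourself with smartmontools, assuming the drive is the second physical disk behind the USB adapter; the -d option depends on the USB bridge chip, and if sat does not work other types such as usbjmicron or usbcypress can be tried:

        smartctl -a -d sat /dev/sdb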

    Read the article

  • UPS with an HP ProLiant server

    - by Groo
    We placed an Eaton Ellipse MAX 1500 (900W) as the UPS for our HP ProLiant ML350 G6. Upon the first power failure (actually we only moved the UPS's input plug to a different socket), the server immediately turned off, and the Health LED turned red and started blinking. The UPS had been in operation for about a week before that, with the battery fully charged to 100%. Since our server's hot-plug supply is 460W, we are pretty sure we haven't overloaded it; the server was completely idle at that time (no web or Windows apps running except Windows Server core services). Then we tried the same with a different, no-name older PC (Core 2 Duo, 2GB RAM) with a generic power supply (not sure what its rating is) and it continued working when we pulled the plug out. UPS load was less than 15% (measured in the provided Eaton utility). We measured the UPS's output voltage using a smart oscilloscope and the THD of the UPS output waveform turned out to be 40%. Did you have similar experiences? Could this be a faulty UPS? Or a faulty power supply? Or some HP sensors configured to trigger too strictly? I wouldn't like to replace this UPS with the same brand only to get the same results. [Edit] I also tried this while the server was turned off. While the UPS is running on battery, the server will not start - as soon as I press the power button, the Health LED starts blinking red.

    Read the article

  • Opening and Testing Ports on Modem > Router Connection

    - by JakeTheSnake
    Working off of my last question, I can access my server's FTP over the LAN but not over the internet. I'm using FileZilla on port 666. My router/modem configuration is as follows (similar to the other post):
    1) The modem connects to the WAN
    2) The WAN port on the modem connects to a LAN port on the router
    3) The modem's internal IP address is 192.168.0.254
    4) The router's internal IP address is 192.168.0.1
    5) The modem has DHCP turned OFF
    6) The router has DHCP turned ON
    7) The router is running Tomato firmware and is set as 'Router' (not 'Gateway')
    8) The internet is working (just had to say that)
    I've set up port forwarding both on the modem and the router - both route port 666 (TCP) to 192.168.0.3; that is the IP address of the server running FileZilla. I don't know if that's hindering anything, but I've also tried it with just the modem and just the router... same result. I've also tried setting the server as the DMZ host (both on the router and the modem). Neither the router nor the modem has anything in its logs about denying inbound traffic on port 666, so my ability to troubleshoot stops there. I've tried contacting my ISP (Telus, running on a mobility plan... it's a "Smart" Hub) but they weren't much help. They said they only block ports 25 and 80 and maybe a few others, but not most ports. I test whether or not the port is open by going to canyouseeme.org - I don't know whether that would produce a 'connection refused' result just because the FTP requires a login... I'm not well versed in this matter. FWIW, sometimes I get a 'connection refused' error on canyouseeme.org but mostly it's 'connection timed out'. I don't know what else to do at this point.
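    A hedged pair of checks that help narrow this down, assuming the FileZilla server is the Windows box at 192.168.0.3 and <public-ip> stands in for whatever canyouseeme.org reports. Note also that FTP uses a separate data connection, so even once port 666 is reachable, the passive-mode data ports (configurable in FileZilla Server's passive mode settings) need forwarding as well:

        REM On the server: confirm something is LISTENING on 0.0.0.0:666
        netstat -an | findstr :666

        REM From a machine outside the LAN (e.g. a phone off Wi-Fi):
        REM a banner or blank screen means the forward works; a timeout means it does not
        telnet <public-ip> 666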

    Read the article

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc... But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly" I know that it's Nginx that's causing the trouble because if I open the port that Tornado is running on and try my git commands through that (i.e. "git pull http://mysite.com:8000/myrepository master" instead of "git pull http://mysite.com/myrepository master") everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure that it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but this feels like a hack rather than a clean solution...
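    A hedged sketch of an Nginx location block for proxying to the Tornado process, assuming it listens on 127.0.0.1:8000; the directives most often implicated in git-over-HTTP breakage are response buffering, the request body size limit, and gzip:

        location / {
            proxy_pass            http://127.0.0.1:8000;
            proxy_set_header      Host $host;
            proxy_set_header      X-Real-IP $remote_addr;
            proxy_buffering       off;   # don't buffer pack data on its way back to git
            proxy_read_timeout    300;   # large clones/pushes can take a while
            client_max_body_size  0;     # don't reject large pushes with 413
            gzip                  off;   # avoid re-encoding git's already-compressed packs
        }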

    Read the article

  • Brainless Backups

    - by Jesse
    I’m a software developer by trade, which means to my friends and family I’m just a “computer guy”. It’s assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I also can do all of this blindly over the phone or after hearing a five to ten word description of the problem over dinner ;-) In my position as CIO of my friends and family I’ve been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It’s not a great situation for anyone, and it’s always at these times that the importance of backups becomes so clear. Several months back a friend of mine found himself in this situation. The hard drive on his 8 year old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn’t enough… you actually have to copy your stuff over every once in a while! The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you’re talking about users at home, however, the need for these backups is often understood far too late. Most typical users can’t be expected to remember to back up their data regularly and also don’t always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here’s why I like them:
    They’re affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite’s unlimited storage plan for $50-$60 per year is the best value around.
    They’re easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I’ve never used Mozy, but I imagine it’s similarly painless to get up and running.
    Backups are automatically “off-site”: A backup that is sitting on an external hard drive right next to your computer is great, but might not protect against flood damage, a power surge, or other disasters in that single location. These services exist “in the cloud” so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the 3rd party to protect your data from both malicious people and disastrous events. This truly is a bit of a double edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers that focus on the business of doing backups right.
    Backups are “brainless”: What I like most about services like these is that they work “automagically” in the background, watching for files to be updated and automatically backing up those changes. There’s no need to remember to plug in that external drive and copy your data over.
Since starting to recommend these services to my friends and family I find myself wearing my “data recovery” hat far less often. The only way backups are effective for your standard computer user is if they’re completely automatic. Backups need to be brainless, or they just won’t work.

    Read the article

  • On Windows XP, Start > Run "My Documents" sometimes doesn't work

    - by Clayton Hughes
    On all of my home computers, I can enter "my documents" into the Start Run prompt and the My Documents folder of the current profile will open up. What's more, I can continue typing subfolders, files, etc. and auto-complete works and it's smart and enjoyable. I can't check at the moment, but I'm almost positive entries like "My Pictures" and "My Music" also go to their correct folders. On my work computers, if I enter "my documents" into the Start Run prompt, I get the following error: "Windows cannot find 'my'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start Button, and then click Search." I can sort of circumvent this by creating a shortcut in my PATH named 'my' that points to the My Documents folder, but this doesn't solve the auto-complete problem (and it's otherwise imperfect, of course, because "my pictures" or "my music" all direct to the same place). A Google search doesn't provide much help on this, although it does identify a poster in 2007 with this same question at another board: http://www.msfn.org/board/lofiversion/index.php/t124813.html (Login required, but Google cache available here: http://preview.tinyurl.com/ygxhwwl) Is this just a limitation of computers belonging to a domain, or is there some way I can get this functionality back? My Documents folder does live in the standard place (C:\Documents and Settings\{username}\My Documents), and not on a network drive or anything. It's probably worth adding that the computers are part of some freakish Novell domain thing, too. I'm not in IT here so I'm not too up on the details. Thanks for any help/suggestions!
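    A hedged workaround for at least opening the right folder, assuming the Run dialog on these machines behaves like a stock XP install; either of the following typed into Start > Run should open the current profile's My Documents (it does not bring back the auto-complete behaviour, and the first form follows any folder redirection a domain policy may have applied):

        shell:Personal
        %USERPROFILE%\My Documents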

    Read the article

  • Advice needed for a home network setup (hardware & software) to handle many clients and potentially heavy traffic

    - by posdef
    I have recently decided to restructure the home network of our flatshare here. Here's a quick outline of the situation. I envision having the following four devices connected to the router via cable:
    - Xbox 360
    - IP phone
    - Printer
    - QNAP server (web, file and multimedia)
    We are three people living here, so on top of that there will be 5-6 computers/mobile devices connecting as wireless clients. My goal is to be able to transfer files (when needed) between the computer and the multimedia server, which I can reach via the 360 and play on the TV. I also would like to keep a high level of security; right now I have the encryption on WPA2 and MAC filtering. I don't believe the web server will get heavy traffic, though I would like to have it responsive. Likewise, I don't have a habit of downloading via torrents etc., but I greatly appreciate my network being responsive and fast, especially when I am browsing or streaming high-quality media. Now my questions are: is this setup feasible? Smart? Efficient? Can it be improved somehow? My current router (D-Link DI-624) and the previous one (DI-524) used to have spontaneous network drops, which I find highly irritating. I don't have much faith in my router, especially now that it completely crashed when I was test-running the setup by transferring a large media file to the server while the Xbox was playing music from the server and two computers were browsing the net. Do I need to get new hardware, and if so, any recommendations for a reliable and fast router?

    Read the article

  • How to edit files directly on WebDAV in Windows

    - by phazei
    I have a WebDAV setup with the cPanel web disk. I can connect to it through NetHood and I can drag and drop files to/from there. What I can't do is simply edit any of the files directly. I need to copy a file somewhere else, edit it, then copy it back. That's essentially what is needed with FTP, though smart clients will monitor the file, making it easier than WebDAV in the current state I'm using it in. I was under the impression that WebDAV was supposed to let me work on the files as if they were on a local drive. But nothing can actually open the files. How can I go about getting more functionality out of it? Or is this as good as it gets? I have tried 'net use q: https://myserver.com:2078' and 'net use q: \\myserver.com@SSL:2078\', but neither works; both throw: "System error 67 has occurred. The network name cannot be found." I ultimately want to use TortoiseSVN with the WebDAV share so I can have my working copy running on the server.
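    A hedged sketch of the mapping syntax the Windows WebDAV redirector expects for an SSL share on a non-standard port; the exact path (and whether DavWWWRoot is needed) depends on how the cPanel web disk exposes its root, and the username is a placeholder:

        REM The WebClient service must be running for WebDAV drive mappings
        net start WebClient

        REM Map drive Q: over SSL on port 2078
        net use Q: "\\myserver.com@SSL@2078\DavWWWRoot" /user:yourcpaneluser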

    Read the article

  • How can we improve overall Programmer Education & Training?

    - by crosenblum
    Last week, I was viewing this amazing interview by Kevin Rose of Philip Rosedale, of Second Life. And they had an amazing discussion about how to find, hire and identify good programmers, and how hard it is to find good ones. Which has led me to really think about the way we programmers learn and are taught. For a majority of us, myself included, we are self-taught. Which is great about being a programmer: anyone can learn and develop skills. But this also means that there are no real standards of what a good programmer is, and what kind of environments encourage the growth of programming skills. This isn't so much a question, but just a desire in me, to see how we can change the culture of programming, and the managers of programming, so that education and self-improvement are encouraged. There are a lot of avenues for continued education - YouTube videos, books, conferences - but because of the experiential nature of what we do, it isn't always clear what's important to learn and to master. Let's look at the Joel 12 Steps.
    The Joel Test
    1. Do you use source control?
    2. Can you make a build in one step?
    3. Do you make daily builds?
    4. Do you have a bug database?
    5. Do you fix bugs before writing new code?
    6. Do you have an up-to-date schedule?
    7. Do you have a spec?
    8. Do programmers have quiet working conditions?
    9. Do you use the best tools money can buy?
    10. Do you have testers?
    11. Do new candidates write code during their interview?
    12. Do you do hallway usability testing?
    I think all of these have important value, but because of something I call the Experiential Gap, if a programmer or manager has never experienced any of the negative consequences of not having done the items on the list, they will never see the need to do any of them. The Experiential Gap is my basic theory that each of us has different jobs and different experiences. So for some of us, who have always worked with dozens of programmers, source control is a must-have. But people who have always been the only programmer cannot imagine the need for source control. And it's because of this major flaw in how we learn that we evaluate people by which best practices they do or do not follow, and the reasons for either can start a flame war. We always evaluate people in our field by what they do, and think "Oh, if this guy/gal isn't doing xyz best practice, he/she can't be a good programmer, so let's not waste time or energy talking to them." This is exactly why we have so many programming flame wars: because of the Experiential Gap, we can't imagine people not having made the decisions that we have had to make. So this has led me to think that we totally need to rethink how we train, educate and manage programmers. For example, what percentage of you have had encouragement from your managers to go to conferences, and even had them pay for it? For me, and a lot of people, this is extremely rare; a lot of us would love to go to conferences, to learn more, but the money ain't there to do that. So the point of this question is really to spark thinking about how we can train, learn and manage better. How can we create a new culture of learning that doesn't insult people for not having the same job experiences? Yes, we all have jobs and work to do, but our ability to do our jobs well depends on our desire, interest and support in improving our mastery of our skills. 
Right now, I see our culture as rather disorganized: we support the elite, but the many of us who want to get better just don't have enough support to learn and improve ourselves. I mean, do we, as an industry, want to be perceived as just replaceable cogs? Thank you...

    Read the article

  • Why is /usr/bin/env permission denied to the Rails server?

    - by Eric Hopkins
    I've just set up Rails on an Apache server running on Ubuntu, and when I try to go to the root page it gives this error:

        /usr/bin/env: bash: Permission denied

    env and all the directories in its path have permissions 755. I tried setting env to have permissions 777 but still got the same error. Rails is running as "nobody". Why is this happening? I don't know what else to try. In /etc/apache2/sites-available/api.conf:

        <VirtualHost *:80>
          ServerName api.thinknation.ca
          ServerAlias api.thinknation.ca
          DocumentRoot /var/www/api/public
          ErrorLog /var/www/logs/error.log
          CustomLog /var/www/logs/access.log combined
          RailsSpawnMethod smart
          <Directory /var/www/api/public>
            # This relaxes Apache security settings.
            AllowOverride all
            # MultiViews must be turned off.
            Options -MultiViews -Indexes
            # Uncomment this if you're on Apache >= 2.4:
            Order allow,deny
            Allow from all
            #Require all granted
          </Directory>
        </VirtualHost>

    From config/database.yml in my rails directory (with sensitive user names and passwords omitted):

        default: &default
          adapter: mysql2
          encoding: utf8
          pool: 5
          username: root
          password:
          socket: /var/run/mysqld/mysqld.sock

        development:
          <<: *default
          database: api_development

        test:
          <<: *default
          database: api_test

        production:
          <<: *default
          url: <%= ENV['DATABASE_URL'] %>
          database: api
          username: ------------
          password: ------------

    Not sure what other details or files are relevant, I will add them if needed.
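    A hedged set of checks for narrowing down which step is actually being denied, assuming the app really does run as "nobody" (as the RailsSpawnMethod/Passenger setup suggests) and that bash is what the failing script's shebang points at:

        # Execute bits along the whole path matter, not just on env itself
        ls -ld / /usr /usr/bin /usr/bin/env /bin /bin/bash

        # Reproduce the failure as the same user Apache/Passenger uses
        sudo -u nobody /usr/bin/env bash -c 'echo ok'

        # A filesystem mounted noexec would also produce "Permission denied"
        mount | grep noexec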

    Read the article

  • Unable to access Stack Exchange sites from this system

    - by Sandeepan Nath
    Earlier, I was not able to access most of the Stack Exchange sites like Stack Overflow, Programmers.SE, etc. on my home Windows XP system. I was able to access only a few like http://meta.stackexchange.com and not even http://www.meta.stackexchange.com (note the www). I tried many other sites like http://www.stackoverflow.com and http://area51.stackexchange.com/ but was getting page-not-found errors in all browsers. Even pinging from the terminal reported destination host unreachable. I have not checked recently, but maybe all SE sites are unreachable now. I was clueless about what could be the issue. I suspected a firewall issue, so I stopped AVG antivirus's firewall, then completely uninstalled it and even turned off Windows Firewall. But the sites are still not reachable, even after a fresh installation of Windows 7. Then I noticed a "Too many requests" notice on Google. This page - http://www.google.co.in/sorry/?continue=http://www.google.co.in/# I don't know why this appeared, but I guess somehow too many requests might have been sent to these sites and they blocked me. But in that case, SE would be smart enough to show a captcha like Google does. So, how can I confirm the problem and fix it? Similar questions like these don't look solved yet - Unable to access certain websites Unable to Access Certain Websites I have lately started actively participating in lots of SE sites. There are new questions popping up in my mind all the time and I am not able to ask them. Please help! Thanks
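    A hedged set of checks from a Windows command prompt that help separate a DNS problem from a routing or hosts-file problem; the host name below is just an example:

        REM Does the name resolve, and to a plausible address?
        nslookup stackoverflow.com

        REM Where along the path do packets stop?
        tracert stackoverflow.com

        REM Any stale or malicious entries for these sites?
        type %SystemRoot%\System32\drivers\etc\hosts

        REM Clear a possibly stale DNS cache, then retry in the browser
        ipconfig /flushdns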

    Read the article

  • BGP Multipath & return routes

    - by Dennis van der Stelt
    I'm probably a complete n00b concerning Server Fault related questions, but our IT department makes a bold statement I wish to verify. I've searched the internet but can find nothing related to my question, so I come here. We have Threat Management Gateway 2010, and we used to just route requests to IIS; each request contained the client's IP address, so we could see where it was coming from. But now they have turned on "Requests appear to come from the TMG server", so IP addresses aren't forwarded anymore; every request has the IP of the TMG server. Now, the idea behind this is that because of multipath BGP routes, the incoming request goes over route A, but the acknowledgement messages could return over route B. The claim is that because the response doesn't come from the first known source (our proxy) but instead from IIS, some smart routers on the visitor's side don't recognize the acknowledgement message and filter it out. In other words, the response never arrives. Again, this is the claim. But I cannot find ANY resources on the internet that support it. I do read about BGP multipath, but more in the sense that there are alternative routes when the fastest route fails for some reason. So is the claim completely bogus, or is there (some) truth to it? Can someone explain or point me to resources? Thanks in advance!

    Read the article

  • Router and Switch VLAN Configuration for Isolated Network

    - by Ben
    I haven't worked with VLANs much in the past, and I was hoping I could get a good explanation of what I need to set up for this to work. I have a Netgear WNR2000v2 router and a Netgear GS108T smart switch currently in my network. The fourth port on the router connects to port one on the switch. I would like to be able to isolate port 8 on the switch for use as a "guest port" when I bring home malware-infested PCs for repair. I figured the VLAN capabilities of the GS108T would be able to do this for me, but I think I have a misunderstanding of how VLANs actually work. Port 8 needs internet access but should not be able to communicate with the rest of the PCs on the home network. The subnet for the home network is 192.168.1.0/24 and I would like the guest PC to have A) 192.168.1.64 or B) 192.168.2.2. I am reading a lot of stuff about port trunking and VLAN membership, but I am confused as to which setup needs to be in place to make this work. Any help is greatly appreciated! Let me know if there is more information I need to provide. Definitely looking to learn something from this project. Thanks!

    Read the article

  • sendmail relay status

    - by Andy
    Hello all, I have a RHEL3 server with sendmail configured to relay mail to:

        # "Smart" relay host (may be null)
        DSmailrelay

    This relay server is an Exchange server not administered by me. A few days ago its IP address was changed without my knowledge, so I've updated the mail relay entry in /etc/hosts with the correct IP. Unfortunately no mail is currently going through, and maillog reports:

        Oct 26 14:32:39 fsimag sendmail[12580]: n9Q3VxPA012580: from=root, size=3685, class=0, nrcpts=1, msgid=<~R.*.2009102614315955@*>, relay=root@localhost
        Oct 26 14:32:39 fsimag sendmail[12580]: n9Q3VxPA012580: to=wodwest@*.net, delay=00:00:40, mailer=esmtp, pri=33685, dsn=4.4.3, stat=queued
        Oct 26 14:36:09 fsimag sendmail[13670]: n9Q3ZTcf013670: from=root, size=5831, class=0, nrcpts=1, msgid=<~R.medicus.2009102614352914@*>, relay=root@localhost
        Oct 26 14:36:09 fsimag sendmail[13670]: n9Q3ZTcf013670: to=tsgastro@(.net, delay=00:00:40, mailer=esmtp, pri=35831, dsn=4.4.3, stat=queued
        Oct 26 14:36:50 fsimag sendmail[13882]: n9Q3aAxj013882: from=root, size=5830, class=0, nrcpts=1, msgid=<~C.medicus.2009102614361009@*>, relay=root@localhost
        Oct 26 14:36:50 fsimag sendmail[13882]: n9Q3aAxj013882: to=elmwood@*.net, delay=00:00:40, mailer=esmtp, pri=35830, dsn=4.4.3, stat=queued

    (With domains obscured.) The mailq command shows nothing, and I've also tried connecting to this new mail server via telnet and sending manually; that too reports the message as queued but not sent. The administrator of that machine has put it back to me, saying he sees no problems, and I just want to cover everything before passing it back to him. Are there any other tests, logs or reasons for sendmail to only report messages as "stat=queued"? I've looked in previous logs and the relay is set to root@localhost in those too, but none were ever marked as queued. Thanks for any help, Andy
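    For what it's worth, dsn=4.4.3 generally indicates a temporary routing or name-lookup failure, so it is worth confirming exactly how this host resolves "mailrelay" and whether port 25 on the relay is reachable. A hedged sketch, using the relay name from sendmail.cf:

        # Is the queue really empty, and what happens when it is flushed verbosely?
        sendmail -bp
        sendmail -q -v

        # How does "mailrelay" resolve on this box, and does it accept SMTP connections?
        grep -i mailrelay /etc/hosts /etc/mail/sendmail.cf
        telnet mailrelay 25

    One related sendmail detail: if the smart host is meant to be resolved as a plain host name rather than via an MX lookup, writing it as DS[mailrelay] (square brackets) is the usual way to suppress the MX lookup.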

    Read the article

  • Mac Security - Which one?

    - by Bob Rivers
    Hi, Recently I had my credit card cloned. A few hours after shopping at an online store (which I trust and have bought from since 2006) I received a call from my bank asking if I recognized a $5,000 charge at a store(?!) called Church of Christ... I'm a Mac user (OS X 10.6.3). I have always kept my system updated and I have the firewall enabled (on my Mac and on my broadband router), but I decided to adopt some kind of protection. I don't want to start passionate discussions. Real or not, snake oil or not, I need to have my peace of mind back... I read this and this posts. I selected two products that I think could help me (both have more features than just an antivirus). Does anyone have feedback about Intego's VirusBarrier X6 or Trend Micro's Smart Surfing? Intego's solution seems to be better, but the Trend Micro brand/name is stronger in corporate environments, so their solution should be good. Both solutions have a 30-day free trial, but I would like to hear something from you. Any other solutions that I should look at? TIA, Bob

    Read the article
