Search Results

Search found 780 results on 32 pages for 'hell'.

Page 20 of 32

  • Misadventures at Radio Shack

    - by Chris Williams
    While I'm waiting for my Arduino kits to show up, I started reading the Getting Started with Arduino book from O'Reilly (review coming later), and I'm about 40 pages in when I get to a parts list for one of the first projects. Looks pretty straightforward, and it even has Radio Shack part numbers next to almost everything. So on my lunch today, I decided to run out to "The Shack" (seriously, that's their rebranding?) to pick up some basics: a couple of resistors, a breadboard, a momentary switch and a pack of pre-cut jumper wires. I found the resistors without any difficulty, and while they didn't have the exact switch I wanted, it was easy enough to find one that would do. That's where my good luck abruptly ended. I was surprised that I couldn't find a breadboard or any jumper wires, so while I was looking around, a guy came up and asked me if I needed some help. I told him I did, explained what I needed and even gave him the Radio Shack part number for the pack of jumper wires. After a couple of minutes he says he can't find anything in the system, which was unfortunate but not the end of the world. So then I asked him about the breadboard, and he pointed me to some blank circuit boards (which are not the same thing). I said (nicely) that those weren't breadboards and attempted to explain (again) what I needed, at which point he says to me, "I don't even know what the hell you're looking for!" I stood there for a moment and tried to process his words. About that time, another salesperson came up and asked what I was trying to find. I told her I needed a breadboard, and she pointed to the blank circuit boards and said, "They're right in front of you..." After seeing the look on my face, she thought for a minute and said... "OH! You mean those white things. We don't have those anymore." I thanked her, set everything down on the counter and left. (I wasn't going to buy only half the stuff I needed... and I was pretty sure I was never going to be buying ANYTHING at that particular location ever again.) Guess I'll be ordering more stuff online at this point. It's a shame really, because I used to LOVE going to Radio Shack as a kid and looking through all the cool electronics components, even if I didn't understand what most of them were at the time. Seems like the only things they carry in any quantity/variety now are cell phones and random stereo connectors.

    Read the article

  • Why should I not do a master's degree?

    - by aurel
    I left university in July 2010, where I studied web design (as we all know, you learn more by yourself, but that's not the issue at the moment). Since then I have not managed to find a job (apart from one month of work experience). From the way things are going, and taking into account the fact that all my university friends are in the same situation, I don't think I am going to find a job within the industry soon. Now, even though I don't have a job, I am still working on personal projects and trying to keep up to date (I don't need a job or uni to do this). But I am thinking: since there is no work available, would it be worth going back to uni for a master's degree? I know I don't need it, and I know it is unlikely I will learn anything important, as I believe in self-learning, which in most cases is a lot more effective (though I have to say I don't mind going back to school). The only reason I am thinking of doing the master's is this (and this is where I need your help): if it takes me a year to get a job, then at the interview, would the employer think "what the hell did this guy do since he left university"? Going back to university would solve that problem. Or am I making up a problem that does not exist? Plus, I know that employers want examples of sites I have been working on, and at the moment I only have three (when working on personal projects with no time limit, I tend to drag things out trying to get them perfect, and they never get perfect), so going back to uni might solve this problem too. I say all this having read a lot about the fact that you don't need a master's degree to work in the web design market (and I totally agree). But given my concern, the question is: should I do a master's course to avoid just spending hours in my room working and learning on my own (knowing it would be hard to convince employers that I was really learning in my room)? Maybe it's because I'm still young (22, not that old anyway :)), but I don't have the "dream" of being rich, so to tell the truth I don't really care that I don't have a job at the moment, because regardless, I am working on what I love every day. But I know that in the future, when I do need a job, I may find it harder to get one if I neglect this now. Every time I ask a question I'm not sure about, I keep going on and on, but I really hope you get what I am trying to get across. By the way, the course I am looking at for a master's says it would teach me: e-commerce, e-government, e-science and e-learning. I don't know any of them apart from e-commerce. Thanks

    Read the article

  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only. Create a new web project. I created an empty MVC 2 application, but MVC is not required for OData. Add a new WCF Data Service to the project. I named mine NastyWords.svc since I'm serving up a list of nasty words. Add a class to expose via the service:

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property, which would have uniquely identified them without needing the "DataServiceKey" attribute, because the DataService applies some reflection and heuristics to guess which property is the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive. Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord { Word = "crap" },
                new NastyWord { Word = "darn" },
                new NastyWord { Word = "hell" },
                new NastyWord { Word = "shucks" }
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent, so I can build up my list of nasty words. Enjoy!
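    As a quick sanity check of the finished feed, standard OData query options can be exercised straight from the command line. A minimal sketch, assuming the service is deployed at a hypothetical localhost URL (adjust host and port to your setup):

        # fetch the whole (tiny) entity set
        curl 'http://localhost/NastyWords.svc/NastyWords'
        # standard OData v2 query options work too, e.g. filtering and paging
        # (%27 is a URL-encoded single quote, to keep the shell quoting simple)
        curl 'http://localhost/NastyWords.svc/NastyWords?$filter=startswith(Word,%27c%27)&$top=2'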

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty; actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the Excel template. Damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish. Enter our Queen of Excel, Shirley from the development team. Shirley is singlehandedly responsible for the Excel templates; I put her through six months of hell a few years back with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template and regular Excel users will know that you can only use the naming function once, i.e. the names have to be unique across the workbook, so you cannot reuse a cell/group name. To get around this you can just come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example:

        XDO_?DEPTNO_SUMMARY?    <?DEPTNO?>
        XDO_?DNAME_SUMMARY?     <?DNAME?>
        XDO_GROUP_?G_D_DETAIL?  <xsl:for-each-group select=".//G_D" group-by="./DEPTNO">
        XDO_?DEPTNO_DETAIL?     <?DEPTNO?>

    As you can see, DEPTNO has been referenced twice and mapped to different named values in the left-hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <? ... ?> and native XSL commands, so the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right-click on the sheet names and use the Unhide command to show it.

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation is that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment presents a management nightmare to all commercial users responsible for complying with the regulations that control the conduct of business, such as tracking, retaining, and recording company documents. SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. PST files in previous versions of Outlook actually blew up without warning when they reached the 2 GB limit, becoming corrupted and inaccessible and leading to a thriving industry of 3rd-party tools to clear up the mess. Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or onto USB keys for off-line working. The result is that the System Administrator is faced with a complex hybrid system where backups have to be taken from servers and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell. If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • Developer career: feeling like going back in time with every new job [closed]

    - by komediant
    Is there a good category for this question? My background is a bachelor's in ICT, and I have been programming as a hobby since I was around twelve, I think. Started with QBasic, Pascal, C, Java, et cetera. I have now been working for about eight or nine years: half in the academic/medical world and half in the corporate world. A few years ago I started with frameworks, beginning with Grails (with Spring/Hibernate underneath), which was a heavenly job: very productive and no hassle. At my previous job I developed in pure Spring/Hibernate Java, which meant writing a few more annotations and more XML, with none of Grails' conventions. But still, I liked Spring/Hibernate a lot, along with the professional setup: a development street (build pipeline), versioning, Jenkins/Sonar, log4j and a good IDE like IntelliJ. It felt quite 'clear' and organised, although I knew Grails, which felt a bit more productive. But... at my current job almost half the code is pure servlets, hand-coded JDBC (connections managed by yourself), scriptlets in all the JSP pages, no service layer, no versioning, no Maven, HTML in the DAO layer, JAR hell, no hot-swap deployment locally; for every change you have to deploy and hope it works fine on the server. All local development needs ugly scriptlet tags to check which environment it is running in. Et cetera. Now and then the developers work on into the evening (I don't), and still lots of issues go unsolved and new projects are waiting. I hear the developers complaining, but somehow they feel like what they have now is "advanced", or they are in a sort of comfort zone. The lead developer seems open to new things, but half the time he says he can implement MVC framework features himself instead of using what is already out there. So in short, I currently feel like I am missing out on all the modern framework techniques and that the company is moving forward very slowly. I have only worked here for two months. What I do now is also code some partially ugly stuff, but it goes completely against my nature and I feel uncomfortable with it. Coding something takes longer than estimated, my manager complains about why it takes so long, and I feel ashamed for needing so much time. Where I used to just write a query, I now build up whole try/catch methods. My manager knows my complaints, and the developers do too. There will be a meeting to lay out plans for 2013 on technology and the issues I and the company are facing. I am not looking for another job yet; it's close to where I live and the economy is fragile. Has anyone else had this kind of career, feeling like you're going backwards with technology at every new job? And how did you cope with it?

    Read the article

  • Task Scheduler not running .bat or .vbs successfully

    - by Django Reinhardt
    Hi there, got this weird problem, which will hopefully have an obvious solution for some enlightened soul: We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs script stopped successfully executing... but oddly it worked fine when run manually! The error given in these circumstances was always "Timeout". We thought we'd try a little creative thinking and run the .vbs another way: via a .bat file. Again we hit weird issues, but with a little more debugging information this time around. The .bat file is nothing more than...

        CScript "C:\location\script.vbs" > Log.txt

    But the Task Scheduler fails with the following error:

        0x1: An incorrect function was called or an unknown function was called.

    The Log.txt file says:

        CScript Error: Initialization of the Windows Script Host failed.
        (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64), and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!
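    A hedged lead worth checking: "Not enough storage is available to process this command" from WSH under the Task Scheduler is commonly attributed to exhaustion of the non-interactive desktop heap rather than disk space or memory. A diagnostic sketch (the registry location is the standard Session Manager key; whether the heap is actually the culprit here is an assumption):

        rem inspect the desktop heap configuration; the SharedSection triple in the
        rem "Windows" value sizes the system, interactive and non-interactive heaps
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows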

    Read the article

  • Task Scheduler not able to execute .vbs scripts successfully

    - by Django Reinhardt
    Apologies if this has a really obvious answer! We have several daily tasks we run via a .vbs script on our server (through the Task Scheduler), and for months it has been fine, but recently we've hit a problem. The .vbs scripts stopped successfully executing (always timing out)... but could still be executed manually with no problems(!). Not knowing any good reason why the Task Scheduler should start having problems, we thought we'd try a little "creative thinking" and run the .vbs another way: via a .bat file executed by the Task Scheduler. Again we hit weird issues, but with a little more debugging information this time around. The .bat file run by Task Scheduler is nothing more than...

        CScript "C:\location\script.vbs" > Log.txt

    But after an attempt to run it, the Task Scheduler fails with the following error:

        0x1: An incorrect function was called or an unknown function was called.

    The Log.txt (as output from the .bat file above) says:

        CScript Error: Initialization of the Windows Script Host failed.
        (Not enough storage is available to process this command.)

    But get this: the .bat file executes perfectly (vbs script and all) if it's executed with a double click! There's only a problem when it's run by Task Scheduler. What the hell? We're running Windows Server 2008 R2 (x64), and yes, the Task Scheduler's results are the same whether the user is logged in or not. Also, the user that can run the scripts successfully manually is the same user that runs the scripts in Task Scheduler. Thanks for any help with this weird problem!

    Read the article

  • How do I get more information on a potential network freeloader?

    - by Dov
    I have a home network set up, complete with a relatively good password. I'm on Mac OS X 10.6 (Snow Leopard) and have noticed, on occasion, a computer showing up in my Finder's Shared section that is not one of my own (the "pe-xpjalle" box pictured below). He has a tendency to come and go. How can I figure out his MAC address or something, so I can block him? I checked "Logs and Statistics" in the AirPort Utility and didn't see that computer under DHCP clients. I'd rather not change my password, since I have quite a few devices I'd have to update. Is there any other reason he'd show up on my network besides having guessed my password? Update: I fixed the Dropbox URL above (how embarrassing, I'm new to Dropbox; thanks for the heads up, Doug.) Update 2: I tried clicking on "Connect as..." just for the hell of it, and got the dialog below. Now I have even less idea what's going on than before. I don't have Parallels or VMware running, just the following: Transmission, NetNewsWire, Mail, Things, Safari, iTunes, Photoshop, Pages, Yojimbo, Preferences, AppleScript Editor, Software Update, AirPort Utility, and Terminal. I don't think any of those create a virtual network machine, right? And no VMware machine of mine has ever had a name resembling "pe-xpjalle". Update 3: I just changed my passwords on both my N-only and G-only networks, and I'm still seeing this, so I highly doubt that it's someone who's figured out my password (twice now). I'm really stumped.
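    One low-tech answer to the MAC-address part, sketched under the assumption that the mystery box is on the local subnet: click it once in Finder (so the Mac exchanges some traffic with it), then dump the ARP cache from Terminal and look for an entry that isn't one of your known devices.

        # list the IP-to-MAC mappings this Mac has seen recently
        arp -a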

    Read the article

  • Drobo FS vs. MacBook Pro: Finder works, Drobo Dashboard doesn't

    - by dash-tom-bang
    Does anyone have any experience with the new Drobo FS, specifically using it from a MacBook Pro? My experience thus far is this: Set up the Drobo Dashboard software (hereinafter called simply 'Dashboard') on my WinXP machine, which is hard-wired to the network to do the data migration from my NAS-that's-being-replaced (a 250G SimpleShare which works well enough but I was always afraid of losing the one disk). The Dashboard seems to work ok, except that the DroboCopy function doesn't work at all. This is the backup solution, which I can configure, and if I launch it (e.g. to back up from the old NAS to the Drobo) it spins the NAS, seeking the drive all over hell and creation, until finally giving up an hour+ later with zero files copied. Selecting only a subset of the data yields the same effect albeit more quickly. On my Mac I installed the Dashboard software too, since most of my fiddling with the device will be from my couch in the living room. Finder connects to the box just fine, fwiw, but Dashboard just sits there, "waiting for connection." This is considerably more bothersome than the above paragraph, but I figured I'd give whatever information I have. Drobo is insisting that I send them this "Debug Log" file that their software generates. Does anyone know what's in it? It's encrypted and they won't tell me, which spooks me just a bit; not like I'm terribly concerned about privacy but I don't want to be sending personal information out to every clown who says they "need" it in order to help me. thanks a ton, -tom!

    Read the article

  • IIS8 behind a VPN + Windows Server 2012 - how to properly bind IP+Port

    - by ryugen
    This is my first question, so I hope I'm going to give you enough information. I'm running Windows Server 2012 within the Hyper-V environment of my Windows 8 machine. Within Windows Server 2012 I'm running a VPN tool based on OpenVPN to hide my real IP. When I run IIS8 with the VPN disconnected, it works flawlessly over the Internet (port 80 forwarded correctly). But as soon as I connect to the VPN, I can't reach my site through the domain anymore. Now I've tried basically everything I know, which is why I'm asking you guys. I tried binding IIS8 to the IP of my virtual ethernet card. I tried changing the priority of the NIC through the "Network and Sharing Center" via the advanced tab. I used ipconfig /flushdns in case there was something wrong in the DNS handling. Hell, I even turned off the Windows firewall. I also used a port scanner to verify the problem: the webserver is reachable on port 80 with the VPN disconnected and immediately becomes unreachable on connect. Theoretically both IPs (my regular one AND the VPN) should be reachable, or at least not impair each other, right? Do you have any other suggestion? Do I have to route something somewhere somehow?
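    One hedged thing to rule out is the VPN claiming the default route, so that replies to inbound port-80 traffic leave through the tunnel instead of the physical NIC. A sketch from an elevated command prompt (the gateway address below is hypothetical; substitute the physical NIC's gateway from ipconfig):

        rem see which interface owns 0.0.0.0/0 while the VPN is up
        route print
        rem hypothetical repair, not a confirmed fix: keep a low-metric
        rem default route pinned to the physical NIC's gateway
        route -p add 0.0.0.0 mask 0.0.0.0 192.168.1.1 metric 5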

    Read the article

  • Network Traffic Log

    - by Chris Becke
    Background - on my "home" network I have a Linksys WRT54GL router providing my internet access as well as a wireless AP. Connected I have:
    * 2 Windows PCs (wired)
    * at least one laptop (wired)
    * some 802.11-enabled handheld consoles (PSPs)
    * a Nintendo Wii
    * some Windows XP PCs used by the people in the granny flat.
    Where I live, South Africa, well, 1GB worth of monthly cap is, while not expensive, costly enough that I'd like to be sure that all the bandwidth used by devices on my network is... well... legitimate and not the result of neighbours piggybacking on my wireless, malware, or just the result of "liberal" download policies in my software. I got the Linksys WRT54GL on the understanding that there were custom firmwares (DD-WRT and Tomato) that allowed bandwidth tracking, but there doesn't seem to be any facility to get a log of traffic that can be examined to see (a) which local devices were the biggest consumers of bandwidth and (b) what they were connected to. What tools are there for logging traffic such that, when it gets to that OMG moment in the month when all my bandwidth is gone, I have a chance to find out what the hell used it all up (and hopefully attempt some corrective action)?
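    If one of those custom firmwares does go on, a minimal per-device byte counter can be built from plain iptables with no extra packages. A sketch assuming DD-WRT's shell and a hypothetical client at 192.168.1.101 (a rule with no -j target matches and counts without altering traffic):

        # count forwarded traffic from and to one LAN client
        iptables -I FORWARD 1 -s 192.168.1.101
        iptables -I FORWARD 1 -d 192.168.1.101
        # read back the per-rule packet/byte counters
        iptables -L FORWARD -v -n -x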

    Read the article

  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard: In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account. Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is; I've read all the official articles on it and configured it a hundred different ways. It doesn't play well with passive connection types: passive mode has to be disabled on the client in order for it to work. And it doesn't have any way to allow one user to see multiple sites, no matter what binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7. So I'm wanting to install a third-party FTP server instead. I've looked at FileZilla Server and ZFTPServer. Anyone know of any pros/cons on these? Any other recommendations?
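    For anyone hitting the same wall, the pipe-delimited form is how IIS7's FTP virtual host names expect the login to arrive: the site's host name travels inside the user name itself. A sketch with a hypothetical host and account, using curl as the client:

        # the "hostname|user" pair selects the FTP site, then authenticates as the user
        curl -u 'domain.com|jason:secret' ftp://ftp.domain.com/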

    Read the article

  • Question for Vim search and peck typists

    - by mike
    I'm trying to write a Vim tutorial and I'd like to start by dismissing a few misconceptions, as well as giving some recommendations. I don't know if I should dismiss touch-typing as a misconception or include it as a recommended prerequisite. At the time I learned the editor, I had already been touch-typing for a couple of years, so I have absolutely no idea what the experience of a two-fingered typist in Vim would be. Are you a two-fingered Vim typist? What has your experience been like? EDIT: I'm not sure if my question was clear enough. Maybe it's my fault, I don't know. I get mixed replies and other questions (why do you write this? what does one have to do with the other?) instead of empirical info (I don't touch-type and it's been (fine|hell)). Some programmers touch-type, others search and peck. In the middle there's Vim, which requires a certain affinity with keys to do various operations. I am a touch typist and I have no clue what my experience with the editor would have been like if I wasn't. I can't honestly picture myself pecking some of these combos. But like I said, I don't know what it is like. Before telling someone to start using Vim, I'd like to know if I should dismiss touch-typing as a misconceived requirement. So, I'll rephrase the question: have you felt that not being a touch typist has impeded your experience with Vim?

    Read the article

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

        * Starting domain name service... bind9    [fail]

    So I checked the output of syslog and this is what I get:

        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here is the contents of named.conf:

        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
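    For reference, BIND's grammar requires a terminating semicolon after every statement and after the closing brace of every block, and a semicolon missing at the end of an included file often gets reported against the line of the parent file where parsing resumes. A minimal named.conf.local zone entry with the semicolons in place (zone name and file path are hypothetical):

        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
        };  // the semicolon after the closing brace is required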

    Read the article

  • Brick-level backup and restore with Exchange 2007

    - by V. Romanov
    In the company I'm working for, we use Exchange 2007 and back it up using NetBackup. The backup is a daily complete backup of the information store, and the direct corollary of this is that restores are hell. We need to restore the entire information store (over 80 GB) and somehow merge it back with the original store, which causes problems. Alternatively, we tried using Quest software to emulate Exchange and restore mails from the emulation. However, this proved unreliable. The main problem with this entire situation is that we have to restore the whole information store and walk it through the restore process manually, and it's quite absurd to be forced to spend more than a day restoring even one erased email. (We have erased-mail retention, but sometimes we need to restore older mail.) In comparison, back in the days of Exchange 2003 and Backup Exec 12, we had complete brick-level backup and restore at the push of a button. I've spoken to one of our chief sysadmins, who claimed that the official response from Microsoft to this issue was "sorry guys, no brick-level backup in Exchange 2007", which sounds ridiculous to me. Can someone shed some light on the situation? How do you back up your Exchange 2007 stores? Can you restore a single email quickly? A mailbox, perhaps?

    Read the article

  • How can I split 200Mbps of streaming traffic into routers?

    - by Jared
    As the title says, I have 200 Mbps of streaming video traffic coming into my command center. How do I split the load between routers? The setup is like this: fiber --- router --- switch --- workstations. I'm sorry, I haven't dealt with this much traffic before, so please be gentle if you're going to kick me out :) EDITED FOR DETAILS: Okay, this specific project is for our company's IP CCTV system. We have deployed over 100 cameras all over a building/campus, and we have estimated each camera to take about 2 Mbps of bandwidth. Now, they're all connected to a switch, and that's entirely fine. But coming into our command center they have to be on a router, since it'll grow to more than 200 cameras next year (and I don't want to have too many hosts on one subnet). My plan was to have the first hundred on a 172.16.9.x block and the second hundred on a 172.16.10.x block (all /24). The servers I have are currently sized to match (about 5 dual 6-core Xeons), and I'd have about 19 workstations all streaming video from the 5 servers (the servers pull video from the cameras). But 200 Mbps of constant traffic? How the hell do I even break this up? I need to have one gateway to manage the routes... I honestly think I'm way in over my head.

    Read the article

  • 100% iowait + drive faults in dmesg

    - by w00t
    Hi, I have a server hosting a fairly heavily visited web app. It has a RAID1 of 2 HDDs (64 MB buffer, 7200 RPM). Today it started throwing out errors like:

        kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        kernel: ata2.00: cmd b0/d0:01:00:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
        kernel:          res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
        kernel: ata2.00: status: { DRDY }
        kernel: ata2: hard resetting link
        kernel: ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        kernel: ata2.00: max_sectors limited to 256 for NCQ
        kernel: ata2.00: max_sectors limited to 256 for NCQ
        kernel: ata2.00: configured for UDMA/133
        kernel: sd 1:0:0:0: timing out command, waited 7s
        kernel: ata2: EH complete
        kernel: SCSI device sda: 976773168 512-byte hdwr sectors (500108 MB)
        kernel: sda: Write Protect is off
        kernel: SCSI device sda: drive cache: write back

    All day long it has been at load higher than 10-15. I am monitoring it with atop and it gives some bizarre readings:

        DSK | sda | busy 100% | read  2 | write 208 | KiB/r 16 | KiB/w 32 | MBr/s 0.00 | MBw/s 0.65 | avq 86.17 | avio 47.6 ms |
        DSK | sdb | busy   1% | read 10 | write 117 | KiB/r 17 | KiB/w  5 | MBr/s 0.02 | MBw/s 0.07 | avq  4.86 | avio 1.04 ms |

    I frankly don't understand why only sda is taking all the hit. I do have one process that is constantly writing 1-2 MB/s, but what the hell... 100% iowait?
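    Given ATA link resets like these, one sanity check worth doing before blaming the workload is to ask the drive itself about its health. A sketch, assuming smartmontools is installed:

        # full SMART report: watch Reallocated_Sector_Ct, Current_Pending_Sector
        # and the device error log near the bottom of the output
        smartctl -a /dev/sda
        # optionally run a short self-test and re-check with -a afterwards
        smartctl -t short /dev/sda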

    Read the article

  • Exchange 2007 OWA returns blank page with url xxxxx&reason=0

    - by Dayton Brown
    Hi all: I've just run into an issue with my Exchange OWA. It returns a blank page with the URL string https://www.xxxxxxxx/&reason=0. Nothing in the logs gives me any good reasons. Here's what I've done so far:
    1) Reinstalled Exchange Rollup 7.
    2) Recreated the virtual directories.
    3) Rebooted (this was mostly a shot in the dark, but what the hell).
    Exchange via RPC/HTTPS is still working great. Anyone run into this before? EDIT: Here is the last snippet from the OWA setup log; doesn't look like anything blew up.

        [09:45:36] *******************************************
        [09:45:36] * UpdateOwa.ps1: 5/27/2009 9:45:36 AM
        [09:45:40] Updating OWA on server HOMER
        [09:45:40] Finding OWA install path on the filesystem
        [09:45:40] Updating OWA to version 8.1.375.2
        [09:45:40] Copying files from 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\Current' to 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\8.1.375.2'
        [09:45:41] Getting all Exchange 2007 OWA virtual directories
        [09:45:42] Found 1 OWA virtual directories.
        [09:45:42] Updating OWA virtual directories
        [09:45:42] Processing virtual directory with metabase path 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'.
        [09:45:42] Metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' exists. Removing it.
        [09:45:42] Creating metabase entry IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2.
        [09:45:42] Configuring metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'.
        [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'
        [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'

    Read the article

  • Some can reach bidmail.com, others can't.

    - by user69426
    On a Windows 7 Professional machine, one of our estimating assistants can't get to www.bidmail.com in Chrome; the other three can. On his machine I ran nslookup for bidmail.com and it failed to find it. I then went to a machine that could reach bidmail and ran nslookup there: it can't find it either. I was skeptical and thought maybe it was a cached page, so I cleared the cache, went back to bidmail.com and was still able to get to the page, log in, look up a newly posted bid and download the file. Yet I cannot resolve www.bidmail.com through nslookup, can't ping it and can't trace it. I remoted to our other warehouse, which is set up as a workgroup, and attempted to nslookup bidmail; that nslookup failed too... and yet that machine, which has never been to bidmail before, was able to connect to the website! I am totally confused: if I can't ping it and can't use nslookup to get there, how in the hell is Chrome getting to the page, and how do I get this guy back on? Also, while typing this, I took a new laptop out of the box, plugged it in with no updates, and it can get to bidmail! omg!
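    One way to split the problem in half is to query a specific resolver directly and compare it with the machine's configured one; if an outside resolver answers while the default one doesn't, the issue is the local DNS chain rather than the site. A sketch from a Windows command prompt (8.8.8.8 is just an arbitrary public resolver):

        rem the machine's default resolver, then an outside one
        nslookup www.bidmail.com
        nslookup www.bidmail.com 8.8.8.8
        rem what the local resolver cache currently holds for the name
        ipconfig /displaydns | find "bidmail"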

    Read the article

  • Overheating Toshiba Satellite L300

    - by ldigas
    A colleague of mine is having trouble with his new Toshiba Satellite L300... I took a look at it, and indeed it's hot as hell. I couldn't hold my hand on it for too long. He says it also has a tendency to turn itself off (it's running 32-bit WinXP) with no forewarning. That hasn't happened to me while I was using it, but that wasn't long anyway. The first guess was that it was too dirty... the problem is, it's new; it came out of the package a quarter of a year ago, kept in a clean environment (an office). It looks clean, no dust in sight. The second guess is that the fan isn't working properly, because it does indeed alternate between intervals of working and not working. But when I listen to it, it sounds like normal usage. I took a SpeedFan measurement, and it reports temperatures up to 85 °C... which is definitely too high. Does anyone know what else I could do with it? It is under warranty and it will go in for service, but I thought I'd ask in case there is something we can do ourselves, so as to avoid carrying it there and being without it for a week...

    Read the article

  • Install PHP mcrypt on Red Hat 4

    - by Chris
    I'm having a very hard time getting mcrypt for PHP installed on a Red Hat 4 server. I've downloaded the rpm, but it tells me:

        error: Failed dependencies:
            php-common(x86-32) = 5.4.7-2.fc18 is needed by php-mcrypt-5.4.7-2.fc18.i686
            rpmlib(FileDigests) <= 4.6.0-1 is needed by php-mcrypt-5.4.7-2.fc18.i686
            libc.so.6(GLIBC_2.4) is needed by php-mcrypt-5.4.7-2.fc18.i686
            libltdl.so.7 is needed by php-mcrypt-5.4.7-2.fc18.i686
            rtld(GNU_HASH) is needed by php-mcrypt-5.4.7-2.fc18.i686
            rpmlib(PayloadIsXz) <= 5.2-1 is needed by php-mcrypt-5.4.7-2.fc18.i686

    So when I try to install one of those packages, it in turn requires another 8 packages, and I'm diving into dependency hell here. Now, if I try to compile mcrypt from source, this is what I get:

        checking for libmcrypt - version >= 2.5.0... no
        *** Could not run libmcrypt test program, checking why...
        *** The test program failed to compile or link. See the file config.log for the
        *** exact error that occured. This usually means LIBMCRYPT was incorrectly installed
        *** or that you have moved LIBMCRYPT since it was installed. In the latter case, you
        *** may want to edit the libmcrypt-config script: no
        configure: error: *** libmcrypt was not found

    But I was able to install libmcrypt from an rpm package successfully. Any suggestions? Also, I cannot use up2date, as it requires an active paid account from Red Hat, and since the staff has changed rather rapidly in the last year where I work, no one knows if there even were any support accounts.
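    Worth noting: the ".fc18" in those package names means that rpm was built for Fedora 18, which goes a long way towards explaining the unsatisfiable dependency chain on Red Hat 4. For the source route, the configure failure comes from the libmcrypt-config detection script, so the libmcrypt headers (the -devel package) have to be installed and libmcrypt-config findable on the PATH first. A sketch of building just the PHP extension with phpize (the PHP version and install prefix are assumptions; match them to what's actually on the server, and phpize requires the php-devel package):

        # from a PHP source tree matching the installed PHP version
        cd php-x.y.z/ext/mcrypt
        phpize
        ./configure --with-mcrypt=/usr   # prefix where libmcrypt and its headers live
        make && make install             # then add extension=mcrypt.so to php.ini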

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry the title of this question is a little ambiguous but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, new operating system (Win2008), new Plesk installation (v9.5), new software (MSSQL etc) then all data ported over manually from old C and D drives to restore all 30 client sites. It was hell! All has been okay for a couple of days now but about an hour ago POP! Suddenly all sites went down giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again. The guys on support gave me the following errors from the server log: The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.. The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code. The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code. The support guys are so ambiguous about this and it scares me horribly. Can anyone positively identify the cause of this error which lead to all client website going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...

    Read the article

  • Mac Backup Plan

    - by Chuy77
    I'm reviewing my backup plan and would appreciate any thoughts about what more I should do (if anything) to make sure I'm properly covered in case of all hell breaking loose. :-) I have one machine. 1) I run a nightly clone with SuperDuper. I alternate the clone drive weekly so I have two clones, one never more than a week old. 2) I use BackBlaze as a sort of Time Machine in the cloud. It runs all the time and keeps everything on my machine backed up online. 3) I sync all my 1Password logins, etc. to my iPhone once a week. ...And that's it. I feel pretty covered. But I'm always reading stuff like this: http://www.43folders.com/2010/03/15/yes-another-backup-lecture And that doesn't even mention online backup, and seems like a huge pain in the behind. But maybe I'm being naive? Should I have more backups? Thanks for any feedback. I really appreciate it.

    Read the article
