Search Results

Search found 2200 results on 88 pages for 'human factors'.


  • Official end of support date for Internet Explorer 6 on Windows XP

    - by scunliffe
    If I read the docs on Windows Service Pack support policies, the Internet Explorer lifecycle support page, and the Wikipedia page, I've deduced the following end-of-support dates for IE6:

    - Windows 2000: ended (date unknown)
    - Windows XP SP0 (RTM): ended. Home: 30-Aug-2003; Pro: 30-Sep-2004
    - Windows XP SP1: ended. Home: 11-Jul-2004; Pro: 11-Jul-2004
    - Windows XP SP2: Home: 13-Jul-2010; Pro: 13-Jul-2010
    - Windows XP SP3 (released 21-Apr-2008): Home: ???; Pro: ???

    What isn't clear is the Windows XP SP3 scenario. In "human" terms, when is the end of support for IE6 on Windows XP SP3 - e.g., if there is never a Windows XP SP4, or, heaven forbid, an SP4 is released? I realize this doesn't force people to upgrade, etc.; however, I'm trying to get a semi-official word on when IE6 moves into the "not supported" category. I'm not interested in philosophical answers (e.g., "big enterprise won't upgrade but they will expect support into 2017" stuff); I just want the clear answer in terms of official Microsoft support.

    Read the article

  • Linux RFID reader HID Device not matching driver

    - by blietaer
    Hello, I have an RFID reader (GigaTek PCR330A-00) that is meant to be recognized under Linux/Windows as a keyboard-style USB HID (Human Interface Device). I hate to say this, but it works like a charm under Win7 and not really under Linux. Under Debian-like distros (x/K/Ubuntu, Debian, ...), or Gentoo, or others, I just can't get the device working at all: the device scans fine (it has its USB 5V, so it is happily beeping/blinking) and something shows up in dmesg, but the RFID tag code is never typed to the screen as expected (and as seen under Win7). Support claims it is OK under RHEL or SLED "enterprise" distros, and I must admit I saw it working under a RHEL4. I tried copying the driver over but did not succeed in getting my reader working. My question is thus twofold: 1. How can I make the kernel support my device (simply register its PID/VID?)? 2. What is different in an "enterprise" proprietary distro, and how can I reuse it? Thank you for any hint/help. Cheers,
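
    For debugging, here is a minimal sketch of how I would check whether the kernel delivers any HID reports at all, assuming the reader gets a /dev/hidrawN node (check dmesg for the actual one; the node below is a placeholder):

    ```python
    # Minimal sketch: dump raw HID reports from the reader to see whether the
    # kernel receives data at all. /dev/hidraw0 is a placeholder; check dmesg
    # for the real node, and run as root or adjust udev permissions.
    DEVICE = "/dev/hidraw0"

    with open(DEVICE, "rb") as dev:
        print(f"Reading HID reports from {DEVICE}; scan a tag (Ctrl-C to stop)")
        while True:
            report = dev.read(8)  # keyboard-style reports are typically 8 bytes
            print(" ".join(f"{b:02x}" for b in report))
    ```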

    Read the article

  • ADSL to T1: is it worth it for us?

    - by Jack Hickerson
    The company I work for has roughly 45-55 simultaneous users (local and remote/VPN) logged in at any given time. We currently subscribe to an ADSL connection, but we have been experiencing slower upload/download speeds as our number of users increases. So I have a few questions with regard to upgrading our connection to a T1 line. I am aware that the number of channels on a T1 line is much greater than on our current ADSL connection, but I have heard that the number of active users on a T1 line should be no greater than ~30 for optimal performance. I would think this statement depends on what each user is using the connection for. That being said, I have tried to break down how the line would be used in our organization, based on our major departments:

    - Sales (~60% of total users): everyday surfing, email, research, occasional streaming media
    - Marketing (~15% of total users): heavy reliance on uploading/downloading, streaming media, file sharing
    - Other (~25% of total users): email, rare use of any connection-intensive activities

    I have considered keeping the ADSL for our local users and dedicating the T1 to our remote users (or vice versa), but the cost is significantly higher than what we had hoped for. All factors being equal (number of users, frequency of downloads/uploads from our current activities), would you expect a significant performance increase in making the transition from our current ADSL line to a T1 line? What are your thoughts or recommendations?
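
    For reference, here is my rough back-of-envelope math; the 30% concurrency figure is a pure assumption for illustration:

    ```python
    # Back-of-envelope per-user bandwidth estimate. The T1 rate is the standard
    # 1.544 Mbps; the concurrency fraction is an assumption, not a measurement.
    T1_MBPS = 1.544
    users_total = 50          # midpoint of 45-55 simultaneous logins
    active_fraction = 0.3     # assume ~30% are moving traffic at any instant

    concurrent = users_total * active_fraction
    print(f"~{T1_MBPS / concurrent:.2f} Mbps per actively transferring user")
    # ~0.10 Mbps each with 15 concurrent transfers: thin for streaming media,
    # workable for email and light browsing.
    ```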

    Read the article

  • Expanding to dual video cards

    - by Anthony Greco
    I know a lot of factors come into play here, so I will list my current hardware and setup:

    - MOBO: GIGABYTE GA-890FXA-UD5 [http://www.newegg.com/Product/Product.aspx?Item=N82E16813128441]
    - Processor: AMD Phenom II X6 1090T Black Edition Thuban 3.2GHz [http://www.newegg.com/Product/Product.aspx?Item=N82E16819103849]
    - RAM: G.SKILL Ripjaws Series 16GB (4 x 4GB)
    - Current video card: EVGA 01G-P3-1366-TR GeForce GTX 460 SE [http://www.newegg.com/Product/Product.aspx?Item=N82E16814130591]
    - OS: Windows 7 Ultimate x64

    Currently I can run 2 monitors just fine with my setup. However, I want to upgrade this to 4 monitors. My question is: what is the best way to do this? I remember reading in the past that I need the same type of video card; would any GeForce GTX work, or do I need that very specific model (EVGA 01G-P3-1366-TR GeForce GTX 460 SE)? Are there any issues I should be aware of before I order 2 new monitors and a video card? Are there video cards better suited for this? I know NVIDIA offers SLI, but I do not know if my mobo supports it. My mobo also offers a CrossFireX configuration, though from what it says only Radeon cards are supported. Any suggestions/feedback on the best route with my current setup is appreciated. Even if you suggest buying 2 new identical video cards, as long as you mention which and why that is better, I really appreciate it. Note: I really do not do any gaming. I sometimes do some 3D work in Unity and very rarely in Maya. Besides that, I do almost all my computer work in Visual Studio and Photoshop. However, I need the 2 extra monitors because I sometimes monitor 5 remote desktops at once, and switching among them on only 2 screens is becoming a very big pain. Also, seeing 3 side by side while I work on the 4th will be very helpful. Again, I appreciate any feedback, as I have googled a bunch and just want to make sure what I buy will work.

    Read the article

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500GB to 1TB. I have a budget, and several models fit into it. So far they only seem to vary in size, speed, and brand; these are the only things I can compare from the specs. I guess asking which brand is best is completely subjective, so I won't do that. I want my disk to have a long life and be reliable; it doesn't matter if it is somewhat slow.

    Size: Should I go for the largest size within my budget, or for a moderately sized one? Will higher density cause problems? Does the number of platters have an impact?

    Speed: I do not want high performance; I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200 rpm? Do these numbers affect longevity and reliability?

    Are there any other technical and objective factors that I should look for?

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an unauthoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary search). Normally a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source-code changes); it fits in memory; and it can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department. These all hit their own platform's version of their databases (containing their own copy of the static data). The result is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases and, of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected, because sometimes I have to help fix the problems, and mostly because it causes outages and tons of extra IT time for other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! In general these are not single-person failures but understandable miscommunications, and communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically unmaintainable. Impossible. No chance - at least with human beings in the loop - and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue and are not convincible on the matter.

    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase all database-related effort from this section of the project, along with all the drawbacks that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitability of the database solution, and do it not using logic but authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE instead of a flatfile? If you need...
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / many-many relationships made easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / its learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / developed & cached execution plans
    - Interleaved update/read
    - Referential integrity; avoiding redundant/missing/corrupt/out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools

    Disadvantages:
    - Important to get right the first time - professional design - but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, so a speed issue (even with the best design, vs. local: a separate process requires marshalling / network layers / inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flatfile)

    WHY USE A FLATFILE? If you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only updating the record you're reading (fixed-length records only)
    - Data too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / storage as a document by a novice - simple format
    - Low design learning curve (but high cost later)

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
    - Processing a single db/flatfile record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons: a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. This will have a better chance of eliciting the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
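
    For concreteness, the in-source alternative I mean looks roughly like this auto-generated module, searched with the stdlib bisect module (the keys and values are illustrative only):

    ```python
    # Hypothetical auto-generated module: a static, sorted lookup table searched
    # by binary search via the stdlib bisect module. Regenerated (with the usual
    # code review) on the rare occasions the source data changes.
    import bisect

    # (key, value) pairs, sorted by key at generation time; illustrative only
    _KEYS = ["AK", "AL", "AR", "AZ"]
    _VALUES = ["Alaska", "Alabama", "Arkansas", "Arizona"]

    def lookup(key: str) -> str:
        """Return the value for key, or raise KeyError if absent."""
        i = bisect.bisect_left(_KEYS, key)
        if i < len(_KEYS) and _KEYS[i] == key:
            return _VALUES[i]
        raise KeyError(key)

    print(lookup("AZ"))  # -> Arizona
    ```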

    Read the article

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit to the number of sites you can create in IIS? I have searched, and some forum discussions say there is no limit. Someone mentioned that he created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally I feel that, whatever the limit of IIS, the server's resources will run out well before the limit is reached. How do big services like Blogger and WordPress handle a huge number of sites on their servers? Questions: 1) Is there an upper limit for IIS 6.0? If yes, what is it? 2) What is a good number of requests IIS should serve for a decent server? (I am not talking about dynamic requests on the server or logs.) 3) Is there a way I can do a test run on my cloud to test the capacity of my server, e.g., with a load test like the one sketched below? What factors should I keep in view: db requests, page size, disk reads/writes, etc.? A response would be highly appreciated.
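
    For question 3, this minimal sketch is the kind of test run I have in mind; the URL and concurrency are placeholders, and I would aim it only at a staging box:

    ```python
    # Minimal load-test sketch: fire N concurrent GETs at a test site and report
    # latency percentiles. URL and concurrency are placeholders; aim this at a
    # staging host, never at production.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://staging.example.com/"   # placeholder test host
    REQUESTS = 200
    CONCURRENCY = 20

    def fetch(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        times = sorted(pool.map(fetch, range(REQUESTS)))

    print(f"median {times[len(times) // 2] * 1000:.0f} ms, "
          f"p95 {times[int(len(times) * 0.95)] * 1000:.0f} ms")
    ```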

    Read the article

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position where I do helpdesk stuff and predominantly security-related issues/consulting (in the loosest sense of the term), in-house and for service-contract clients (as the only CCSP - I guess I should say the only person with Cisco experience - in my organization). I've professionally written kernel-mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as reasonably well qualified as a system administrator, with excellent Cisco experience, among other things I think would make me a good addition to almost any IT staff in need of a new employee. However, something has always tripped me up: Human Resources. Let me explain. I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and worked with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals, they can rattle something off about Turing completeness, but rarely do they understand the mathematical underpinnings. In short, instead of going to college, I'd rather pick up some real-world experience. Apparently, however, employers rarely think the same way. A quick perusal of jobs through the standard job search engines yields nothing short of a conspiracy to exclude anyone without 'a Bachelor's degree in Computer Science or equivalent'. Interviews I've had in the past have almost always gotten entangled with 1. my age (which I can't really change) and 2. my lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, and my extensive experience in x86 assembly and C, among so many other things I like to think are valuable to employers, in light of the fact that I don't have a piece of paper. So, as an employer: is it even worth working on my CCIE? Should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a mathematics or CS degree?

    Read the article

  • Scriptable BitTorrent clients?

    - by James McMahon
    In an effort to further automate all the little computer housekeeping tasks that can waste my time, I am looking into BitTorrent clients that can script common tasks. I've done some Googling, and it looks like Transmission might have some such capabilities, but their site wasn't very clear on the details. Things I am looking to do:

    - Prioritize and label torrents based on trackers
    - Set seed length based on tracker and file size
    - Set additional seed time when a torrent's seed time expires, based on a number of factors like time spent seeding, remaining disk space, and ratio
    - Move torrents to appropriate places post-seeding, based on labels and tracker

    Basically, while I could write Python or Bash scripts for things like moving torrents around and other simple actions, I need a way to talk to the client to find out things like a torrent's seed time, tracker, labels, file size, etc. Is there any client out there that would allow me to do all, or a subset, of these actions? I have access to Linux, Mac, and Windows and am not tied to any particular torrent client. I am a programmer, so I have no problem writing scripts, but examples of torrent scripting would also be helpful.
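
    For concreteness, Transmission does expose a JSON-RPC interface I could script against; here is a minimal sketch (the port is the usual default, and the exact field names are my assumption to verify against the RPC spec for your client version):

    ```python
    # Minimal sketch against Transmission's JSON-RPC interface (default URL
    # http://localhost:9091/transmission/rpc). Field names follow the RPC spec;
    # verify them against your client version.
    import json
    import urllib.error
    import urllib.request

    RPC_URL = "http://localhost:9091/transmission/rpc"

    def rpc(method, arguments, session_id=""):
        req = urllib.request.Request(
            RPC_URL,
            data=json.dumps({"method": method, "arguments": arguments}).encode(),
            headers={"X-Transmission-Session-Id": session_id},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as e:
            if e.code == 409:  # server replies 409 with the session id to use
                return rpc(method, arguments, e.headers["X-Transmission-Session-Id"])
            raise

    data = rpc("torrent-get",
               {"fields": ["id", "name", "uploadRatio", "secondsSeeding"]})
    for t in data["arguments"]["torrents"]:
        if t["uploadRatio"] >= 2.0:  # example policy: done seeding at ratio 2
            print("would stop/move:", t["name"])
    ```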

    Read the article

  • MSSQL error: consistency-based I/O error - can it be caused by an MSSQL or OS problem?

    - by Philipp Keller
    This is what I saw in the Windows error log: SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x19fedd20; actual: 0x19fed5e3). It occurred during a read of page (1:1764) in database ID 6 at offset 0x00000000dc8000 in file 'D:\mssql\local_repository_pbdiffimport.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online. I ran DBCC CHECKDB, which told me I should restore with the REPAIR_ALLOW_DATA_LOSS option, so I eventually ran DBCC CHECKDB (my_db_name, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS. But that resulted in about 2,000 rows being lost. I restored a backup, but now I'm afraid this will happen again, since we already had a consistency problem in the same database about 2 weeks ago - but then it happened in an index (recreating the indexes solved the problem). We have investigated the discs: the RAID5 looks good, no errors, and none of the disc-check utilities have revealed any hardware problem. Can this be caused by the OS (Windows Server 2003) or by MSSQL (SQL Server 2005)?

    Read the article

  • What are the biggest, best CPUs that support Physical Address Extension?

    - by Giffyguy
    I'm looking for a CPU that will support PAE and fit into an LGA775 socket. This combination is very much preferred for my current server hardware/software setup. My priorities, in order of highest to lowest:

    - PAE & LGA775
    - At least 1066 MHz FSB
    - Largest CPU cache possible
    - Multiple cores, if possible
    - Hyper-Threading, if possible

    Most other factors are of little to no consequence. I'm finding it very difficult to figure out what my options are. Intel doesn't have much useful information on PAE (since x64 is so dominant), and Wikipedia simply says that "PAE is provided by Intel Pentium Pro (and above) CPUs - including all later Pentium-series processors except the 400 MHz bus versions of the Pentium M." All of Intel's listed Pentium CPUs support Intel 64, which makes me seriously doubt they will support PAE with a 32-bit OS. And Wikipedia's claim is so vague that I have no idea if it means up to and including the x64 Prescott CPUs. PAE is supposed to be an aspect of the x86 architecture, and I believe it is no longer supported in an x64 environment. Please correct me if I am wrong.
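
    As a side note, the surest way to settle whether a given chip reports PAE is to check its CPU feature flags; here is a minimal sketch for a Linux box (a tool like Sysinternals Coreinfo reports similar information on Windows):

    ```python
    # Minimal sketch: check the CPU feature flags Linux exposes in /proc/cpuinfo.
    # PAE and 64-bit support ('lm', long mode) are independent flags, so this
    # shows whether a given chip reports both.
    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("PAE supported:          ", "pae" in flags)
    print("64-bit (Intel 64/AMD64):", "lm" in flags)
    ```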

    Read the article

  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically of the pattern: File does not exist: /var/www/file, referer: http://my.server.com/file. In human terms: the file was not found, even though it names itself as the referer. That is hardly possible, so this is a clear hacking attempt (and the REQUEST_URIs often enough suggest the same). In my eyes, a clear case for fail2ban - if I could get backreferences to work here: failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$ (Just in case: the above example assumes the DocumentRoot of the webserver is /var/www.) I googled for hours and searched the fail2ban wiki up and down, but nowhere could I find a statement concerning backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints on how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod_rewrite and then catching on that - if anyone is interested, I can elaborate on that approach in an answer - or doing something similar using mod_security)? As an entire log line was requested: [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found; (sorry, logs were just switched, so this long candidate was the only one left currently; minor adjustments were made for privacy reasons)
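
    To check the mechanics offline: fail2ban filters compile down to Python regular expressions, so the backreference can at least be tested in plain re first. A minimal sketch, where the log line and client prefix are simplified stand-ins for the real %(_apache_error_client)s expansion:

    ```python
    # Minimal sketch: fail2ban filters compile down to Python regexes, so test
    # the backreference in plain re first. The line and the client prefix are
    # simplified stand-ins for the real %(_apache_error_client)s expansion.
    import re

    line = ("[Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] "
            "File does not exist: /var/www/text/files.htm, "
            "referer: http://www.myserver.com/text/files.htm")

    pattern = (r"\[client (?P<host>\S+)\] File does not exist: /var/www(.+), "
               r"referer: http://[^/]+\1$")

    m = re.search(pattern, line)
    if m:
        print("matched host:", m.group("host"), "self-referring path:", m.group(1))
    ```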

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with a sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", and so on.

    At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation.

    So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link.

    I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
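
    The reader side would be trivial to prototype; here is a minimal sketch of the idea (the .filing name is just my proposed convention, not an existing standard):

    ```python
    # Minimal prototype of the proposed convention: print a directory's .filing
    # notes, if present, before listing its contents.
    import os
    import sys

    def describe(path):
        doc = os.path.join(path, ".filing")
        if os.path.isfile(doc):
            with open(doc, encoding="utf-8") as f:
                print("--- filing notes ---")
                print(f.read().strip())
                print("--------------------")
        for name in sorted(os.listdir(path)):
            print(name)

    describe(sys.argv[1] if len(sys.argv) > 1 else ".")
    ```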

    Read the article

  • Is Ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. A quick blurb from Wikipedia: "Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system," and "Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch." So, a few questions: How has the stability been? Any odd issues you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems, and so far it has been working as advertised, but I am interested in other sysadmins' experiences with Ksplice before going 'all in' and deploying it on our production servers. So, anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding-edge, unproven, or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only on VPS (or even only on dedicated servers). Why is that? While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just initial and support license cost), it also provides ASP.NET, IIS, and of course Microsoft SQL Server. I know in the past it might have been because cPanel was Linux-only, but now it has a Windows version. So why is Linux still predominantly used for shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl, and Ruby all run on Windows as well. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low-usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell. EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or cloud offerings, which give you a full operating system to use.

    Read the article

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet with a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and then outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula in a different spreadsheet, to calculate a set of input values and produce the corresponding set of output values. Imagine this as creating a different table with 10 columns for the input variables and 5 columns for the outputs, then copying each input into the other spreadsheet and copying back the output into the results table. For instance: A1, A2, A3, ... A10 are cells where someone enters values; through a series of calculations, B1, B2, B3, B4, and B5 are updated by formulas. Can I feed the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 100 rows, from A100, B100, C100, ... J100 onward. Then do some Excel magic that will:
    1. Copy the values from A100...J100 into A1 to A10
    2. Wait for the result to appear in B1 to B5
    3. Copy the values from B1 to B5 into K100 to O100
    4. Repeat steps 1 to 3 for all rows from 100 to 150
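
    To show the kind of loop I mean, here is a minimal sketch that drives the workbook from outside using the xlwings library; the workbook name is a placeholder, and it assumes Excel is installed and recalculates on write:

    ```python
    # Minimal sketch: treat the whole workbook as a function. For each row of
    # inputs in A100:J149, write them into A1:A10, let Excel recalculate, then
    # copy the five results from B1:B5 into K:O of the same row.
    # Requires Excel plus the xlwings package; "model.xlsx" is a placeholder.
    import xlwings as xw

    wb = xw.Book("model.xlsx")
    sheet = wb.sheets[0]

    for row in range(100, 150):
        inputs = sheet.range((row, 1), (row, 10)).value     # A{row}:J{row}
        sheet.range("A1:A10").options(transpose=True).value = inputs
        wb.app.calculate()                                  # force a recalc
        results = sheet.range("B1:B5").value                # the 5 outputs
        sheet.range((row, 11)).value = results              # spills K{row}:O{row}

    wb.save()
    ```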

    Read the article

  • How do you make Google's interface always be in your chosen language?

    - by Michael Wolf
    Google's interface and search results don't always appear in my preferred language, English. I'm located in Mexico City and, although I generally have no problem with Spanish, I would prefer search results in English most of the time. (The exception is when I'm using search terms in Spanish.) I'd also prefer the interface to be in English, but that's far less important to me than the search results. Google looks at your IP to decide where you're coming from and thus what language to present results in. So when I type www.google.com into the URL bar, it redirects me to www.google.com.mx. Is there a way to force Google to use one language all the time? Here are some things I've done and tried: 0) I have a Google account, and I've configured it so that it should know English is my preferred language. I don't often explicitly log out of Google, so Google generally knows it's me, and my preferences, when I access its services. 1) I've configured my browser to ask for pages in English. Very few sites support this feature at all; Google isn't one of them. 2) From www.google.com.mx, I can click on "Google.com in English". This works until, I think, I close the browser. 2a) From www.google.com.mx, I can go into account configuration, which is in English. From then on, everything's in English. 3) I can append &hl=en ("hl" sets the interface language) to the end of the URLs of result pages. 2, 2a, and 3 all "work", but they're all mildly annoying; I'd rather avoid them if I could. (At the risk of stating the obvious: English and Spanish are the languages I'm dealing with, but I imagine that, say, a francophone using Google from Korea would run into basically the same issue.)

    Read the article

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear strike? I can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

    - Cost of reputational damage if down: very high
    - Probability of a hardware failure with my setup: <<1%
    - Probability of a hardware failure with a less paranoid setup: <<1% as well
    - Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down.)

    In other words, I feel like I could host a cheap laptop in my mother's flat and the human/software problems would still be my bigger risk. Of course, there are other things to take into consideration, such as scalability, data security, and the clients' expectation that you meet the industry standard. But still, hosting two servers in two different data centers (without extra spare servers or doubled network equipment, apart from what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?
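
    To put numbers on that closing question, here is a quick sketch of the downtime each availability class budgets per year, next to my own rough 1% software-failure estimate:

    ```python
    # Downtime budget per year for each availability class, next to my rough
    # "down 1% of the time because of software bugs" estimate.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.99, 0.999, 0.9999, 0.99999, 0.999999):
        down = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.4%} uptime -> {down:9.1f} minutes down per year")
    # 99.999% budgets ~5 minutes/year and 99.9999% ~32 seconds/year -- both
    # dwarfed by the ~88 hours/year that a 1% software-failure rate implies.
    ```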

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going: memorizing addresses. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it, which makes it easy for us to memorize. A fully populated IPv6 address not only has 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread, which in turn makes them much less enthusiastic about participating in an IPv6 deployment project. Because of how our network works, we're not fully dynamic in terms of v4 addressing. We have several-to-many subnets that are entirely statically assigned, for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done. We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6, even if we don't convert wholesale. For those of you who have been in IPv6-land for a while: what shortcut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6, we might get the project going.
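
    One mitigating factor I can see already: we control the host bits of static assignments, so they can be made memorable. A minimal sketch with Python's stdlib ipaddress module (2001:db8::/32 is the documentation prefix, standing in for a real company prefix):

    ```python
    # Zero-compression means hand-assigned static hosts can be as memorable as
    # the old "42.42" habit. 2001:db8::/32 is the documentation prefix, standing
    # in for a real company prefix.
    import ipaddress

    net = ipaddress.IPv6Network("2001:db8:42::/64")
    dns = ipaddress.IPv6Address("2001:db8:42::53")  # host bits picked to be memorable

    print(dns.compressed)    # 2001:db8:42::53
    print(dns in net)        # True
    print(ipaddress.IPv6Address("2001:0db8:0042:0000:0000:0000:0000:0053"))
    # prints 2001:db8:42::53 -- the fully populated and short forms are equal
    ```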

    Read the article

  • Best console-based text editor, not only for programmers [closed]

    - by robo
    I need a console-based text editor for writing both source code and human-readable text such as emails. I need it to be user-friendly, which for me means:

    - You can use it the same way as Notepad or gedit.
    - You can use the mouse in it.
    - If you need your mother or girlfriend or somebody to edit your text, they will know what to do; they will not realize it is a console and will have the feeling it is something like Notepad.
    - Copy, paste, and undo work as usual, with the usual key combinations (Ctrl-C, Ctrl-V, Ctrl-Z).
    - Shift and arrows work as usual: they select the text.

    And when I return to the computer, I want to use the text editor for programming. I expect:

    - Syntax highlighting
    - Auto-indenting
    - Replacing spaces with tabs
    - Keyboard shortcuts for compiling
    - The possibility to configure it to use a debugger
    - Autocompletion for C#, Java, C++, and other languages
    - Other things I expect from IDEs

    I have been working with and configuring Vim for a few years, but it has never fulfilled all of my expectations (though it almost did). I think I could get Vim configured perfectly if I had a few more weeks to spend on it; unfortunately, I cannot afford to be configuring Vim forever. Is there an alternative - hopefully some editor I set up once and it works forever? What do you use? I often hear people are using Emacs. Is it worth learning?

    Read the article

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Ubuntu Linux server. Locally, I am running Windows XP, so I connect to the remote Linux server using SSH (specifically, a Windows program called SSH Secure Shell Client, version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think it fails because my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the connection drops for a second or two during that long window. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer; in other words, sometimes I get lucky and the download finishes before the connection hiccups. Is there any way I can download the file intelligently, whereby the operating system or software "knows" where it left off and can resume from the last point if the connection breaks? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files - I think this would be very difficult, since the file is binary and not human-readable. As it is now, if the entire ~35 GB download doesn't finish before the break in the connection, I have to start over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
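
    One direction I am considering is rsync over SSH, which keeps partial transfers and resumes them; a minimal retry-loop sketch, assuming an rsync build is available on the Windows side (e.g., via Cygwin or cwRsync - an assumption I have not verified) and on the server:

    ```python
    # Minimal sketch: keep re-running rsync until the transfer completes.
    # --partial keeps a partially downloaded file instead of deleting it, so
    # each retry resumes rather than restarts. Host, user, and paths are
    # placeholders.
    import subprocess
    import time

    CMD = [
        "rsync", "--partial", "--progress",
        "-e", "ssh",
        "user@remote.example.com:/data/bigfile.bin",  # placeholder remote path
        "./bigfile.bin",
    ]

    while True:
        rc = subprocess.call(CMD)
        if rc == 0:
            print("transfer complete")
            break
        print(f"rsync exited with {rc}; retrying in 30s")
        time.sleep(30)
    ```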

    Read the article

  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, which runs on SUSE (potentially migrating to Red Hat). Our database is approximately 100 GB and performs adequately on our current x86_64 hardware, with approximately 6 GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are:

    - Are there substantial benefits to looking at Itanium or SPARC hardware? Are there any drawbacks?
    - Is there a point where one starts to scale out better?
    - What are the long-term support options for Itanium?
    - Given the dominance of x86, would it be safer long-term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium or SPARC over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first?

    If anyone can point me towards some solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer. Edit: added SPARC as an option, as it was previously not considered; with the recent Oracle-Sun acquisition, however, it seems very relevant.

    Read the article

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to published sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just 3 days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault. Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly none of these memory errors. I admit I never paid special attention to the subject. However, I have ventured multi-hour memtest86 runs a couple of times and never seen an error either. Some of the factors that IMO should aggravate the memory problems:

    - I build my computers out of the most "bulk commodity" parts: mainstream budget motherboards and the next-to-cheapest memory.
    - I usually max out the technology available; e.g., in the times of 32-bit OSes I used 4 GB of RAM, and with the current desktop CPUs and the newer 64-bit OSes I use 32 GB of RAM.
    - Memory usage is moderately heavy, with lots of virtual machines running small and big tasks 24/7/365.

    But nevertheless, no memory-related problems ever found! How's that?
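
    For a sense of scale, here is a quick sketch of what one of the quoted rates would predict for my case; the per-DIMM rate is my naive reading of the "8% of DIMMs" figure, applied unchanged to a desktop:

    ```python
    # If 8% of DIMMs see at least one error per month, a 4-DIMM desktop should
    # essentially never get through 10 years clean -- the rate is an assumption
    # carried over naively from the quoted server study.
    p_dimm_month = 0.08
    dimms = 4
    months = 10 * 12

    p_clean = (1 - p_dimm_month) ** (dimms * months)
    print(f"P(zero errors in 10 years, 4 DIMMs) = {p_clean:.1e}")
    # ~4e-18: so without ECC the likely explanation is that errors do occur
    # but are never detected or reported, not that they never happen.
    ```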

    Read the article
