Search Results

Search found 2423 results on 97 pages for 'human readable'.


  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source-code changes); it fits in memory; and it can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases and, of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don't we just change it? I don't have the administrative authority to force a change. But I'm affected, because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit, that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! In general these are not single-person failures but understandable miscommunications, and communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.
    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase all database-related effort from this section of the project, along with all of its drawbacks that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitability of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE (instead of a flatfile)? If you need...
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / many-many relationships are easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / develop & cache execution plans
    - Interleaved update/read
    - Referential integrity; avoiding redundant/missing/corrupt/out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools

    [Disadvantages:]
    - Important to get right the first time (professional design), but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, so a speed issue (and even a local db runs in a separate process, which requires marshalling / network layers / inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flatfile)

    WHY USE A FLATFILE? If you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only updating the record you're reading (fixed-length records only)
    - Data too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / storage as a document by a novice - simple format
    - Low design learning curve, but high cost later

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
    - Processing a single db/flatfile record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons: a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
    This will have a greater chance to elicit the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
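    (For the curious, a minimal sketch of the in-source approach described above: an auto-generated module holding a sorted static table searched by binary sort. The module name, keys, and values here are invented for illustration, not taken from the poster's system:)

        # static_table.py -- hypothetical auto-generated lookup module;
        # regenerated every few years along with the other source changes.
        import bisect

        # Keys must stay sorted so lookups can binary-search them.
        _KEYS = ["AK", "AL", "AR", "AZ"]
        _VALUES = [0.00, 1.07, 6.50, 5.60]

        def lookup(key):
            """Return the value for key via binary search; KeyError if absent."""
            i = bisect.bisect_left(_KEYS, key)
            if i < len(_KEYS) and _KEYS[i] == key:
                return _VALUES[i]
            raise KeyError(key)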

    Read the article

  • Being a more attractive job candidate - Certs XOR Degree

    - by Zephyr Pellerin
    I'm currently working in an IT position where I do helpdesk stuff and predominantly security-related issues/consulting (in the loosest sense of the term), in-house and for service-contract clients (as the only/acting CCSP - I guess I should say the only person with Cisco experience - in my organization). I've professionally written kernel-mode drivers for a gaming company, among other things that I'm proud to put on a resume. I think of myself as very reasonably qualified as a system administrator, with excellent Cisco experience, among other things I think would make a good addition to almost any IT staff in need of a new employee. However, something has always tripped me up: Human Resources. Let me explain. I decided to skip the university route, and I'm immensely glad that I did. The computer science graduates that I've met and worked with rarely know much of anything about computers (until they gain some 'real' experience); even when asked about theoretical computing fundamentals, they can rattle something off about Turing completeness, but rarely do they understand the mathematical underpinnings. In short, I thought that instead of going to college, I'd rather pick up some real-world experience. Apparently, though, employers rarely think the same way. A quick perusal of jobs through the standard job-search engines yields nothing short of a conspiracy to exclude anyone without 'a Bachelors Degree in Computer Science or equivalent'. Interviews I've had in the past have almost always gotten entangled with (1) my age (which I can't really change) and (2) my lack of a degree. Employers frequently disregard the CCNA/CCSP, the experience I've gained through internships, and my extensive experience in x86 assembly and C, among so many other things I like to think are valuable to employers, in light of the fact that I don't have a piece of paper. So - as an employer - is it even worth working on my CCIE? Should I pad my resume with certifications that are easier to acquire (like CISSP, MCSE, Network+, etc.)? Or should I ditch the whole idea and head back to get a Mathematics or CS degree?

    Read the article

  • Grub hangs at "Starting up ..." when USB flash card reader is plugged in (on Ubuntu Hardy)

    - by Laurence Gonsalves
    I have a PC with Ubuntu Hardy installed. The machine boots fine unless my USB flash card reader (one of those N-in-1 readers by MediaGear) is plugged in at startup. If the reader is plugged in, the boot process proceeds as normal until it gets to the screen that says "Starting up ...". At that point it just hangs forever. To work around this I currently leave the reader unplugged when booting, and then plug it back in after I see that Ubuntu is actually starting. This is annoying though, especially when I reboot the machine (typically for updates), forget to unplug the reader, and walk away only to come back hours later to find the machine hung. My guess is that the presence of the reader is confusing Grub about where to find the kernel. The weird thing is that Grub is on the same drive as the kernel I want it to boot so clearly the drive is still readable even when the flash card reader is plugged in. Is there some way I can tell Grub to never go looking on the flash card reader?
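    (Not from the original thread, but one idea worth testing for this symptom is pinning the boot entry to an explicit device and filesystem UUID rather than letting enumeration decide. A sketch of a GRUB legacy /boot/grub/menu.lst stanza, as used on Hardy - the kernel version and UUID are placeholders; and if the BIOS renumbers disks when the reader is present, the (hd0,0) mapping and /boot/grub/device.map may need the same treatment:)

        title  Ubuntu 8.04, kernel 2.6.24-19-generic (pinned root)
        # Pin GRUB's own root partition explicitly...
        root   (hd0,0)
        # ...and let the kernel find its root by UUID, not device order.
        # The UUID below is a placeholder; get yours from 'blkid'.
        kernel /boot/vmlinuz-2.6.24-19-generic root=UUID=1234abcd-0000-0000-0000-000000000000 ro quiet splash
        initrd /boot/initrd.img-2.6.24-19-generic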

    Read the article

  • How Can I Make Apache Stop Serving ALL Unknown File Types (like .php~)?

    - by user223304
    I am coming from IIS and moving to Apache, and recently found out that Apache by default serves up files of an unknown file extension as PURE TEXT. This can be an issue if a user uses certain programs that back up .php files as .php~; the .php~ file then becomes completely readable simply by navigating to it in a browser. To make matters worse, these .php~ files are often considered 'hidden' in the Linux environment, so some users may not even know they exist. Bots have been built around this fact that scour the internet looking for popular backup file names and extracting potentially secure info from them. I already know how to stop serving up .php~ files or any specific file extensions. I also know not to use any editors that would save backup files like this. My question is: how can I stop this default Apache behavior of serving up ANY non-MIME file type at all? I just don't like this behavior and would like to stop it. I don't want it serving up .aspx~, .html~, .bob, .carl, files with no extension, or anything else that is not a real MIME type. I know that I can probably use a directive to first deny access to all file types and then add the ones I want to serve out one by one (see the sketch below), but I'm wondering if there's an easier/quicker way. Thanks for any help.
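    (A sketch of that default-deny idea in Apache 2.2-style syntax - the document root and the whitelist of extensions here are assumptions to adapt, not a verified recipe:)

        <Directory /var/www>
            # Deny everything by default...
            Order deny,allow
            Deny from all
            # ...then re-allow only the extensions you actually serve.
            <FilesMatch "\.(php|html|css|js|png|jpe?g|gif)$">
                Order allow,deny
                Allow from all
            </FilesMatch>
        </Directory>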

    Read the article

  • knife on Windows inconsistently reads ~\.ssh\knife.rb on Management Workstation

    - by gWaldo
    I am implementing a new instance of (open-source v10.12) Chef in an existing environment. Currently the environment is mostly Windows, but more Linux is being introduced. I have used Chef in a previous gig, however that was a *nix-only environment. Because this is a primarily-Windows environment, my main workstation is Windows 7 (x64), and I use PowerShell as my main terminal. I created a ~\.chef directory, populated with a knife.rb and my client.pem file. When I run knife client list from ~, I get the expected results. I keep my work in Dropbox just in case my laptop should fail or be stolen. When I run knife client list from the repo directory (C:\Users\waldo\Dropbox\_company\projects\chef), I get:

        ERROR: Your private key could not be loaded from C:/home/waldo/.chef/waldog.pem
        Check your configuration file and ensure that your private key is readable

    (Note that the path is incorrect.) This is the progression as I walk up the tree towards my ~, running knife client list:

        C:\Users\waldo\Dropbox\_company\projects\ => above error
        C:\Users\waldo\Dropbox\_company\ => above error
        C:\Users\waldo\Dropbox\ => It works! (expected results)
        C:\Users\waldo\ => expected results
        C:\Users\waldo\Documents\ => expected results
        C:\Users\waldo\Documents\GitHub => expected results
        C:\Users\waldo\Documents\GitHub\aProject\ => expected results

    What. The. Eff! Now, I know that I can add -c path\to\knife.rb, but that's a HUGE PITA. The question is: why is knife inconsistently reading my ~\.chef\knife.rb, and how can I get around that without incurring carpal tunnel?
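    (Pending a real explanation, a low-effort workaround sketch: a PowerShell profile function that always passes -c for you. This assumes the Chef client's knife.bat shim is on PATH - an assumption, not verified against that install:)

        # In your PowerShell profile ($PROFILE):
        function knife {
            # Always hand the real knife.bat an explicit config path,
            # forwarding whatever arguments were given.
            & knife.bat -c "$env:USERPROFILE\.chef\knife.rb" @args
        }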

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi, we have an Ubuntu 10.04 server with a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" flag from a file not created by them? Edit: the share is defined in smb.conf as follows:

        [global]
           log file = /var/log/samba/log.%m
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           obey pam restrictions = yes
           map to guest = bad user
           encrypt passwords = true
           passwd program = /usr/bin/passwd %u
           passdb backend = tdbsam
           dns proxy = no
           netbios name = ubsrv
           server string = ubsrv
           unix password sync = yes
           os level = 20
           syslog = 0
           usershare allow guests = yes
           panic action = /usr/share/samba/panic-action %d
           max log size = 1000
           pam password change = yes
           workgroup = workgroup

        [Projects]
           valid users = @Staff
           writeable = yes
           user = @Staff
           create mode = 0777
           path = /srv/samba/Projects
           directory mode = 0777
           store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance, Matt
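    (One share-level setting worth testing here - an assumption from my reading of the smb.conf man page, not a verified fix - is dos filemode, which is meant to let users with write access to a file change its permission bits even if they don't own it:)

        [Projects]
           ; ...existing share settings from above...
           store dos attributes = Yes
           ; Assumption: lets users with write access (e.g. the Staff
           ; group) clear the read-only bit on files they do not own.
           dos filemode = Yes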

    Read the article

  • Radeon 6950 - Garbling of text and graphics in certain Windows only

    - by Greg
    This morning I noticed the text in Gmail (in Firefox 4) looked a little funny (kind of thin, maybe some color fringing). I went to work and thought it might be some ClearType issue, or something with the Direct2D path FF4 uses to draw to the screen. When I came back from work (I left the computer on), the problem was much worse - way beyond ClearType nit-picking. The text was barely readable. I opened Chrome and there was no such problem. It seems like only windows that use hardware acceleration are garbled, and ones that use GDI are not. But I fired up Dragon Age and didn't notice any problems (I only really looked at the main menu, though). Here is a link to a screen shot that illustrates the problem. Notice how the Windows Live Mesh window is completely unreadable, and the text in Firefox 4 (left) is pretty bad, while Chrome, the Windows Control Panel, and the taskbar are perfectly fine. The fact that the problem shows up in screen shots, and that it only happens in certain windows, makes me confident that the problem cannot be with the monitor or DVI cable. I am using the AMD Radeon drivers from 4/27/11. The card I have (MSI Frozr II) came with a slight overclock (810MHz) out of the box, but it looks like when I'm on the Windows desktop it's not running at full clock (CCC reports 450MHz). Still, I underclocked it to the stock reference clock (800MHz) and it made no difference. The idle temperature according to Afterburner is 42-44 Celsius, which seems a tad high but not enough to cause a problem - it's cold to the touch if I open up the machine. What the heck could be causing this? The problem varies in intensity. As we speak I'm in Firefox and things look better than they did earlier - it'll probably get worse again soon. Radeon 6950 (MSI Frozr II), Seasonic X 560, Core i5 2500K at stock clock speeds, 16GB RAM, Asus P8P67 M Pro.

    Read the article

  • Ubuntu won't boot from USB memory stick

    - by mackenir
    I used the instructions on this webpage to create a bootable USB drive for running Ubuntu 9.10. Unfortunately it doesn't work on my EeePC. Even with 'Removable Dev.' selected in the BIOS as the first boot device, the PC just boots into Windows 7. How do I troubleshoot this problem? The drive is readable and looks like this:

        Directory of E:\

        28/10/2009  21:14    <DIR>              .disk
        28/10/2009  21:14                  222  README.diskdefines
        28/10/2009  21:14                  143  autorun.inf
        28/10/2009  21:14    <DIR>              casper
        28/10/2009  21:14    <DIR>              dists
        28/10/2009  21:14    <DIR>              install
        28/10/2009  21:14    <DIR>              syslinux
        28/10/2009  21:14                4,098  md5sum.txt
        28/10/2009  21:14    <DIR>              pics
        28/10/2009  21:14    <DIR>              pool
        28/10/2009  21:14    <DIR>              preseed
        28/10/2009  21:14                    0  ubuntu
        26/10/2009  16:16            1,468,640  wubi.exe
        25/02/2010  00:28        2,147,483,648  casper-rw
                       8 Dir(s)  5,290,307,584 bytes free
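    (Not from the original thread, but a common first step when a BIOS can see a stick yet won't hand off to it: re-write the SYSLINUX boot code, MBR, and active flag. A sketch using the SYSLINUX Windows installer, assuming E: is the stick - the -m/-a flags are from memory, so check syslinux --help before running, as this touches boot sectors:)

        REM Re-writes the SYSLINUX boot sector on E:, installs an MBR (-m)
        REM and marks the partition active (-a).
        syslinux.exe -m -a E: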

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with the customer, which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because a high view count does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.
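    (Since Lockdown is already installed: one middle ground between a full per-page matrix and nothing is a dedicated namespace for the genuinely sensitive pages, locked to a group. A sketch for LocalSettings.php - the namespace IDs, names, and group are made up, and the $wgNamespacePermissionLockdown variable is from my recollection of the Lockdown extension, so double-check against its documentation:)

        # In LocalSettings.php (Lockdown already installed, per the post):
        require_once "$IP/extensions/Lockdown/Lockdown.php";

        # A dedicated namespace for genuinely sensitive pages.
        define('NS_CONFIDENTIAL', 3000);
        define('NS_CONFIDENTIAL_TALK', 3001);
        $wgExtraNamespaces[NS_CONFIDENTIAL] = 'Confidential';
        $wgExtraNamespaces[NS_CONFIDENTIAL_TALK] = 'Confidential_talk';

        # Lockdown: only the 'trusted' group may read that namespace.
        $wgNamespacePermissionLockdown[NS_CONFIDENTIAL]['read'] = array('trusted');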

    Read the article

  • UACCEEventLog 301 Filling Event Logs

    - by rjt
    After pushing out clients for the MS Application Compatibility Toolkit on our domain via GPO, UACCEEventLog 301 occurs a few times per second in the event log - several thousand per hour. One test I need to do is to log on as Administrator and see if these events go away, but of course that is not a fix. This is only part of the event log entry, but it is the most readable part, and it clearly indicates yet another problem with antivirus software. But still no fix. Originally I posted this in words and bytes, but then edited it to make it much easier to read. LocalMachine\Users do have read access to this key. As a test, I added "Domain Users", but there are many more events for other parts of the registry and for Administrators.

        <XML>
          <TYPE>UacceRegistryVirtualization</TYPE>
          <EXENAME>smcgui.exe</EXENAME>
          <EXEPATH>c:\program files\symantec\symantec endpoint protection</EXEPATH>
          <APINAME>RegOpenKeyA</APINAME>
          <REGKEYNAME>HKEY_LOCAL_MACHINE\SOFTWARE\Symantec\Symantec Endpoint Protection\AV\Storages\SymHeurProcessProtection\RealTimeScan\0</REGKEYNAME>
          <RESTRICTEDBYACL>FALSE</RESTRICTEDBYACL>
          <DESIREDACCESS>MAXIMUM_ALLOWED</DESIREDACCESS>
          <REGVALUENAME></REGVALUENAME>
          <REGVALUETYPE>0x00000000</REGVALUETYPE>
          <REGVALUEDATA></REGVALUEDATA>
          <CURRENTGROUP>Users</CURRENTGROUP>
        </XML>

    Read the article

  • Weird File Corruption

    - by Viet Norm
    My Windows 8 broke a few days ago and I had to reinstall it (see "Can't boot Windows 8"). Afterwards, I found some corrupt files on the C drive. OK, it happens, but this is really weird: the corrupt files seem to contain stuff from the Windows registry. For example, this is the beginning of one of the corrupt files:

        hbin ` PÿÿÿT h i s z o n e c o n t a i n s w e b s i t e s t h a t y o u t r u s t n o t t o d a m a g e y o u r c o m p u t e r o r y o u r f i l e s ...

    I googled and found that 'hbin' often refers to a "hive bin" of the Windows registry. Then I searched the registry for the readable part of the corrupt data and found the text in some registry value (not the text above, but something I found in another corrupt file; I'm assuming the above is also from the registry). My question is: how could this happen? Was it a virus, or did Windows somehow corrupt these files while attempting to repair itself?

    Read the article

  • Use of backreferences in fail2ban filters possible?

    - by Izzy
    From time to time, I see collections of suspect "File not found" errors in my Apache logs, basically using the pattern:

        File does not exist: /var/www/file, referer: http://my.server.com/file

    In human terms: the file was not found, though it referenced itself as the referer. A clear hacking attempt, as that's hardly possible (and the REQUEST_URIs often enough suggest the same). In my eyes a clear case for fail2ban - if I could get backreferences to work here:

        failregex = ^%(_apache_error_client)s File does not exist: /var/www(.+), referer: http://.+\1$

    (Justin Case: the above examples assume the DIRECTORY_ROOT of that webserver is /var/www.) I googled for hours and searched the fail2ban wiki up and down, but nowhere could I find a statement concerning backreferences in its filters. Are they not supported, or did I do it the wrong way? Any hints on how to make it work (apart from "dirty hacks" like first sending the request to another fake URL using mod_rewrite and then catching on that - if anyone is interested, I can elaborate on that approach in an answer - or doing something similar using mod_security)? As an entire log line was requested:

        [Fri Nov 08 14:57:28 2013] [error] [client 50.67.234.213] File does not exist: /var/www/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;, referer: http://www.myserver.com/text/files.htm++++++++++++++++++++++++++Result:+using+proxy+27.34.142.47:9090;+no+post+sending+forms+are+found;

    (Sorry, logs were just switched, so this long candidate was the only one left currently; minor adjustments were made for privacy reasons.)
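    (For what it's worth: fail2ban failregexes are compiled as Python regular expressions, so backreferences are available in principle - but numbered groups are fragile here, because the %(_apache_error_client)s prefix and the <HOST> tag it contains contribute their own capture groups and shift the numbering. A named group sidesteps that. A sketch under that assumption, not a tested filter:)

        # Hypothetical filter.d snippet: capture the path once, then require
        # the referer to end with the same path via a named backreference.
        failregex = ^%(_apache_error_client)s File does not exist: /var/www(?P<path>.+), referer: http://[^/]+(?P=path)$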

    Read the article

  • Uneditable file and unreadable (for further processing) file (WHY?) after processing it through C++

    - by mgj
    Hi... :) This might look like a very long question, I understand, but trust me on this: it's not. I am not able to identify why, after processing, this text cannot be read or edited. I tried using the ord() function in Python to check whether the text contains any Unicode (non-ASCII) characters apart from the ASCII ones, and I found quite a number of them. I have a strong feeling that this could be due to the original text itself (the INPUT). Input file: just copy-paste it into a file "acle5v1.txt". The objective of the code below is to check for upper-case characters, convert them to lower case, and remove all punctuation, so that the words can be taken for further processing (word alignment):

        #include <iostream>
        #include <fstream>
        #include <ctype.h>
        #include <cstring>
        using namespace std;

        ifstream fin2("acle5v1.txt");
        ofstream fin3("acle5v1_op.txt");
        ofstream fin4("chkcharadded.txt");
        ofstream fin5("chkcharntadded.txt");
        ofstream fin6("chkprintchar.txt");
        ofstream fin7("chknonasci.txt");
        ofstream fin8("nonprinchar.txt");

        int main()
        {
            char ch, ch1;
            fin2.seekg(0);
            fin3.seekp(0);
            int flag = 0;
            while (!fin2.eof())
            {
                ch1 = ch;
                fin2.get(ch);
                if (isprint(ch)) // if the character is printable
                    flag = 1;
                if (flag)
                {
                    fin6 << "Printable character:\t" << ch << "\t" << (int)ch << endl;
                    flag = 0;
                }
                else
                {
                    fin8 << "Non printable character caught:\t" << ch << "\t" << int(ch) << endl;
                }
                if (isalnum(ch) || ch == '@' || ch == ' ') // checks for alphanumeric characters
                {
                    fin4 << "char added: " << ch << "\tits ascii value: " << int(ch) << endl;
                    if (isupper(ch))
                    {
                        fin3 << (char)tolower(ch);
                    }
                    else
                    {
                        fin3 << ch;
                    }
                }
                else if ((ch == '\t' || ch == '.' || ch == ',' || ch == '#' || ch == '?' || ch == '!' || ch == '"' || ch != ';' || ch != ':') && ch1 != ' ')
                {
                    fin3 << ' ';
                }
                else if ((ch == '\t' || ch == '.' || ch == ',' || ch == '#' || ch == '?' || ch == '!' || ch == '"' || ch != ';' || ch != ':') && ch1 == ' ')
                {
                    // fin3 << ' ';
                }
                else if (!(int(ch) >= 0 && int(ch) <= 127))
                {
                    fin5 << "Char of ascii within range not added: " << ch << "\tits ascii value: " << int(ch) << endl;
                }
                else
                {
                    fin7 << "Non ascii character caught (could be a -ve value also)\t" << ch << int(ch) << endl;
                }
            }
            return 0;
        }

    I have similar code written in Python, which again gives me output that is not readable and not editable:

        #!/usr/bin/python
        # -*- coding: UTF-8 -*-
        import sys

        input_file = sys.argv[1]
        output_file = sys.argv[2]
        list1 = []
        f = open(input_file)
        for line in f:
            line = line.strip()
            # line = line.rstrip('.')
            line = line.replace('.', '')
            line = line.replace(',', '')
            line = line.replace('#', '')
            line = line.replace('?', '')
            line = line.replace('!', '')
            line = line.replace('"', '')
            line = line.replace('?', '')
            line = line.replace('|', '')
            line = line.lower()
            list1.append(line)
        f.close()
        f1 = open(output_file, 'w')
        f1.write(' '.join(list1))
        f1.close()

    The script takes its input and output files at runtime: python punc_remover.py acle5v1.txt acle5v1_op.txt. The output of this script is in "acle5v1_op.txt", which is needed for further processing, and it is the UNREADABLE and UNEDITABLE file that I cannot use. I need it for word alignment in NLP.
    I tried reading this output with the following program:

        #include <iostream>
        #include <fstream>
        using namespace std;

        ifstream fin1("acle5v1_op.txt");
        ofstream fout1("chckread_acle5v1_op.txt");
        ofstream fout2("chcknotread_acle5v1_op.txt");

        int main()
        {
            char ch;
            int flag = 0;
            long int r = 0;
            long int nr = 0;
            while (!(fin1))
            {
                fin1.get(ch);
                if (ch)
                {
                    flag = 1;
                }
                if (flag)
                {
                    fout1 << ch;
                    flag = 0;
                    r++;
                }
                else
                {
                    fout2 << "Char not been able to be read from source file\n";
                    nr++;
                }
            }
            cout << "Number of characters able to be read: " << r;
            cout << endl << "Number of characters not able to be read: " << nr;
            return 0;
        }

    This prints each character if it is readable, and if not, it doesn't print it. I observed that the output of both files is blank, so I drew the conclusion that "acle5v1_op.txt" is UNREADABLE AND UNEDITABLE. Could you please help me with how to deal with this problem? Some statistics on the original input file "acle5v1.txt": it has around 3,441 lines and around 3 million characters. Keeping the number of characters in mind, your editor might or might not manage to open the file; I was able to open it in gedit on Fedora 10, which I am currently using. This is just to note that opening it with a particular editor was not actually an issue, at least in my case. Can I use scripting languages like Python and Perl to deal with this problem? If yes, how? Please be specific in that regard, as I am a novice with Perl and Python. Or could you please tell me how to solve this problem using C++ itself? Thank you... :) I am really looking forward to some help or guidance on how to go about this problem.
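    (Since the post asks whether Python can deal with this: a minimal sketch (Python 3) that strips everything except printable ASCII from the processed file before further use. The filenames follow the post; the choice of bytes to keep is an assumption:)

        # clean_ascii.py -- keep only printable ASCII plus tab and newline.
        # Usage: python clean_ascii.py acle5v1_op.txt acle5v1_clean.txt
        import sys

        # Bytes we keep: printable ASCII (0x20-0x7E) plus tab and newline.
        KEEP = set(range(0x20, 0x7F)) | {0x09, 0x0A}

        with open(sys.argv[1], 'rb') as src:
            data = src.read()

        cleaned = bytes(b for b in data if b in KEEP)

        with open(sys.argv[2], 'wb') as dst:
            dst.write(cleaned)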

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with sensible classification (by similarity, need, read/write access, etc), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc.. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
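    (As a toy illustration of the proposal, a sketch of the browser-side half: a script that surfaces the .filing description for a directory. The filename follows the proposal above; the behavior is otherwise invented:)

        # show_filing.py -- print the .filing description for a directory,
        # the way the proposed file-browser pane would display it.
        import os
        import sys

        d = sys.argv[1] if len(sys.argv) > 1 else '.'
        path = os.path.join(d, '.filing')
        if os.path.exists(path):
            with open(path) as f:
                print(f.read())
        else:
            print('(no .filing documentation for %s)' % os.path.abspath(d))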

    Read the article

  • is ksplice production ready?

    - by faultyserver
    I would be interested to hear the serverfault community's experiences with Ksplice in production. A quick blurb from Wikipedia:

        Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system.

    and:

        Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch.

    So a few questions: How has the stability been? Any odd issues that you have encountered with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems, and so far it has been working as advertised, but I am interested in what other sysadmins' experiences have been with Ksplice before going 'all in' and deploying it on our production servers. So, is anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems that have long (and secure) uptimes?" ;-)

    Read the article

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet with a complex computation that is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and then outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula on a different spreadsheet: calculate a set of input values and produce the corresponding set of output values. Imagine this as creating a table with 10 columns for the input variables and 5 columns for the outputs, then copying each input into the other spreadsheet and copying back the output into the results table. For instance: A1, A2, A3, ... A10 are cells where someone enters values; through a series of calculations B1, B2, B3, B4 and B5 are updated with some formulas. Can I run the whole series of calculations from A1..A10 into B1..B5 without creating one massive huge formula or a VBA macro? I want to have a set of input values in 51 rows from A100, B100, C100, ... J100 onward. Then do some Excel magic that will:

    1. copy the values from A100...J100 into A1 to A10
    2. wait for the result to appear in B1 to B5
    3. copy the values from B1 to B5 into K100 to O100
    4. repeat steps 1 to 3 for all rows from 100 to 150
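    (The post hopes to avoid a macro, but in case that route turns out to be acceptable after all, here is a sketch of steps 1-4 in VBA. The cell addresses come from the post; the sheet handling and everything else are assumptions - note Application.Calculate stands in for the "wait" in step 2, since recalculation is synchronous:)

        Sub RunScenarios()
            Dim r As Long
            Dim c As Long
            Dim ws As Worksheet
            Set ws = ActiveSheet            ' assumes inputs/outputs live here
            For r = 100 To 150              ' step 4: all scenario rows
                ' Step 1: copy A{r}..J{r} into the input cells A1..A10.
                For c = 1 To 10
                    ws.Cells(c, 1).Value = ws.Cells(r, c).Value
                Next c
                ' Step 2: recalculate so B1..B5 reflect the new inputs.
                Application.Calculate
                ' Step 3: copy the outputs B1..B5 into K{r}..O{r}.
                For c = 1 To 5
                    ws.Cells(r, 10 + c).Value = ws.Cells(c, 2).Value
                Next c
            Next r
        End Sub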

    Read the article

  • Timeout option not working on EFI Windows 7/Windows 8 dual boot machine

    - by Guenter
    I have a Gigabyte GA-Z77M-D3H mobo and installed Windows 8 Pro and Windows 7 Ultimate on two SSDs (in that order) in EFI mode. Now when I start my computer, I get the Windows boot menu (text mode) with the two OSes to choose from, but I have to manually press RETURN to have the computer boot into the chosen OS. Even if I wait an hour, no default action takes place. Using bcdedit (from either of the OSes) I can successfully change the timeout value, and it shows up in the bcdedit (no params) output. But it doesn't fire... Here is my current bcdedit output (headers are in German, but the values should be readable):

        Windows-Start-Manager
        ---------------------
        Bezeichner              {bootmgr}
        device                  partition=O:
        path                    \EFI\Microsoft\Boot\bootmgfw.efi
        description             Windows Boot Manager
        locale                  de-DE
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {default}
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        displayorder            {default}
                                {current}
                                {5ad2802a-c60a-11e2-acdb-80331c501b11}
                                {5ad28028-c60a-11e2-acdb-80331c501b11}
                                {5ad28029-c60a-11e2-acdb-80331c501b11}
        toolsdisplayorder       {memdiag}
        timeout                 5
        displaybootmenu         Yes

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {default}
        device                  partition=W:
        path                    \Windows\system32\winload.efi
        description             Windows 7
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad2802e-c60a-11e2-acdb-80331c501b11}
        recoveryenabled         Yes
        osdevice                partition=W:
        systemroot              \Windows
        resumeobject            {5ad2802c-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn

        Windows-Startladeprogramm
        -------------------------
        Bezeichner              {current}
        device                  partition=C:
        path                    \Windows\system32\winload.efi
        description             Windows 8
        locale                  de-DE
        inherit                 {bootloadersettings}
        recoverysequence        {5ad28033-c60a-11e2-acdb-80331c501b11}
        integrityservices       Enable
        recoveryenabled         Yes
        isolatedcontext         Yes
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {5ad28031-c60a-11e2-acdb-80331c501b11}
        nx                      OptIn
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto

    (This output is from Win8; the Win7 one looks nearly identical.) If the problem comes from a bad EFI Windows boot manager installation, can it be fixed without losing my Windows installations?

    Read the article

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site: demo/ and live/. Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g.

        ErrorDocument 404 /statuses/demo-404.html
        ErrorDocument 403 /statuses/demo-403.html

    and the live ones need to go to similarly named files:

        ErrorDocument 404 /statuses/live-404.html
        ErrorDocument 403 /statuses/live-403.html

    So far I have tried placing an .htaccess file in both directories, with the ErrorDocument directives pointing to the specific files. The 404 works fine and references the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory; the logs show the following:

        [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
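    (A hedged thought on those final questions: since Apache cannot read demo/xxx/.htaccess - the (13)Permission denied line - it 403s before the per-directory configuration is fully applied. Moving the directives up into the vhost config, where they don't depend on reading an inaccessible file, is one way around that. A sketch, with paths taken from the log line above:)

        # In the vhost config rather than .htaccess, so the error pages
        # apply even when a per-directory .htaccess is unreadable.
        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>
        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>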

    Read the article

  • Monitor mode 802.11 captures on OSX

    - by Mike A
    I'm trying to determine the difference between capturing 802.11 frames in the following ways on OSX (10.8.5). It's a bit esoteric, but I use "Option 2" to capture frames for later analysis, and am wondering if I'm missing something.

    Option 1: use "airportd":

        sudo /usr/libexec/airportd en0 sniff

    Option 2: use "airport" followed by tcpdump:

        sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport --channel=
        sudo tcpdump -I -P -i en0 -w /tmp/capture.pcap

    (or alternatively omit the -w and watch packets in real time). From what I can tell: both commands, according to the wifi icon on OSX, put the interface into 'monitor' mode; both output a pcap file that is readable in wireshark/tcpdump and Eye PA; and both appear to capture management, control and data frames. The rub: Option 1 disconnects you from the network. This is expected when putting an interface into 'monitor' mode. Option 2 does NOT disconnect you, provided you've set the channel to the same channel you're currently connected to. This has the distinct advantage of keeping your connection up while capturing in monitor mode. My question: Option 2 does not seem like it should work - or, more specifically, it does not seem like I should be able to remain connected while also capturing frames in monitor mode. On a wired NIC, you can be 'promiscuous' and still send frames, though I didn't think the same was true for a wireless NIC. So I'm questioning the validity of capturing frames with Option 2.

    Read the article

  • tomcat 'document base does not exist' error (but it does)

    - by SpliFF
    Gentoo / Tomcat 6:

        INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
        Sep 8, 2009 10:34:51 AM org.apache.catalina.core.StandardContext resourcesStart
        SEVERE: Error starting static Resources
        java.lang.IllegalArgumentException: Document base /www/rivervalley/site does not exist or is not a readable directory
            at org.apache.naming.resources.FileDirContext.setDocBase(Unknown Source)
            at org.apache.catalina.core.StandardContext.resourcesStart(Unknown Source)
            at org.apache.catalina.core.StandardContext.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)
            at org.apache.catalina.core.StandardHost.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)

    Oh really? Then how come:

        ls -la /www/rivervalley/site/
        drwxr-xr-x 12 tomcat tomcat 4096 Sep  8 09:56 .
        drwxr-xr-x 16 tomcat tomcat 4096 Jun 29 16:22 ..
        -rwxr--r--  1 tomcat tomcat  520 Jul  3 02:15 Application.cfm
        drwxr-xr-x  2 tomcat tomcat 4096 Sep  8 09:56 WEB-INF

    and...

        tomcat 18916 1.0 5.5 1159188 167892 ? Ssl 10:37 0:11 /opt/sun-jdk-1.5.0.18/bin/java -Djava.util.loggin

    Hell, ANY account can read that directory, so the claim is utter nonsense. What else can cause this? Here's my relevant server.xml section:

        <Host name="rivervalley" appBase="webapps" unpackWARs="false" autoDeploy="false" xmlValidation="false" xmlNamespaceAware="false">
            <Context path="" docBase="/www/rivervalley/site" />
        </Host>

    Read the article

  • How do you make Google's interface always be in your chosen language?

    - by Michael Wolf
    Google's interface and search results don't always appear in my preferred language, English. I'm located in Mexico City and, although I generally have no problem with Spanish, I would prefer search results in English most of the time. (The exception is when I'm using search terms in Spanish.) I'd also prefer the interface to be in English, but that's far less important to me than search results. Google looks at your IP to decide where you're coming from and thus what language to present results in. So, when I type www.google.com into the URL bar, it redirects me to www.google.com.mx. Is there a way to force Google to use one language all the time? Here are some things I've done and tried: 0) I have a Google account, and I've configured it such that it should know that English is my preferred language. I don't often explicitly log out of Google, so generally Google knows I'm me and my preferences when I access its services. 1) I've configured my browser to ask for pages in English. Very few sites support this feature at all; Google isn't one of them. 2) From www.google.com.mx, I can click on "Google.com in English". This works until, I think, I close the browser. 2a) From www.google.com.mx, I can go into account configuration, which is English. From then on, everything's in English. 3) I can append &hl=en (Human Language = English) to the end of the URLs of result pages. 2, 2a, and 3 all "work", but they're all mildly annoying. I'd rather avoid them if I could. (At the risk of stating the obvious, English and Spanish are the languages I'm dealing with, but I imagine that, say, a francophone using Google from Korea would run into basically the same issue.)

    Read the article

  • How far should we take the N+N redundancy craziness ?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power and redundant network connectivity, etc.... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear nuke? I can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

        Cost of reputational damage if down: very high
        Probability of a hardware failure with my setup: <<1%
        Probability of a hardware failure with a less paranoid setup: <<1% as well
        Probability of a software failure in our application code: 1%

    (If your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down.) In other words, I feel like I could host a cheap laptop in my mother's flat and the human/software problems would still be my bigger risk. Of course, there are other things to take into consideration, such as:

    - scalability
    - data security
    - the clients' expectation that you meet the industry standard

    But still, hosting two servers in two different data centers (without extra spare servers, or doubled network equipment beyond what my hosting facility provides) would give me the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?

    Read the article
