Search Results

Search found 318 results on 13 pages for 'obscure'.

Page 8/13 | < Previous Page | 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native-English speaker asked me why our site doesn't produce hits in the Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META keywords info, and has not done so for a very long time. D'oh! To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation of "webmaster" into Russian --thanks, Google Translate-- is "вебмастер". (Not sure this will render properly, but that's not essential to my question.) Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"? Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) in the site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness: crawlers must be long immune to invisible-to-visitors schemes, e.g., formatting "вебмастер" in a tiny text size in a color that matches an existing background (white-over-white), or putting the text inside a div styled ' style="visibility: hidden" '. Probably some other equivalents. I can only think of one slightly effective method along these lines: place an unobtrusive link on the common template to a page titled "for international users", and on that page list the desired synonyms for "webmaster" in various languages. A test case --admittedly, just one-- using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway. Note: all the SEO discussions I have found so far are about competitive and --almost certainly-- commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native-English speakers. Ideas? TIA
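    One option worth noting, not in the original post (URLs below are placeholders, not StackExchange's): publish a translated landing page and declare the language variants with hreflang alternate links, which is how Google is told which language version of a page to surface:

        <!-- a minimal sketch; each language version lists all of its alternates -->
        <link rel="alternate" hreflang="en" href="https://example.com/" />
        <link rel="alternate" hreflang="ru" href="https://example.com/ru/" />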

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes. After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left for another blogger. If inheritable permissions are not allowed on the Exchange 2003 server object in Active Directory, Exchange server authentication cannot be achieved between the servers. It seems that when Blackberry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is that they also coincidentally turn off inheritance to the server object itself from its parent containers. You can, however, re-establish inheritance without overwriting the existing ACL, so the BES Admin entry can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers. To re-establish inheritance:
    1. Open ADSIEdit by adding the snap-in to an MMC (it should be included on the 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on CN=Server Name of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialog box.
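    For reference, the same flag can be flipped from the command line with dsacls; a minimal sketch (the DN is an example; substitute your organization and server names):

        rem /P:N clears "protect from inheritance" without touching the explicit
        rem ACEs, so the BES Admin entry survives
        dsacls "CN=SERVERNAME,CN=Servers,CN=First Administrative Group,CN=Administrative Groups,CN=OrgName,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=com" /P:N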

    Read the article

  • Writing or extending existing emacs packages: is it worth or should I move to Netbeans/Eclipse?

    - by Andrea
    I'm finishing my master's degree in CS and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most or create new ones, but, in practice, I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly featured or poorly maintained packages with either overlapping functionality or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well; there's Cusp for Lisp in Eclipse and LambdaBeans built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts; for example, Hibachi was archived last year. What's your opinion on this? Which editor should I write extensions for? EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient both for me and for the computer I'm using. It's fast, and the textual interface (i.e. the minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage. If I need to correct or remove something, I usually just have to change a row in my .emacs or an Elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that screwed up my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages. If I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have; in other words, that it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table updates on file writes (see the sketch below). There are a few projects on this that lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule applies. EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up Elisp programming, but the extensions I have in mind don't exist yet despite their relative simplicity and the effort more knowledgeable people have poured into related projects. This makes me wonder whether that is because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or because of its implementation, its extension language not being able to keep up with the growing complexity.
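    For what it's worth, the tag-table automation mentioned above is compact in Elisp; a minimal sketch (the project root and file pattern are assumptions, not an existing package):

        ;; regenerate TAGS after every save, assuming a fixed project root
        (defvar my-project-root "~/src/myproject")
        (defun my-update-tags ()
          (when (buffer-file-name)
            (start-process-shell-command
             "etags" nil
             (format "cd %s && find . -name '*.el' | xargs etags"
                     (expand-file-name my-project-root)))))
        (add-hook 'after-save-hook #'my-update-tags)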

    Read the article

  • Changing Your Design for Testability

    Sometimes I come across a way of putting something that is pithy and good, not Hallmark trite, but an impactful and concise way of clarifying a previously obscure concept. A recent one of these happy occurrences was when I was reading the excellent Art of Unit Testing by Roy Osherove. After going through the basics of why you'd want to test code and how to do it, Roy confronts a frequent objection to having unit tests, that it ends up changing how you design your components: "When we write unit tests for our code, we are adding another end user (the test) to the object model. That end user is just as important as the original one, but it has different goals when using the model. The test has specific requirements from the object model that seem to defy the basic logic behind a couple of object-oriented principles, mainly encapsulation." [emphasis added by me] When I read this, something clicked for me. I used to find it persuasive that because unit tests caused you to change your design, they were more disruptive than they were worth. The counterargument I heard is that the disruption was OK, because testable design was just obviously better. That argument was not convincing, as it seemed like delusional arrogance to suggest that any one type of design was just inherently better for the particular applications I was building. What was missing was that I was not thinking of unit tests as an additional and equal end user of my design. If I accepted that proposition, then it was indeed obvious that a testable design was better, because now all users of my component would be satisfied. Have I accepted that proposition? I'd phrase it slightly differently. I find more and more that having unit tests helps me write better, less buggy code before it gets to production or QA. As I write more unit tests, it gets easier to see how to create testable components, so I don't feel like it's taking me as much extra time up front. I pick and choose the components that seem most likely to benefit from automated tests, and it is working out nicely. If you already practice Test Driven Development, this whole post was probably a waste of your time <g> If you hate the idea of unit tests, well, probably not a great value proposition for you either. However, if you are somewhere in between, at least take a minute and check out a sample chapter from Roy's book at: http://www.manning.com/osherove/.
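    To make the "second end user" concrete, here is a minimal C# sketch (the names are illustrative, not Roy's): the test's requirement pushes the file-system dependency out from behind encapsulation and into an interface the test can substitute.

        public interface IFileReader { string Read(string path); }

        public class LogAnalyzer {
            private readonly IFileReader _reader;
            public LogAnalyzer(IFileReader reader) { _reader = reader; } // seam added for the test
            public bool ContainsError(string path) {
                return _reader.Read(path).Contains("ERROR");
            }
        }

        // the second end user: a test hands in a fake instead of the real file system
        public class FakeReader : IFileReader {
            public string Read(string path) { return "ERROR: simulated"; }
        }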

    Read the article

  • Is a factory pattern that prevents multiple instances of the same object (instances that are Equal) good design?

    - by dsollen
    I have a number of objects storing state. There are essentially two types of fields: the ones that uniquely define what the object is (what node, what edge, etc.), and the others that store state describing how these things are connected (this node is connected to these edges, this edge is part of these paths, etc.). My model updates the state variables using package methods, so all these objects act as immutable to anyone not in Model scope. All objects extend one base type. I've toyed with the idea of a Factory approach which accepts a Builder object and constructs the applicable object. However, if an instance of the object already exists (i.e. it would return true if I created the object defined by the builder and passed it to the equals method of the existing instance), the factory returns the current object instead of creating a new instance. Because the equals method would only compare what uniquely defines the type of object (this is node A, not node B) but won't check the dynamic state stuff (node A is currently connected to nodes C and E), this would be a way of ensuring that anyone who wants my node A automatically knows its state connections. More importantly, it would prevent the aliasing nightmare of someone trying to pass an instance of node A with different state than the node A in my model has. I've never heard of this pattern before, and it's a bit odd. I would have to do some overriding of serialization methods to make it work (ensure that when I read in a serialized object I add it to my factory's list of known instances, and/or return an existing instance in its place), as well as using a WeakHashMap as if it were a weak hash set, to know whether an instance exists without worrying about a quasi-memory leak occurring. I don't know if this is too confusing or prone to its own obscure bugs. One thing I know is that the plugins interface with the lowest-level hardware. The plugins have to be able to return state that is different from what my model holds, to tell my model when its own state is inconsistent. I believe this is possible despite their fetching objects that exist in my memory; we allow building of objects without checking their consistency with the model until addToModel is called anyway, and the existing plugin design was written before all this extra state existed and worked fine without ever being aware of it. Should I just be using some other design to avoid this craziness? (I have another question to that effect that I'm posting.)
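    What is described here is essentially an interning (canonicalizing) factory. A minimal Java sketch of the WeakHashMap-as-weak-set idea, with Node's identity fields reduced to a single id for illustration:

        import java.lang.ref.WeakReference;
        import java.util.Map;
        import java.util.WeakHashMap;

        final class Node {
            private final String id;                  // identity field: drives equals/hashCode
            Node(String id) { this.id = id; }
            @Override public boolean equals(Object o) {
                return o instanceof Node && id.equals(((Node) o).id);
            }
            @Override public int hashCode() { return id.hashCode(); }
        }

        final class NodeFactory {
            // keys are weakly held; the value is a weak ref back to the same object,
            // so the entry disappears once nothing else references the instance
            private final Map<Node, WeakReference<Node>> known = new WeakHashMap<>();

            synchronized Node get(String id) {
                Node candidate = new Node(id);
                WeakReference<Node> ref = known.get(candidate);
                Node existing = (ref == null) ? null : ref.get();
                if (existing != null) return existing;   // canonical instance wins
                known.put(candidate, new WeakReference<>(candidate));
                return candidate;
            }
        }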

    Read the article

  • Windows 2003 print services for unix causing CUPS "lpd_command returning 1"

    - by Stephen P. Schaefer
    We have several Windows 2003 servers with Print Services for Unix on them, which allow Linux machines running CUPS to use printers defined to CUPS with the URI lpd://printer_server/printer_queue_name - they work. An attempt to provide different printers on a different Windows 2003 server with Print Services for Unix newly enabled causes CUPS to behave like this: a newly defined printer will be in state "Idle". An attempt to print causes CUPS to change the printer state to "Disabled". In /var/log/cups/error_log, the relevant messages appear to be:
        D [01/Dec/2012:06:14:18 -0800] [Job 16] lpd_command 02 hp775cm_ps
        D [01/Dec/2012:06:14:18 -0800] [Job 16] Sending command string (16 bytes)...
        D [01/Dec/2012:06:14:18 -0800] [Job 16] Reading command status...
        D [01/Dec/2012:06:14:18 -0800] [Job 16] lpd_command returning 1
        E [01/Dec/2012:06:14:18 -0800] PID 18786 stopped with status 1!
    Since my Linux boxes can print to other printers via other Windows 2003 print spoolers, I'm wondering what obscure Windows component could be causing this. I don't think it is Windows Firewall, since nmap sees the lpd port (515) open on the server. telnet to the server at port 515 declares:
        Connected to server.internal.example.com (10.22.33.44).
        Escape character is '^]'.
        Connection closed by foreign host.
    Windows clients successfully print to the CIFS/SMB share of the hp755cm_ps printer. What other reasons are there for Windows to refuse an lpd request?
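    For context, the working queues were defined on the CUPS side roughly like this (printer, queue and server names are examples):

        # bind a CUPS queue to the queue shared by the Windows LPD service
        lpadmin -p hp775cm_ps -E -v lpd://print_server/hp775cm_ps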

    Read the article

  • How to use cyg-wrapper to fork a new tab in win32 gvim

    - by Peter Nore
    I would like to set up an alias in my Cygwin .bashrc that translates pathnames from Unix to DOS form and passes them to Windows gvim in a new tab of an existing instance. I am trying to use Luc Hermitte's cyg-wrapper script for running native win32 applications from Cygwin, as per this vim tip. Luc's example of how to use his script is: alias vi='cyg-wrapper.sh "C:/Progra~1/Edition/vim/vim63/gvim.exe" --binary-opt=-c,--cmd,-T,-t,--servername,--remote-send,--remote-expr' I do not understand this example, because most of these vim parameters (-c, --cmd, --servername, --remote-send, --remote-expr, etc.) require more information, and I have not found any example of how to supply the additional information to cyg-wrapper.sh. For example, calling C:/Progra~1/Edition/vim/vim63/gvim.exe --servername=GVIM --remote-tab-silent file1 & will open file1 in a new tab of an existing (or non-existing) instance GVIM, but calling gvim --servername accomplishes nothing on its own. Unfortunately, though, the corresponding cyg-wrapper phrase does not work: cyg-wrapper.sh "C:/Progra~1/Edition/vim/vim63/gvim.exe" --binary-opt=--servername=GVIM,--remote-tab-silent --fork=2 file1 If run twice, this actually opens up two instances of gvim; it is as if the servername 'GVIM' is being stripped and ignored. How do you supply a servername to gvim --servername, or a .vimrc to gvim -u, using cyg-wrapper.sh? Furthermore, why is it that programs must be passed to cyg-wrapper.sh in the relatively obscure "mixed" form? For example, if I try cyg-wrapper.sh "/cygdrive/c/path/to/GVimPortable.exe" --binary-opt=--servername=GVIM,--remote-tab-silent --fork=2 I get "Invalid switch - "/cygdrive"." See also: getting-gvim-to-automatically-translate-a-cygwin-path alias-to-open-gvim-cream-version-from-cygwin-shell
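    As a point of comparison, the path translation cyg-wrapper automates can be done by hand with cygpath, which at least isolates whether it is the wrapper or gvim that drops the servername (a sketch using the paths from the example above):

        # translate the path to Windows form and talk to the GVIM server directly
        f=$(cygpath -wa file1)
        "C:/Progra~1/Edition/vim/vim63/gvim.exe" --servername GVIM \
            --remote-tab-silent "$f" &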

    Read the article

  • Redhat 5.5: Multi-thread process only uses 1 CPU of the available 8

    - by Tonny
    Weird situation: Red Hat Enterprise 5.5 (stock install, no updates, x64) on an HP Z800 workstation. (Dual Xeon 2.2 GHz: 8 cores, 16 if you count Hyper-Threading. RH sees 16 cores.) We have an application that can utilize 1, 2 or 4 threads for heavy calculations. Somehow all these threads run on the same core at 100% load (the other 15 cores are nearly idle), so there is absolutely no benefit from the extra threads. In fact there is a slight slowdown, as the threads get in each other's way on the single core. How do I get them to run on separate cores (if possible)? The application is 64-bit. I can't change anything about the software except the threads setting. Is there some obscure Linux setting I can try to change? (I'm a Tru64 and AIX guy. I use Linux, but have no in-depth knowledge of process scheduling on Linux.)
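    Before digging into scheduler settings, it is worth ruling out a CPU affinity mask inherited from whatever launched the application; a minimal check with taskset (replace <pid> with the process ID of the running application):

        taskset -pc <pid>          # show the CPU list the process is allowed to use
        taskset -pc 0-15 <pid>     # widen it to all 16 logical CPUs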

    Read the article

  • Is there any way to customize the Windows 7 taskbar auto-hide behavior? Delay activation? Timer?

    - by calbar
    I'm becoming increasingly frustrated with the way Windows 7 handles showing a hidden taskbar. It's incredibly over-eager to pop out and obscure what I'm really trying to interact with, requiring me to move the mouse away, wait for it to auto-hide again, then resume what I was doing, but more deliberately. After closely examining the behavior, it appears that a hidden taskbar "peeks out" from the edge by 2 or 3 pixels, and slowly moving your mouse into this area activates it; you don't even need to touch the edge of the screen. I would love it if there were a way to customize or change this behavior. Ideally, the taskbar would only pop out if you are actively "pushing" the edge of the screen it is hidden on, so activation only occurs once you've reached the screen's edge and continue to move the mouse past a customizable threshold. Alternatively, a simple activation delay would suffice as well: the taskbar pops out only if the mouse remains in that 2-3 pixel area (i.e. on the taskbar) for longer than a customizable amount of time, which would be just a fraction of a second. Often the cursor simply "careens" off the edge of the screen while trying to focus on something nearby. Anyway, if there are any registry settings or utilities that can achieve either of these effects, that would be great! Thanks for your help.

    Read the article

  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development, and will not be supported in the future. So we are looking at new document management systems. Requirements:
    - Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    - Ability to redact or obscure data. Customers may fax an order with CC data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with Tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
    - AD integration: my users don't need another password.
    - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that will spawn the viewer application and open the correct document.
    - Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
    - Retention policy. I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.
    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This pretty much rules out booting from a recovery USB drive, unless all it does is blindly reimage the hard drive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I've read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, but it flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck, but I'm afraid I'll mess it up in some unexpected way. Thanks! Note: Originally posted here.
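    One safe workaround on CentOS 6, rather than forcing e2fsck onto a mounted filesystem: flag a forced check for the next boot, which the init scripts honor before the root filesystem is remounted read-write (a sketch; it assumes the recovery tool is allowed to reboot the box):

        touch /forcefsck    # rc.sysinit runs fsck at the next boot when this file exists
        reboot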

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and subdirectories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: if folder "A" is seven (or more) days old, it will be deleted even if its subfolders and files are less than seven days old. What I would like this script to do is this: delete all files from the root and in all subfolders of "$dump_path" that are seven or more days old, but maintain the parent folder(s) of files and folders that are less than seven days old, even if that means the parent folders are more than seven days old. If all of a folder's subfolders and files are seven days or older, then the folder itself can be deleted. Slightly obscure problem, I know, but the intention is to have a 7-day retention period for all data in a 'sandbox' location of our shared areas. Also, as an added bonus, it would be great if it could generate a log of what it deletes and e-mail it out post-deletion. Thank you for reading and I hope that all makes sense! Mark
        # set folder path
        $dump_path = "c:\temp"
        # set minimum age of files and folders
        $max_days = "-7"
        # get the current date
        $curr_date = Get-Date
        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)
        # delete the files and folders
        Get-ChildItem $dump_path |
            Where-Object { $_.CreationTime -lt $del_date } |
            Remove-Item -Recurse
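    One way to get the requested behavior is two passes: delete only the old files first, then prune directories that are left completely empty, deepest paths first. An untested sketch reusing the variables above (PowerShell v2 compatible; no logging or e-mail yet):

        # pass 1: remove files (not folders) older than the cutoff
        Get-ChildItem $dump_path -Recurse |
            Where-Object { -not $_.PSIsContainer -and $_.CreationTime -lt $del_date } |
            Remove-Item -Force

        # pass 2: remove directories that are now empty, deepest first, so a
        # parent emptied by pass 1 (or earlier in this pass) is also removed
        Get-ChildItem $dump_path -Recurse |
            Where-Object { $_.PSIsContainer } |
            Sort-Object { $_.FullName.Length } -Descending |
            Where-Object { -not (Get-ChildItem $_.FullName -Recurse -Force) } |
            Remove-Item -Force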

    Read the article

  • (Mac Intel) HP PS driver prints in B&W from Adobe Reader after installing Canon PS driver

    - by JohnB
    I have a unique problem that leaves me at a loss as to where to start troubleshooting. We have three Macs we use for graphics, two of which are PowerPC and one of which is Intel. They are set up to print to an HP 5500dn, but occasionally this printer gets tied up with a massive print job, so I installed the PS driver (iR-PSv1.81MacOSX) for the Canon C3200 printer/copier on each of the machines. Both of the PowerPC Macs installed without issue, but the Intel Mac exhibits strange behavior: I've confirmed that while the Canon driver is installed (whether or not the Canon is set up for printing in print settings), the HP 5500dn will print in color from Safari, but only prints in black and white from Adobe Reader. The Canon printer itself has not exhibited any strange behavior. As soon as the Canon driver is uninstalled, the HP 5500dn prints in color from Adobe Reader again. We run a network of Windows PCs, and the 'Mac room' mostly takes care of itself, so we don't have any experienced Mac administrators onsite. The Canon is capable of AppleTalk, but the PS driver seemed easier to work with (and AppleTalk is currently disabled on the Canon). I'm not against using the AppleTalk-compatible drivers, but I would rather use the PS driver if at all possible - I don't want to open up the proverbial can of worms. If someone has any clues or suggestions that would help troubleshoot this problem, I would be grateful. I've already done some googling, but due to the obscure nature of this problem, I haven't been very successful. I don't like to create multiple threads on multiple sites, but I'm posting here due to Chopper3's suggestion on my post on ServerFault (http://serverfault.com/questions/135349/mac-intel-hp-ps-driver-prints-in-bw-from-adobe-reader-after-installing-cannon)

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (a basic LAMP setup with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, there is always something more important on my to-do list). My server does not contain any secret data and no lives depend on it - basically what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems, like DoS attacks or problems that only exist under some arcane settings. I'm in search of a "happy medium", for example a list of known important problems in the default installation of CentOS 5.5 and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem that I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful - i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!

    Read the article

  • Identifying Httpd error log in Fedora 16

    - by Cerin
    How do you find the cause of httpd errors in Fedora 16? The new systemctl command in Fedora 16 seems to horribly obscure any useful logging info.
        [root@host ~]# systemctl start httpd.service
        Job failed. See system logs and 'systemctl status' for details.
        [root@host ~]# systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
              Loaded: loaded (/lib/systemd/system/httpd.service; enabled)
              Active: failed since Thu, 21 Jun 2012 16:26:56 -0400; 1min 23s ago
             Process: 2119 ExecStop=/usr/sbin/httpd $OPTIONS -k stop (code=exited, status=0/SUCCESS)
             Process: 2215 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=1/FAILURE)
            Main PID: 1062 (code=exited, status=0/SUCCESS)
              CGroup: name=systemd:/system/httpd.service
    So the first command fails... and it tells me to run another command... which simply tells me that the command returned an error code. Where's the actual error? Even more frustrating is that nothing seems to have been written to the logs:
        [root@host ~]# ls -lah /var/log/httpd/
        total 8.0K
        drwx------.  2 root root 4.0K Jun 21 16:19 .
        drwxr-xr-x. 21 root root 4.0K Jun 20 16:33 ..
        -rw-r----- 1 root root 0 Jun 21 16:19 modsec_audit.log
        -rw-r----- 1 root root 0 Jun 21 16:19 modsec_debug.log
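    On Fedora 16, systemd forwards a unit's output to syslog rather than anywhere systemctl shows it, so these two commands usually surface the real error (a sketch, assuming a default rsyslog setup):

        grep httpd /var/log/messages | tail -n 20   # stderr from the failed start
        httpd -t                                    # syntax-check the config directly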

    Read the article

  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar from Theiling Online. I use SSH a lot, and on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command-line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go? My first thought was to assign the source to an environment variable like BAR="cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled. I guess I could even make do with an ssh option that writes a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas? P.S. PuTTY-specific solutions, though I doubt any exist, are also fine.
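    One workable pattern, if a two-step logon is acceptable (a sketch; the script location is an example): copy the script up with scp, then start an interactive shell that sources it as its rc file:

        scp ~/bin/bar host:/tmp/bar
        ssh -t host 'bash --rcfile /tmp/bar -i'   # loads /tmp/bar instead of ~/.bashrc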

    Read the article

  • Does my Oracle DBA need root access?

    - by Dr I
    I'm currently in a discussion with my Oracle DBA colleague, who has requested root access on our production servers. I'm not so hot on letting him use root on our production servers. He argues that he needs it to perform some operations, like restarting the server, plus some other obscure arguments. The point is that I don't agree with him, because I've set up an Oracle user/group for him, and a dba group the Oracle user belongs to. Everything is running smoothly, and without any root permissions, for now. I also think that administrative tasks like scheduled server restarts need to be performed by the proper administrator (the systems administrator, in our case) to avoid any kind of issue arising from a misunderstanding of how the pieces of the infrastructure interact. So I need the help of both sysadmins and Oracle DBAs to point me in the right direction. If my colleague really needs these rights, I'll grant them, but I'm basically quite afraid of doing so because of security and integrity concerns. I know that my colleague is really good as an Oracle DBA and knows his work very well, but I also know I've seen very few cases where a piece of software and its admin really need root access. Once again, I'm not looking for pros/cons, but rather advice on how I should deal with this situation.
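    A common middle ground is a scoped sudoers entry instead of full root: the DBA gets exactly the named commands and nothing else. A minimal sketch (edit with visudo; the account name and command list are assumptions for illustration):

        # allow the oracle account to reboot the box, and nothing else
        Cmnd_Alias DBA_CMDS = /sbin/shutdown -r now
        oracle ALL = (root) DBA_CMDS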

    Read the article

  • Xen Bridge only working when IP Assigned

    - by m.sr
    Hey! I just had an obscure (to me, at least) situation. I have a Xen server with bridged networking. Everything has worked fine for months. A while ago I configured a second bridge. Only some DomUs get a channel on this bridge - my Dom0 doesn't need to and shouldn't use it. Five minutes ago, while rebooting the Xen host (because of an unrelated problem with the UPS), I decided to remove the fixed IP from the Dom0 interface that belongs to the second bridge. After the reboot, I noticed that none of the interfaces on the second bridge were available. I couldn't find a problem; everything was just like before the reboot, except that the Dom0 interface had no IP address. After a while I tried giving the Dom0 interface of this bridge an IP again and ... BOOM ... everything is up and running again! WTF? Why is it important to have the interface of a bridge configured in the Dom0? Even when configured 'wrong' (with completely different network settings than the network actually hanging on the bridge), everything works fine ... I don't get it. Could someone please explain? Thanks a lot!
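    The likely mechanism: configuring an address brings the interface administratively up as a side effect, and a bridge only forwards traffic over ports that are up. The address itself is not required; a minimal sketch (interface and bridge names are examples):

        ip link set peth1 up     # bring the Dom0-side bridge port up, no IP needed
        ip link set xenbr1 up    # and the bridge itself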

    Read the article

  • Windows 8 using as a webserver

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6: Apache, mail serving, PHP, MySQL, nothing special. In the past I used Windows XP for this same task, for years, and I was OK. I switched to Linux, and for the last few years it has been such a pain: updates break things, and certain apps only support certain distros unless you compile from source. It prevents me from working on my hobby sites more, because I am always fixing something. With Windows I locked it down, ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, no FTP open, and I run OpenSSH on an obscure port. I am considering switching to Windows 8 (since it is a cheaper license now than Windows 7) and running Apache, hMailServer, PHP and MySQL, just like my CentOS install. My questions: I am not familiar with Windows 8; can the above be done like on XP? Are there any new security restrictions, or does the OS prevent this from happening? The machine is an Athlon 64-bit X2 with 32GB of RAM. Will Windows 8 see all of the RAM? Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped away the Windows 7 recovery partition when I switched to Linux...

    Read the article

  • Why does HP Update at remote system trigger RDP printing at local system?

    - by lcbrevard
    This is obscure. When connected with RDP to another system that has HP Update installed on it, either directly running HP Update or having the notification pop up asking if you want to run HP Update causes the local system to try to print something to a peculiarly chosen local printer.
    Case 1: Desktop Win 7 Ult system RDP-connected to an HP laptop Win 7 Ult system. When HP Update runs on the laptop, a Save As... dialog for the XPS Writer appears on the desktop system. Even if you put in a name, nothing gets generated and the dialog repeats. And repeats. Until you (a) close the RDP connection and (b) clean out the queued entries. If HP Update pops up the request to run the update and you are not at the desk when this happens, there can be dozens of queued requests for this bogus printing.
    NOTE: the XPS Writer is not selected as a default printer on either system.
    Case 2: A (different) HP laptop Win 7 Ult system RDP-connected to an XP Pro "brand X" desktop system, but with HP printer drivers installed. If the request to run HP Update pops up on the XP system, dozens of attempts to print, in this case to a VersaCheck printer driver, are queued. Dismissing the HP request, closing RDP, and cleaning out the queue are required to stop this.
    NOTE: the VersaCheck Writer is not selected as a default printer on either system.
    THE QUESTION: What the heck is going on here? Some kind of scripting or COM activity that is misdirected?

    Read the article

  • Debugging nginx URL rewrite: How do I figure out where the problem is?

    - by pjmorse
    I have a specific URL pattern on a site which needs to be redirected to the HTTPS version. This is a Django site; Nginx checks each URL in memcached, and if it doesn't find a cached version it proxies the request to Apache/mod_python for Django to render the page. The relevant configuration block is:
        rewrite ^/certificate https://mysite.com/certificate ;
        rewrite ^/([a-zA-Z]{2})/certificate https://mysite.com/certificate ;
    ...and it doesn't appear to be working at all. Nginx is:
        $ nginx -V
        nginx version: nginx/0.7.65
        built by gcc 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
        TLS SNI support disabled
        configure arguments: --prefix=/usr/local/nginx --pid-path=/var/run/nginx.pid --with-http_gzip_static_module --with-http_ssl_module
    How can I figure out if the problem is my patterns not matching, or a more obscure configuration problem? (The site is localized into three languages, and the localization is in the URL string, e.g. /US/news/, /DE/about, etc. It tracks localization in the session as well, defaulting to US, so if you just request /news, Django will rewrite it to /US/news unless the user has a cookie indicating they're using a different localization. Django handles this, though, not Nginx.)
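    Nginx can narrate its rewrite decisions, which answers exactly this question. A minimal sketch for the relevant server block (the log path is an example):

        # log each rewrite evaluation to the error log at notice level
        error_log /var/log/nginx/error.log notice;
        rewrite_log on;

    With that enabled, every request logs whether each pattern matched and what the URI became, so a non-matching regex is distinguishable from a downstream proxy problem.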

    Read the article
