Search Results

Search found 4493 results on 180 pages for 'operator keyword'.


  • VoIP setup for one external PSTN line

    - by Jcl
    I'm completely new to VoIP and the like, and I'm trying to work out what the best setup for this would be. I need 4 wireless extensions (maybe more in the future, but at most 5 or 6) connected to 1 PSTN line, and maybe 2 lines in the future. I've been trying to gather information about the gear needed, but everything I find seems over the top (and extremely expensive). The main problem is that our premises can't get a decent internet connection, so an external VoIP "virtual PBX" is not an option. Even though the organization is small, the phone is critical to it. I currently have an analog DECT/GAP PBX which does what I need, but the PBX itself is very poor and the call quality is horrible, which is why I want to replace it. The requirements are:
    - 4 wireless terminals (routing cable is not an option), all of them ringing on incoming PSTN calls.
    - Internal calls between the 4 separate offices, and the ability to pass calls between terminals.
    - All 4 terminals able to access the external PSTN line without dialing any special codes.
    - Very important: terminals must be able to issue commands on the PSTN line to the external operator in the form *nn*nnnnnnnn#. I don't know whether this could turn out to be a problem, but I've had trouble with analog PBXes that treat any * as a PBX command and won't pass it through to the external line.
    - Not essential, but nice to have: music on hold.
    Could anyone recommend such a setup? I need to do this on an EXTREMELY LIMITED budget (that is: there is no hard limit, but it should be as close to zero as possible). I have enough spare, powerful computers and a 300 Mbps wireless network that works just fine, so those don't need to be in the budget. I don't really know if this is the best place to ask, but it's the most relevant Stack Exchange site I've found for this subject.

    Read the article

  • Google chrome not accepting any security certificates

    - by Jerry
    I've recently developed a problem with Google Chrome that's really annoying. I'm using Firefox at the moment with no problems whatsoever, and it's the same with IE, so it's safe to say the problem is specific to Chrome. The problem is that it's not accepting security certificates from certain sites. I suppose the best place to start is Google itself: I can't search. The Google search page loads, but when I type a search term into the search box and hit 'search' I get the message: "You attempted to reach www.google.com, but the server presented an invalid certificate. You cannot proceed because the website operator has requested heightened security for this domain." No matter what the search term is, this is the result. The same message appears when I try to log in to Facebook. YouTube works, as do many other sites that I know present security certificates, so I'm baffled. I've searched, and other people have had similar issues, but I can't find a solution anywhere. The most common suggestion is to "check your system time", but I can safely say that it's not my system time. If anyone knows what is going on, I'd very much appreciate being informed. It's not super urgent, as I can use Firefox to access the places Chrome won't, but it IS super annoying because I can usually sort out issues like this in no time.

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead.

    Configuration. I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5
        # push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted. That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it fires even if a file does exist. If I remove the rule, I can request normal files just fine, but with the rule in place these requests get directed to dispatch.php.

    Rewrite log trace. Here's what I see in rewrite.log:

        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So it looks to me like REQUEST_FILENAME is being presented as a path from the document root rather than from the filesystem root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
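
    One common workaround, sketched below: in per-server (VirtualHost) context the rewrite rules run before the URL has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URL path; prefixing it with %{DOCUMENT_ROOT} makes the file tests check a real path. A minimal sketch, assuming the rule stays inside the <VirtualHost> block:

        RewriteEngine on
        # in VirtualHost context REQUEST_FILENAME is not yet mapped to a file,
        # so build the filesystem path explicitly before testing it
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Moving the rules into a .htaccess file (or <Directory> block), where they run after URL-to-filename translation, is another common way around this.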

    Read the article

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to have the responsibility of creating user home folders instead of our 2nd-level support. The help desk global group is already an Account Operator, so in Active Directory they are able to edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home shares. If they open AD Users and Computers, open the properties for a user, enter \home\users\%username% in the profile tab and then click OK, they get the following error: "The \home\users\username home folder was not created because you do not have create access on the server. The user account has been updated with the new home folder value but you must create the directory manually after obtaining the required access right." Right now I have given the Helpdesk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder show only Administrators with Full Control, and no permissions for the configured user account. It sure sounds like I'd have to make the helpdesk local admins on the file servers, which is what I'd like to avoid, especially since the file servers are a large cluster hosting much more than the entire org's home share structure.

    Read the article

  • Restore passwd for root on a server

    - by s.mihai
    Hello, I have a DVR server with embedded Linux. It has some telnet functionality, but I don't have the password for it (the Chinese manufacturer refuses to give me the password). I did get an upgrade folder from them and found a passwd file inside, so I assume that when I upgrade the firmware the password in that file will be used. Now I am trying to modify the file so that I can insert a password I already know. The problem is that I don't know how to create the password hash. From what I can tell, the password hash is $1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961. The file is named rom.ko, and I found this command in a script file, so I assume this is the right way:

        telnetd /mnt/yaffs/web/boa -c /mnt/yaffs/web & /bin/cp -f /mnt/yaffs/rom.ko /etc/shadow

    Can you help me reconstruct a password that I already know? Tell me how, or make one for me :) ? The passwd file:

        root:$1$1/lfbDKX$Hmd.FqzB8IZEohPesYi961:0:0:99999:7:-1:-1:33637592
        bin::10897:0:99999:7:::
        daemon::10897:0:99999:7:::
        adm::10897:0:99999:7:::
        lp::10897:0:99999:7:::
        sync::10897:0:99999:7:::
        shutdown::10897:0:99999:7:::
        halt::10897:0:99999:7:::
        mail::10897:0:99999:7:::
        news::10897:0:99999:7:::
        uucp::10897:0:99999:7:::
        operator::10897:0:99999:7:::
        games::10897:0:99999:7:::
        gopher::10897:0:99999:7:::
        ftp::10897:0:99999:7:::
        nobody::10897:0:99999:7:::
        next::11702:0:99999:7:::
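
    One pointer, offered as a sketch rather than a procedure verified on this firmware: the hash is in MD5-crypt format ($1$, then the salt 1/lfbDKX, then the digest), and a compatible hash for a password you know can be generated with OpenSSL's passwd command. The password below is a placeholder; the salt is just reused from the existing entry, and any salt would do.

        # generate an MD5-crypt ($1$...) hash for a password you already know
        openssl passwd -1 -salt 1/lfbDKX 'MyNewPassword'
        # paste the output into the second field of the root line in rom.ko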

    Read the article

  • Conditionally Rewrite Email Headers (From & Reply-To) Exchange 2010

    - by NorthVandea
    I have a client who maintains Company A (with email addresses %username%@companyA.com) and also owns the domain companyB.com, although there is no "infrastructure" (no Exchange server) set up specifically for companyB.com. My client needs end users within its company (companyA.com) to be able to add a specific word or phrase to the subject (or body) of an outgoing email (they are only concerned with outgoing; incoming is a non-issue in this case) that triggers the Exchange 2010 servers to rewrite the From and Reply-To headers from [email protected] to [email protected], but this rewrite should ONLY occur if the user places the keyword/phrase in the subject (or body). I have attempted using transport rules and the New-AddressRewriteEntry cmdlet, however each seems to have a limitation: from what I can tell, transport rules cannot rewrite the From/Reply-To fields, and New-AddressRewriteEntry cannot be conditionally triggered based on message content. So to recap: when a user sends email outside the organization, From and Reply-To remain [email protected]; when a user sends email outside the organization WITH "KeyWord" in the subject or body, From and Reply-To change to [email protected] automatically. Does anyone know how this could be done WITHOUT coding a new mail agent? I don't have the programming knowledge to write a custom agent. I can use any function of the Exchange Management Shell or Console. Alternatively, if anyone knows of a simple add-on program that could do this, that would be good too. Any help would be greatly appreciated. Thank you!

    Read the article

  • Mac OS X duplex printing problem: one- vs. multi-paged documents

    - by Christian Lindig
    I like to print on pre-printed stationery using Preview.app and a duplex-capable HP Color LaserJet 4700 (PostScript) printer. The print dialog handles one- and two-page documents differently: the paper needs to be placed differently into the tray for a one-page document than for a two-page document. This is not obvious when printing on plain paper, but becomes obvious when the front and reverse sides of the sheets are marked; otherwise the first page would end up on the reverse side of the first sheet. I believe the problem is caused by the printer driver setting duplex printing to false (using the PostScript setpagedevice operator) when emitting a single-page document, versus keeping it set to true when emitting multi-page documents, despite duplex printing always being specified in the print dialog. When printing a single-page document, duplex=true and duplex=false seem to make a difference as to which side of a sheet gets printed on. It would also be helpful if others could confirm that the problem actually exists; I suspect it is not limited to specific printers. I'm on OS X 10.6 and I checked two different HP printers.

    Read the article

  • RAM ok in memtest86+ == RAM ok after wake from sleep?

    - by twon33
    I have a Windows XP (32-bit) system that appears stable in normal operation, but was repeatably freezing (hard lock, no BSOD) a minute or so after waking from S3 sleep. Some Googling against the motherboard model and memory manufacturer suggested that I might need to bump up the memory voltage, so I tried it and it now seems to resume without freezing. However, I don't really trust it and I'd like to validate that it's actually stable, especially after resuming from sleep. I've run Prime95 for a few hours with no issues, and am planning an overnight run of Memtest86+, which I expect to pass because the system has been solid whenever I've run it without putting it to sleep. Does something like Memtest86+ exist that actually invokes S3 sleep during operation? Clearly it would need an operator to wake the computer to resume testing, but I don't think I've ever heard of a memory test tool that can do this. Alternately, am I wasting my time? Should a clean bill of health from Memtest86+ indicate stability regardless of whether sleep is involved, or, conversely, does my original problem indicate that Memtest86+ would have failed eventually with the stock voltage if I'd run it, sleep or not?

    Read the article

  • Cisco IOS BVI ACL: Only allow established UDP

    - by George Bailey
    Related: Cisco IOS ACL: Don't permit incoming connections just because they are from port 80. I know we can use the established keyword for TCP, but what can we do for UDP (short of replacing a bridge or BVI with a NAT)?

    Answer. I found out what "UDP has no connection" means. DNS uses UDP, for example: named (the DNS server) is listening on port 53; nslookup (the DNS client) starts listening on some random port, sends a packet to port 53 of the server, and notes the source port in that packet. nslookup will retry 3 times if necessary. The packets are also so small that it does not have to worry about them arriving out of order. If nslookup receives a response on that port that comes from the server's IP and port, it stops listening. If the server sends two responses (for example a response and a response to the retry), it does not care whether either of them arrives, because retrying is the client's job. In fact, unless an ICMP type 3 code 3 (port unreachable) packet gets through, the server would not even know about a failure. This is different from TCP, where you get "connection closed" or "timed out" errors. DNS allows for an easy retry from the client as well as small packets, so UDP is an excellent choice because it is more efficient.

    In UDP you would see:
    - nslookup sends the request
    - named sends the answer

    In TCP you would see:
    - nslookup's machine sends SYN
    - named's machine sends SYN-ACK
    - nslookup's machine sends ACK and the request
    - named's machine sends the response

    That is much more than is necessary for a tiny DNS packet.
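
    For the original question, one approach worth sketching is a reflexive ACL, which gives UDP a rough equivalent of "established": outbound UDP creates a temporary entry that return traffic is evaluated against. The ACL names below are placeholders, and whether reflexive ACLs can be applied to a BVI depends on the platform and on IRB being in use, so treat this strictly as a starting point.

        ip access-list extended OUTBOUND
         permit udp any any reflect UDP-REPLIES
         permit ip any any
        ip access-list extended INBOUND
         evaluate UDP-REPLIES
         deny udp any any
         permit ip any any
        ! apply with "ip access-group OUTBOUND out" / "ip access-group INBOUND in"
        ! on whichever interface faces the untrusted side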

    Read the article

  • FreeBSD rc.d script doesn't work when starting up

    - by kastermester
    I am trying to write an rc.d script to start fastcgi-mono-server4 on FreeBSD when the computer boots, in order to run it with nginx. The script works when I execute it while logged in on the server, but during boot I get the following message:

        eval: -applications=192.168.50.133:/:/usr/local/www/nginx: not found

    The script looks as follows:

        #!/bin/sh
        # PROVIDE: monofcgid
        # REQUIRE: LOGIN nginx
        # KEYWORD: shutdown

        . /etc/rc.subr

        name="monofcgid"
        rcvar="monofcgid_enable"
        stop_cmd="${name}_stop"
        start_cmd="${name}_start"
        start_precmd="${name}_prestart"
        start_postcmd="${name}_poststart"
        stop_postcmd="${name}_poststop"
        command=$(which fastcgi-mono-server4)
        apps="192.168.50.133:/:/usr/local/www/nginx"
        pidfile="/var/run/${name}.pid"

        monofcgid_prestart() {
            if [ -f $pidfile ]; then
                echo "monofcgid is already running."
                exit 0
            fi
        }

        monofcgid_start() {
            echo "Starting monofcgid."
            ${command} -applications=${apps} -socket=tcp:127.0.0.1:9000 &
        }

        monofcgid_poststart() {
            MONOSERVER_PID=$(ps ax | grep mono/4.0/fastcgi-m | grep -v grep | awk '{print $1}')
            if [ -f $pidfile ]; then
                rm $pidfile
            fi
            if [ -n $MONOSERVER_PID ]; then
                echo $MONOSERVER_PID > $pidfile
            fi
        }

        monofcgid_stop() {
            if [ -f $pidfile ]; then
                echo "Stopping monofcgid."
                kill $(cat $pidfile)
                echo "Stopped monofcgid."
            else
                echo "monofcgid is not running."
                exit 0
            fi
        }

        monofcgid_poststop() {
            rm $pidfile
        }

        load_rc_config $name
        run_rc_command "$1"

    In case it is not already super clear, I am fairly new to both FreeBSD and sh scripts, so I'm prepared for some obvious little detail I overlooked. I would very much like to know exactly why this is failing and how to solve it, but if anyone has a better way of accomplishing this, I am open to ideas.
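
    A guess at the cause rather than a verified diagnosis: rc.d scripts run at boot with a minimal PATH, so command=$(which fastcgi-mono-server4) can come back empty, and the shell then tries to execute "-applications=..." itself, which matches the error text. A sketch of the usual fix (the install path below is an assumption; adjust it to wherever the Mono FastCGI server actually lives):

        # hard-code the full path instead of relying on which(1) during boot
        command="/usr/local/bin/fastcgi-mono-server4"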

    Read the article

  • Mod_rewrite to eliminate query strings

    - by Greg Frommer
    Hi everyone, I have been working on this for a while but I'm not finding exactly what I am looking for. I am writing a webapp to let my users create and publish pieces of HTML content in a domain and URL folder structure of their choosing. All of the content and the requested URL structures are stored in a database. I have all of the code in my index.php (in the root folder) to access the database content, and based on the server name (and hopefully the folder structure) it will pick out the proper content from the DB and display it to the end user's browser. So my situation looks like this: www.test.com/index.php?id=123234345 will display the proper page, but I want my users to be able to define a unique "page name" instead of using the numeric index (and I also want to hide the /index.php part). So what I would like the end user to see is www.test.com/arbitrary-unique-keyword/keyword2/keyword3, which should invoke the index.php page in the root folder. Then I will use the PHP $_SERVER['PATH_INFO'] variable to match the requested folder structure against the proper content in my database and display that. All the material I have found so far expects me to hard-code parts of the folder structure into the rules, but I think I want something simpler. So the question in a nutshell: how do I use mod_rewrite to pass all "non-existent" folder paths through to a main index.php residing in the root folder? (For all paths that DO exist, like calls to images, I want those to succeed and not be directed to index.php, obviously.) Thanks everyone, please let me know if I can clear anything up.
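
    A minimal sketch of the standard front-controller rewrite, assuming it lives in an .htaccess file in the document root: existing files and directories are served untouched, and everything else is handed to index.php.

        RewriteEngine On
        # leave real files and directories (images, css, ...) alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # append the original path so $_SERVER['PATH_INFO'] is populated
        RewriteRule ^(.*)$ /index.php/$1 [QSA,L]

    If PATH_INFO turns out to be unavailable in the setup, rewriting to /index.php alone and reading $_SERVER['REQUEST_URI'] inside the script works just as well.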

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website, www.voanews.com, using wget. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the search result list (and of course the pages they link to)?
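
    One thing that will definitely interfere here, shown below: the URL is unquoted, so the shell treats each & as a command separator and wget only ever sees the URL up to the first &, while the #fragment is never sent to the server at all. A sketch with the URL quoted (keyword and dates are just the example values from above):

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             "http://www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt"

    Even with the query string intact, whether the result links get followed depends on the site itself; if the result list is assembled by JavaScript or points to other hosts, wget's recursion will still miss it.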

    Read the article

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has appeared in parallel with installing nginx, which serves only the static files of my site; I'm not sure if that is only a coincidence. Furthermore, my setup uses redirects (in some cases, double redirects) to point the user to a different virtual folder, so the forum appears to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what the underlying issue might be. My server? The person's ISP? She has tested both at home and at work, with similar results. The error page reads:

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

            Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

    Read the article

  • htaccess: multiple redirections depending on domain name

    - by Marcin Kmiec
    I have a server with a few domains and two webpages, and I can't figure out how to do the following:

        A.com      -> root\
        www.A.com  -> root\
        B.com      -> root\
        www.B.com  -> root\
        C.com      -> root\folder1\
        www.C.com  -> root\folder1\

    By the way, what is the logical 'and' operator used in .htaccess? I found that 'or' is [OR], but [AND] doesn't seem to work. And what language is .htaccess written in? :)

    UPDATE: I made a mistake in the question. Here's what I really want to do. DNS is set for the domain A.com to point to the root folder of the server. Now I would like to set the following redirections: any domain other than C.com and D.com redirects (301) to www.A.com (A.com points to the root folder of the server anyway, as set in DNS), and www.C.com points to the folder 'folder1' on the server. Can that be set in .htaccess? Also, C.com, www.D.com and D.com should redirect to www.C.com.
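
    Two notes and a sketch. Consecutive RewriteCond lines are ANDed by default, which is why there is no [AND] flag, and .htaccess is not a language as such, just a per-directory container for Apache configuration directives. A hedged .htaccess sketch for the updated requirements, with the domain names as placeholders:

        RewriteEngine On

        # www.C.com is served from folder1 (internal rewrite, no redirect)
        RewriteCond %{HTTP_HOST} ^www\.c\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/folder1/
        RewriteRule ^(.*)$ /folder1/$1 [L]

        # C.com, D.com and www.D.com redirect to www.C.com
        RewriteCond %{HTTP_HOST} ^(c\.com|d\.com|www\.d\.com)$ [NC]
        RewriteRule ^(.*)$ http://www.c.com/$1 [R=301,L]

        # everything else that isn't already www.A.com goes to www.A.com
        RewriteCond %{HTTP_HOST} !^www\.a\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^(www\.)?c\.com$ [NC]
        RewriteCond %{HTTP_HOST} !^(www\.)?d\.com$ [NC]
        RewriteRule ^(.*)$ http://www.a.com/$1 [R=301,L]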

    Read the article

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error "svn: access to '/repos/!svn/vcc/default' forbidden"? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running on Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files. But when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtual host for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <Location /repos>
                Order allow,deny
                Allow from all
                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On
                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab
                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid
                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error?

    EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that some Apache security module might be blocking access. How do I change this?
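
    That reading looks right: the core rule set shipped with mod_security only allows the common HTTP methods, and the WebDAV/DeltaV methods Subversion uses for commits (MKACTIVITY, CHECKOUT, MERGE, and so on) trip the "Method is not allowed by policy" rule. A hedged sketch of the bluntest fix, switching the rule engine off just for the repository location (a narrower alternative is to add the DAV methods to the allowed-methods list in the CRS setup):

        <Location /repos>
            # Subversion's DAV methods trip the default ModSecurity method policy;
            # disable the engine (or use SecRuleEngine DetectionOnly) for this path only
            SecRuleEngine Off
        </Location>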

    Read the article

  • Mac Mavericks, ngircd localhost works, private IP doesn't

    - by user221945
    I have configured ngircd to listen on my private IP address, but it doesn't; localhost works fine. Configuration test:

        ngIRCd 21-IDENT+IPv6+IRCPLUS+SSL+SYSLOG+TCPWRAP+ZLIB-x86_64/apple/darwin13.2.0
        Copyright (c)2001-2013 Alexander Barton () and Contributors.
        Homepage: http://ngircd.barton.de/
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        Reading configuration from "/opt/local/etc/ngircd.conf" ...
        OK, press enter to see a dump of your server configuration ...

        [GLOBAL]
          Name = irc.bellbookandpistol.com
          AdminInfo1 = Jaedreth
          AdminInfo2 = San Diego County CA, US
          AdminEMail = [email protected]
          HelpFile = /opt/local/share/doc/ngircd/Commands.txt
          Info = Server Info Text
          Listen = 10.0.1.5,127.0.0.1
          MotdFile =
          MotdPhrase = "Welcome to irc.bellbookandpistol.com"
          Password =
          PidFile =
          Ports = 6667
          ServerGID = wheel
          ServerUID = root
        [LIMITS]
          ConnectRetry = 60
          IdleTimeout = 0
          MaxConnections = 0
          MaxConnectionsIP = 6
          MaxJoins = -1
          MaxNickLength = 9
          MaxListSize = 0
          PingTimeout = 120
          PongTimeout = 20
        [OPTIONS]
          AllowedChannelTypes = #&+
          AllowRemoteOper = no
          ChrootDir =
          CloakHost =
          CloakHostModeX =
          CloakHostSalt = kBih5mu\kVI!DC6eifT(hd4m/0'zb/=:
          CloakUserToNick = no
          ConnectIPv4 = yes
          ConnectIPv6 = no
          DefaultUserModes =
          DNS = yes
          IncludeDir = /opt/local/etc/ngircd.conf.d
          MorePrivacy = no
          NoticeAuth = no
          OperCanUseMode = no
          OperChanPAutoOp = yes
          OperServerMode = no
          RequireAuthPing = no
          ScrubCTCP = no
          SyslogFacility = local5
          WebircPassword =
        [SSL]
          CertFile =
          CipherList = HIGH:!aNULL:@STRENGTH
          DHFile =
          KeyFile =
          KeyFilePassword =
          Ports =
        [OPERATOR]
          Name = [REDACTED]
          Password = [REDACTED]
          Mask =
        [CHANNEL]
          Name = #BBP
          Modes = tnk
          Key =
          MaxUsers = 0
          Topic = Welcome to the Bell, Book and Pistol IRC Server!
          KeyFile =

    As you can see, it should be listening on 10.0.1.5, but it isn't. After turning on Apache manually, port 80 works on 10.0.1.5, but port 6667 doesn't; it only works on localhost. Is there some terminal command I could use or some config file I could edit to get this to work?

    Read the article

  • ksh Auto-Completion PuTTY Configuration

    - by Nitrodist
    I'm having a bit of a problem configuring my PuTTY client to work with the auto-completion feature in the ksh shell. I do a listing on the root with the directories /home and /homeroot and it returns the directories in a list just fine, but I can't select one by hitting X = (where X is the number).

        /home/nitrodist> ls /h   # hits Esc + =
        1) home/
        2) homeroot/

        # hits 2 + = for the 'homeroot' dir
        1) home/
        2) homeroot/

        # hits just the '=' key
        1) home/
        2) homeroot/

    Any ideas? I've su -'d to another user who can actually do it in their PuTTY session, and I can't do it there either, which makes me think it's a PuTTY configuration issue. This is running in a ksh93 shell on HP-UX, if that makes any difference. Here's my ksh config:

        /home/campbelm> set -o
        Current option settings
        allexport        off
        bgnice           on
        emacs            off
        errexit          off
        gmacs            off
        ignoreeof        off
        interactive      on
        keyword          off
        markdirs         off
        monitor          on
        noexec           off
        noclobber        off
        noglob           off
        nolog            off
        notify           off
        nounset          off
        privileged       off
        restricted       off
        trackall         off
        verbose          off
        vi               on
        viraw            on
        xtrace           off
        /home/campbelm>

    Read the article

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it could plausibly be demonstrated to third parties or a court that the application processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible, as I could have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data, but rather with some simple user-generated content, where some malicious users have in the past tried to blame the operator when things escalated and went to court. My question: is there some kind of signing software for Linux that signs each element of a log in such a way that it can easily be shown that no element was added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under the GPL or a similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous elements' signatures within each following element, but I would prefer to use a program where you do not have to argue about the validity of your own code. Note to mods: I am not asking this question on Stack Overflow because I am not looking to write my own code, for the reasons described above. I think this question fits Server Fault better than Super User, as server-side logging software is discussed here rather than on Super User.
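
    To make the idea in the last paragraph concrete, here is a small sketch (shell, with sha256sum and gpg assumed to be available; file names are placeholders) of the hash-chaining approach: each stored entry carries the hash of the previous entry, so any later insertion or modification breaks every hash that follows, and signing the current chain head anchors the whole history.

        #!/bin/sh
        # append stdin lines to a hash-chained log: prev-hash, timestamp, message
        chain=/var/log/app.chained
        prev=$( [ -f "$chain" ] && tail -n 1 "$chain" | sha256sum | cut -d' ' -f1 || echo GENESIS )
        while IFS= read -r line; do
            entry="$prev $(date -u +%Y-%m-%dT%H:%M:%SZ) $line"
            printf '%s\n' "$entry" >> "$chain"
            prev=$(printf '%s\n' "$entry" | sha256sum | cut -d' ' -f1)
        done
        # periodically sign the head so the chain cannot be rewritten wholesale
        printf '%s\n' "$prev" > "$chain.head"
        gpg --armor --detach-sign --yes "$chain.head"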

    Read the article

  • Powershell script to delete secondary SMTP addresses of Exchange 2010 Mail Contacts

    - by Zero Subnet
    I have a few thousand Exchange 2010 mail contacts which get erroneously assigned internal SMTP addresses by the default recipient policy. I'm trying to use the following command to delete these addresses (keeping the primary SMTP address) and to disable the automatic update from the recipient policy so the SMTP addresses don't get recreated again:

        Get-MailContact -OrganizationalUnit "domain.local/OU" -Filter {EmailAddresses -like *@domain.local -and name -notlike "ExchangeUM*"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $_; $email = $contact.emailaddresses; $email | foreach {if ($_.smtpaddress -like *@domain.local) {$address = $_.smtpaddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.identity -EmailAddresses @{Remove=$address}; $contact | set-mailcontact -emailaddresspolicyenabled $false} }}

    I'm getting the following error, though:

        You must provide a value expression on the right-hand side of the '-like' operator.
        At line:1 char:312
        + Get-MailContact -OrganizationalUnit "domain.local/testou" -Filter {EmailAddresses -like "@domain.local" -and name -notlike "ExchangeUM"} -ResultSize unlimited -IgnoreDefaultScope | foreach {$contact = $; $email = $contact.emailaddresses; $email | foreach {if ($.smtpaddress -like <<<< *@domain.local) {$address = $_.smt paddress; write-host "Removing address" $address "from Contact" $contact.name; Set-Mailcontact -Identity $contact.identity -EmailAddresses @{Remove=$address}; $contact }}
            + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
            + FullyQualifiedErrorId : ExpectedValueExpression

    Any help on how to fix this?
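
    The parser error is about the right-hand side of -like: the wildcard pattern *@domain.local is unquoted, both inside the -Filter block and in the inner if, so PowerShell never sees a value expression there. Quoting the patterns clears that particular error. A hedged sketch of a corrected version (the IsPrimaryAddress guard is an assumption added to make sure the primary SMTP address is kept; test against a small OU first):

        Get-MailContact -OrganizationalUnit "domain.local/OU" -ResultSize Unlimited -IgnoreDefaultScope `
            -Filter { EmailAddresses -like "*@domain.local" -and Name -notlike "ExchangeUM*" } |
          ForEach-Object {
            $contact = $_
            foreach ($address in $contact.EmailAddresses) {
              if ($address.SmtpAddress -like "*@domain.local" -and -not $address.IsPrimaryAddress) {
                Write-Host "Removing address" $address.SmtpAddress "from contact" $contact.Name
                Set-MailContact -Identity $contact.Identity -EmailAddresses @{Remove = $address.SmtpAddress}
              }
            }
            # stop the e-mail address policy from re-adding the internal address afterwards
            Set-MailContact -Identity $contact.Identity -EmailAddressPolicyEnabled $false
          }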

    Read the article

  • Apache displays error page half way through PHP page execution

    - by Shep
    I've just installed Zend Server Community Edition on a Windows Server 2003 box, but there's a problem with the display of a lot of our PHP pages. The code was previously running under the same version of PHP (5.3) on IIS without any issues. By the looks of things, Apache (installed as part of Zend Server) is erroring out during the rendering of the page when it comes across something it doesn't like in the PHP. Going through the code, I've been able to get past some of the problems by removing the error suppression operator (@) from function calls and by changing the format of some includes, but I can't do this for the whole site! Oddly, the error code is reported as "200 OK". The snippet below shows how the Apache error HTML interrupts the regular HTML of the page:

        <p>Ma quande lingues coalesce, li grammatica del resultant lingue es plu simplic e regulari quam ti del coalescent lingues.</<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>200 OK</title>
        </head><body>
        <h1>OK</h1>
        <p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p>
        <p>Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.</p>
        <p>More information about this error may be available in the server error log

    The Apache error log doesn't offer any explanation for this, and I've exhausted my Googling skills, so any help would be greatly appreciated. Thanks.

    Read the article

  • Does SNI represent a privacy concern for my website visitors?

    - by pagliuca
    Firstly, I'm sorry for my bad English; I'm still learning it. Here it goes: when I host a single website per IP address, I can use "pure" SSL (without SNI), and the key exchange occurs before the user even tells me the hostname and path he wants to retrieve. After the key exchange, all data can be exchanged securely. That said, if anybody happens to be sniffing the network, no confidential information is leaked* (see footnote). On the other hand, if I host multiple websites per IP address, I will probably use SNI, and therefore my website visitor needs to tell me the target hostname before I can provide him with the right certificate. In this case, someone sniffing his network can track all the website domains he is accessing. Are there any errors in my assumptions? If not, doesn't this represent a privacy concern, assuming the user is also using encrypted DNS? Footnote: I also realize that a sniffer could do a reverse lookup on the IP address and find out which websites were visited, but the hostname travelling in plaintext through the network cables seems to make keyword-based domain blocking easier for censorship authorities.

    Read the article

  • Is there a local yubnub.org replacement?

    - by Justin Keogh
    I use yubnub very often. Every Google search I do by just hitting (in Firefox) "ctrl-t", then typing (now in the URL bar) "y g searchterms" and pressing Enter. "y" in this case is a search keyword I added by right-clicking in the yubnub.org command box. It's really fast, and I do it automatically now, but the problem is that I'm now stuck with whatever the yubnub command I'm so used to does; I can't change it. For example, what if I don't want to use Google, but still want to use the "g" command to search? Or say I want to use Google's https search, etc. I suppose this would be fairly trivial to implement locally, but I would hate to reinvent the code if it's already written and in use. Ideas? A local yubnub.org replacement would also save me the DNS lookup and the traffic to yubnub.org. I don't expect to be able to import all commands from yubnub.org, but that would be cool if possible.

    Read the article

  • Is it better to always copy and delete, rather than move?

    - by nbolton
    Generally speaking, I find myself panicking when I realise that if I cancel a file move, it could leave the target or source incomplete. This question applies to Windows and Unix-based platforms. I can never remember exactly how the move command works in either case. For example, if you're moving a directory, does it copy the entire directory and then delete it afterwards, or does it copy then delete each file individually? I always realise after typing something like "mv verybigdir dest" that I really should have typed "cp -R verybigdir dest && rm -r verybigdir" (where the && operator only moves on to the next command if the first succeeded) -- or is this pointless? What happens exactly when I press Ctrl+C half way through a move? Likewise, what exactly happens on Windows when I press the cancel button? I can't count the number of times I've moved something (the last time was when using svn) and ended up with two directories with split contents. I guess the answer is difficult, because not all applications move groups of files in the same way.
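
    For what it's worth, the behaviour hinges on whether the move crosses a filesystem boundary: within one filesystem, mv is a rename and nothing is copied, so there is essentially no window in which to interrupt it; across filesystems it falls back to copy-then-delete, which is where a cancelled move can leave both sides half-done. A small sketch of the manual equivalent (paths are placeholders):

        # same filesystem: effectively a rename(2), near-instant, nothing is copied
        mv verybigdir /same-fs/dest

        # across filesystems: copy first, delete the source only if the copy succeeded
        cp -a verybigdir /other-fs/dest && rm -rf verybigdir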

    Read the article

  • Can Haproxy deny a request by IP if its stick-table is full?

    - by bantic
    In my haproxy configs I'm setting a stick-table of size 5 that stores every incoming IP address (for 1 minute), and it is set as nopurge so new entries won't get stored in the table. What I'd like to have happen is that they would get denied, but that isn't happening. The stick-table line is:

        stick-table type ip size 5 expire 1m nopurge store gpc0

    And the whole configs are:

        global
            maxconn 30000
            ulimit-n 65536
            log 127.0.0.1 local0
            log 127.0.0.1 local1 debug
            stats socket /var/run/haproxy.stat mode 600 level operator

        defaults
            mode http
            timeout connect 5000ms
            timeout client 50000ms
            timeout server 50000ms

        backend fragile_backend
            tcp-request content track-sc2 src
            stick-table type ip size 5 expire 1m nopurge store gpc0
            server fragile_backend1 A.B.C.D:80

        frontend http_proxy
            bind *:80
            mode http
            option forwardfor
            default_backend fragile_backend

    I have confirmed (connecting to haproxy's stats using socat readline /var/run/haproxy.stat) that the stick-table fills up with 5 IP addresses, but then every request after that from a new IP just goes straight through -- it isn't added to the stick-table, nothing is removed from the stick-table, and the request is not denied. What I'd like to do is deny the request if the stick-table is full. Is this possible? I'm using haproxy 1.5.

    Read the article

  • Allow members of a group to be unlocked by a specific account on AD

    - by JohnLBevan
    Background: I'm creating a service to allow support staff to enable their firecall accounts out of hours (i.e. if there's an issue in the night and we can't get hold of someone with admin rights, another member of the support team can enable their personal firecall account in AD, which has previously been set up with admin rights). The service also logs a reason for the change, alerts key people, and does a bunch of other things to ensure that this change of access is audited and that these temporary admin rights are used properly. To do this, the service account my service runs under needs permission to enable users in Active Directory. Ideally I'd like to lock this down so that the service account can only enable/disable users in a particular AD security group.

    Question: How do you grant an account access to enable/disable users who are members of a particular security group in AD?

    Backup question: If it's not possible to do this by security group, is there a suitable alternative? I.e. could it be done by OU, or would it be best to write a script that loops through all members of the security group and updates the permissions on the objects (firecall accounts) themselves? Thanks in advance.

    Additional tags (I don't yet have access to create new tags here, so listing them below to help with keyword searches until this can be tagged and this bit edited/removed): DSACLS, DSACLS.EXE, FIRECALL, ACCOUNT, SECURITY-GROUP
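
    A note and a sketch, with the caveat that the exact attribute set depends on what "enable" has to cover: AD permissions are granted to a group or account but scoped to containers and objects, not to the membership of a security group, so the practical route is usually the backup option: keep the firecall accounts in a dedicated OU and delegate write access to the relevant attributes there. With dsacls (OU, domain and account names below are placeholders) that might look like:

        rem let the service account flip the enable/disable flag on user objects in the Firecall OU
        dsacls "OU=Firecall,DC=example,DC=com" /I:S /G "EXAMPLE\svc-firecall:WP;userAccountControl;user"

        rem if the accounts also need to be unlocked, write access to lockoutTime is needed too
        dsacls "OU=Firecall,DC=example,DC=com" /I:S /G "EXAMPLE\svc-firecall:WP;lockoutTime;user"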

    Read the article
