Search Results

Search found 22641 results on 906 pages for 'case'.


  • How to simulate Apache [END] flag on a redirect?

    - by Javier Méndez
    For business-specific reasons I created the following rewrite rule for Apache 2.2.22 (mod_rewrite):

      RewriteRule /site/(\d+)/([^/]+)\.html /site/$2/$1 [R=301,L]

    Given a URL like http://www.mydomain.com/site/0999/document.html, it is translated to http://www.mydomain.com/site/document/0999.html. That's the expected scenario. However, there are documents whose names are only numbers. So consider the following case: http://www.mydomain.com/site/0055/0666.html gets translated to http://www.mydomain.com/site/0666/0055.html, which also matches my rewrite rule pattern, so I end up with "The web page resulted in too many redirects" errors from browsers. I have researched for a long time and haven't found good solutions. Things I tried:
    - Use the [END] flag. Unfortunately it is not available on my Apache version, nor does it work with redirects.
    - Use %{ENV:REDIRECT_STATUS} in a RewriteCond clause to end the rewrite process (L). For some reason %{ENV:REDIRECT_STATUS} is empty every time I tried.
    - Add a response header with the Header clause if my rule matches and then check for that header (see here for details). It seems that a) REDIRECT_addHeader is empty, and b) headers can't be set on the 301 response explicitly.
    There is another alternative: I could add a query parameter to the redirect URL which indicates it comes from a redirect, but I don't like that solution as it seems too hacky. Is there a way to do exactly what the [END] flag does but in older Apache versions, such as mine (2.2.22)? Thanks!
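    For what it's worth, the [END] flag only exists in the 2.3.9+/2.4 line, and its effect within a single request is commonly approximated on 2.2 with an environment-variable guard. A hedged sketch (note the caveat: this only short-circuits further internal rewrite passes - it cannot by itself stop a loop caused by the browser re-requesting the 301 target, which is the situation described above):

      # if a previous pass already set END, stop rewriting immediately
      RewriteCond %{ENV:REDIRECT_END} =1
      RewriteRule ^ - [L]

      # original rule, now also marking the request as handled
      RewriteRule /site/(\d+)/([^/]+)\.html /site/$2/$1 [R=301,L,E=END:1]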


  • MAMP Pro mysqld won't start on os x lion

    - by Mike
    I'm getting a "Start MySQL Failed" error in the GUI. When I attempt to start mysqld from the CLI I get the following output:

      $ /Applications/MAMP/Library/bin/mysqld
      120623 23:12:47 [Warning] Setting lower_case_table_names=2 because file system for /Applications/MAMP/db/mysql/ is case insensitive
      120623 23:12:47 [Note] Plugin 'FEDERATED' is disabled.
      120623 23:12:47 InnoDB: The InnoDB memory heap is disabled
      120623 23:12:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      120623 23:12:47 InnoDB: Compressed tables use zlib 1.2.3
      120623 23:12:47 InnoDB: Initializing buffer pool, size = 128.0M
      120623 23:12:47 InnoDB: Completed initialization of buffer pool
      120623 23:12:47 InnoDB: highest supported file format is Barracuda.
      120623 23:12:47 InnoDB: Waiting for the background threads to start
      120623 23:12:48 InnoDB: 1.1.5 started; log sequence number 1595675
      120623 23:12:48 [ERROR] /Applications/MAMP/Library/bin/mysqld: unknown option '--skip-locking'
      120623 23:12:48 [ERROR] Aborting
      120623 23:12:48 InnoDB: Starting shutdown...
      120623 23:12:49 InnoDB: Shutdown completed; log sequence number 1595675
      120623 23:12:49 [Note] /Applications/MAMP/Library/bin/mysqld: Shutdown complete

    I have deleted the mysql.pid file located at /application/mamp/tmp/mysql/mysql.pid and I still get the error above. I can't find where MAMP has --skip-locking set; my.cnf doesn't have it anywhere. Activity Monitor shows a mysqld process running as me, and every time I kill the process, both via Activity Monitor and via kill -9 pid, it starts right back up. Sampling the process points back to the MAMP mysqld. wtf?! About to throw MAMP out the window and boot up a VM of CentOS =)
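    The "unknown option" points at a stale setting somewhere in MAMP's startup chain; skip-locking was removed in MySQL 5.5 (its replacement is skip-external-locking). A quick, hedged way to find where it is being injected - paths assume the default MAMP layout and may differ:

      # search every config and launch script shipped with MAMP for the old option
      grep -R --line-number "skip-locking" /Applications/MAMP/
      # also check a stray global config that mysqld reads by default, if present
      grep -n "skip-locking" /etc/my.cnf 2>/dev/null

    If it turns up in a my.cnf or in MAMP's MySQL start script, changing it to skip-external-locking (or removing it) should let mysqld start.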


  • Wireshark WPA 4-way handshake

    - by cYrus
    From this wiki page: "WPA and WPA2 use keys derived from an EAPOL handshake to encrypt traffic. Unless all four handshake packets are present for the session you're trying to decrypt, Wireshark won't be able to decrypt the traffic. You can use the display filter eapol to locate EAPOL packets in your capture." I've noticed that the decryption works with (1, 2, 4) too, but not with (1, 2, 3). As far as I know the first two packets are enough, at least as far as unicast traffic is concerned. Can someone please explain exactly how Wireshark deals with that - in other words, why does only the former sequence work, given that the fourth packet is just an acknowledgement? Also, is it guaranteed that (1, 2, 4) will always work when (1, 2, 3, 4) works?

    Test case: this is the gzipped handshake (1, 2, 4) and an encrypted ARP packet (SSID: SSID, password: password) in base64 encoding:

      H4sICEarjU8AA2hhbmRzaGFrZS5jYXAAu3J400ImBhYGGPj/n4GhHkhfXNHr37KQgWEqAwQzMAgx
      6HkAKbFWzgUMhxgZGDiYrjIwKGUqcW5g4Ldd3rcFQn5IXbWKGaiso4+RmSH+H0MngwLUZMarj4Rn
      S8vInf5yfO7mgrMyr9g/Jpa9XVbRdaxH58v1fO3vDCQDkCNv7mFgWMsAwXBHMoEceQ3kSMZbDFDn
      ITk1gBnJkeX/GDkRjmyccfus4BKl75HC2cnW1eXrjExNf66uYz+VGLl+snrF7j2EnHQy3JjDKPb9
      3fOd9zT0TmofYZC4K8YQ8IkR6JaAT0zIJMjxtWaMmCEMdvwNnI5PYEYJYSTHM5EegqhggYbFhgsJ
      9gJXy42PMx9JzYKEcFkcG0MJULYE2ZEGrZwHIMnASwc1GSw4mmH1JCCNQYEF7C7tjasVT+0/J3LP
      gie59HFL+5RDIdmZ8rGMEldN5s668eb/tp8vQ+7OrT9jPj/B7425QIGJI3Pft72dLxav8BefvcGU
      7+kfABxJX+SjAgAA

    Decode with:

      $ base64 -d | gunzip > handshake.cap

    Run tshark to see if it correctly decrypts the ARP packet:

      $ tshark -r handshake.cap -o wlan.enable_decryption:TRUE -o wlan.wep_key1:wpa-pwd:password:SSID

    It should print:

      1 0.000000 D-Link_a7:8e:b4 -> HonHaiPr_22:09:b0 EAPOL Key
      2 0.006997 HonHaiPr_22:09:b0 -> D-Link_a7:8e:b4 EAPOL Key
      3 0.038137 HonHaiPr_22:09:b0 -> D-Link_a7:8e:b4 EAPOL Key
      4 0.376050 ZyxelCom_68:3a:e4 -> HonHaiPr_22:09:b0 ARP 192.168.1.1 is at 00:a0:c5:68:3a:e4


  • How to view / enumerate / obtain a list of all effective rights / permissions on an Active Directory object?

    - by Laura
    I am new to Server Fault and was hoping to find an answer to a question that I have been struggling with for the past week or so. I have recently been asked by my management to furnish a list of all the effective rights / permissions delegated on the Active Directory object for our Domain Admins group. I initially figured I'd use the Effective Permissions tab in Active Directory Users and Computers but had two problems with it. The first was that it doesn't seem very accurate, and the second was that it requires me to enter the name of a specific user, and it only shows me what it figures are effective permissions for that user. Now, we have more than 1,000 users in our environment, so there's no way I can possibly enter 1,000 user names one by one. Plus, there is no way to export that information either. I also looked at dsacls from MS but it doesn't do effective permissions. Someone pointed me to a tool called ADUCAdmin, but that seems to falsely claim to do effective permissions. Could someone kindly help me find a way to obtain this listing? Basically, I need to generate a list of all the modify effective permissions granted on the Domain Admins group object along with the list of all the admins to which these permissions are granted. In case it helps, I don't need a fancy listing - simple text / CSV output would be enough. I would be grateful for any assistance, since this is time- and security-sensitive for us.
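    A hedged starting point, assuming the RSAT ActiveDirectory PowerShell module is available: dump the raw ACL on the group object to CSV (this lists every ACE rather than computing true "effective" permissions, which would also have to account for group memberships). The distinguished name is a placeholder for your domain:

      Import-Module ActiveDirectory
      # enumerate every ACE on the Domain Admins group object and export to CSV
      (Get-Acl "AD:\CN=Domain Admins,CN=Users,DC=example,DC=com").Access |
          Select-Object IdentityReference, ActiveDirectoryRights, AccessControlType, IsInherited |
          Export-Csv DomainAdminsACL.csv -NoTypeInformation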


  • I need a reverse proxy solution for SSH

    - by Bond
    Here is the situation: I have a server in a corporate data center for a project, with SSH access to the machine on port 22. There are some virtual machines running on this server, and behind everything many other operating systems are running. Since I am behind the data center's firewall, my supervisor asked me whether I can do something to give many people on the Internet direct access to these virtual machines. I know that if I were allowed traffic on a port other than 22 I could do port forwarding, but since I am not allowed that, what could a solution be in this case? The people who would like to connect might be complete beginners who are happy just opening PuTTY on their machines, or maybe FileZilla. I have configured an Apache reverse proxy for redirecting Internet traffic to the virtual machines on these hosts, but I am not clear what I can do for SSH. So is there something equivalent to an Apache reverse proxy which can do similar work for SSH in this situation? I do not have the firewall in my hands or any port other than 22 open, and in fact even if I request it they won't allow opening more. SSHing twice (once to the host, then again to a VM) is not something my supervisor wants.
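    One hedged approach that needs nothing but the existing port 22 is to let users hop through the host with OpenSSH's built-in jump support, so the host itself acts as the "reverse proxy" for SSH. Host names, users and IPs below are placeholders:

      # one-liner (OpenSSH 7.3+): connect to an inner VM via the gateway host
      ssh -J user@gateway.example.com admin@10.0.0.15

      # or in ~/.ssh/config, so plain "ssh vm1" works (older clients can use ProxyCommand)
      Host vm1
          HostName 10.0.0.15
          User admin
          ProxyCommand ssh -W %h:%p user@gateway.example.com

    For users who really will only ever open PuTTY, a dedicated SSH multiplexer such as sshpiper, or per-VM forced-command accounts on the gateway, are other options, but those require more setup on the host itself.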


  • How to remove the background layer of a djvu file

    - by Jon
    Hello, I've downloaded some files from the Internet Archive. They come in different file formats, and most of the time I use pdf. However, sometimes the scans are saved in colour instead of b/w, which makes them difficult or impossible to read on a dedicated ebook reader. In that case I downloaded the djvu files, as on the PC you can select which layer (color, bw, fore, back) you would like to see. Selecting bw gives excellent results. However, the ebook reader does not have this option. The question is: how can I remove/extract a layer from the djvu file and save only that layer? So far I've tried the following two approaches: 1) Select bw in the djvu viewer on the PC and print to a PostScript file, followed by a ps2pdf conversion. This works, but generates a fairly large pdf file. Sure, I can again upload it to any2djvu, but it just seems too much manual work for each file. 2) I tried the shared annotation feature and set (mode bw). This works on the PC as desired but is ignored on the ebook reader, as the other layers are still present. Any help or suggestions would be greatly appreciated.
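    A hedged sketch using djvulibre's ddjvu, which can render just one layer; file names are placeholders, and whether PDF or TIFF output suits the reader better is an assumption to test:

      # render only the black/foreground layer of every page straight to PDF
      ddjvu -format=pdf -mode=black input.djvu output-bw.pdf

      # if your djvulibre build doesn't accept -format=pdf, go via TIFF instead
      ddjvu -format=tiff -mode=foreground input.djvu output-bw.tiff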


  • iTunes Home Sharing only works one way between 2 Windows XP PC's on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message: The shared library "{Username}'s Library" is not responding (-3259). Check that any firewall software running on either the shared computer or this computer has been set to allow communication on port 3689. However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all content. Notes:
    - Both PCs have Home Sharing enabled (turned off/on several times to verify).
    - Both PCs have Windows Firewall turned on, but in the Exceptions tab iTunes is allowed, and port 3689 is also added as a firewall exception (just in case).
    - Both iTunes accounts have been "authorized" on both PCs.
    - Both PCs connect via LAN through a D-Link DIR-615 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered.
    Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I couldn't care less about sharing apps etc.; I just want the music sharing to work. Update: Solved! It turns out that on PC (B) there were multiple accounts set up. One of the accounts had the checkbox checked under the Windows Firewall "On" option which states "No exceptions"; thus, even though iTunes was added to the exception list on the main user account, this other account was blocking access.


  • Missing drive space in Server 2003

    - by Tim Brigham
    I have two drives used for SQL backups which for the last week have been acting strange - the free space reported by Windows is far off from what WinDirStat etc. indicate. There should only be about 60 GB of drive space used and there is about 160. This would match the utilization if the last two backup files were still residing on disk. SQL Server is 2000, the OS is Server 2003 x64, running on a VMware 5.0 cluster. OSSEC and McAfee show this system as clean. My current plan is to temporarily attach one of these drives to another VM for analysis. Is there anything more I should be looking at? There were a lot of pages on the net when I was looking for documentation on this issue, but I haven't found this case described. EDIT: Unfortunately even a full reboot did not clear this behavior. I also used Process Explorer to look for open file handles. No dice.
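    One thing worth ruling out (an assumption, not a diagnosis): Volume Shadow Copies can hold on to deleted backup files, and the space they consume doesn't show up in WinDirStat. From a command prompt on the server:

      rem show how much space shadow copies are using per volume
      vssadmin list shadowstorage

      rem if a data volume is holding gigabytes of diff area, cap it (drive letter is a placeholder)
      vssadmin resize shadowstorage /For=E: /On=E: /MaxSize=10GB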


  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and the past few days we've noticed random slowdowns. When I'm logged into the server over remote desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal. (Normal being 1.5 GB free in the case of memory.) This is what I see logged into the server, and then users start calling saying things are slow, timing out, and failing--anything to do with our server. No events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time--different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it.) I would've thought it was just the server, but a family member's home PC (entirely separate) running WinXPSP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?


  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I'm able to make a record in DNS that will point the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest, I'm not even sure if this is possible. So can I do this? That way I would be able to fully test all our services (and web app) offline before I build the real domain and switch the DNS records at the provider. Some advice on where to start is appreciated. The solution (thanks Brent):
    - Create a new forward lookup zone for example.com.
    - Create an empty A record pointing to the IP of the webserver you are targeting.
    - If www is needed, create an A record with name www and the IP of your webserver.
    - For sub domains, repeat the process but with names such as sub or www.sub (and the IP of your webserver).
    Be aware of the DNS cache while you are in this process. Things can take time, or do the following: right-click the server and choose Clear Cache; in CMD run ipconfig /flushdns (to flush the client cache).
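    The same steps can be scripted with dnscmd on the 2008 R2 DNS server; a hedged sketch with a placeholder IP:

      rem create the zone and the records from an elevated prompt on the DNS server
      dnscmd /ZoneAdd example.com /Primary /file example.com.dns
      dnscmd /RecordAdd example.com @   A 192.168.1.50
      dnscmd /RecordAdd example.com www A 192.168.1.50

      rem flush caches so the new answers are visible straight away
      dnscmd /ClearCache
      ipconfig /flushdns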


  • Exchange 2010 - Certificate error on internal Outlook 2013 connections

    - by Lorenz Meyer
    I have Exchange 2010 and Outlook 2003. The Exchange server has a wildcard SSL certificate installed for *.domain.com (for use with autodiscover.domain.com and mail.domain.com). The local FQDN of the Exchange server is exch.domain.local. With this configuration there is no problem. Now I started upgrading all Outlook 2003 installations to Outlook 2013, and I consistently get a certificate error in Outlook: The name on the security certificate is invalid or does not match the name of the site. I understand why I get that error: Outlook 2013 is connecting to exch.domain.local while the certificate is for *.domain.com. I was ready to buy a SAN (Subject Alternative Names) certificate that contains the three names exch.domain.local, mail.domain.com and autodiscover.domain.com. But there is a hindrance: the certificate provider (in my case GoDaddy) requires that the domain is validated as being our property, which is not possible for an internal domain that is not accessible from the Internet. So this turns out not to be an option. Creating a self-signed SAN certificate with an Enterprise CA is another option, but it is barely viable: there would be certificate errors with every access to webmail, and I would have to install the certificate on all Outlook clients. What is a recommended, viable solution? Is it possible to disable certificate checking in Outlook? Or how could I change the Exchange server configuration so that the public domain name is used for all connections? Or is there another solution I'm not thinking of? Any advice is welcome.
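    The usual way out of this (a sketch, assuming internal clients can resolve mail.domain.com and autodiscover.domain.com to the CAS, e.g. via a split-DNS zone) is to repoint Exchange's internal URLs and the Autodiscover SCP at the public names the wildcard certificate actually covers, rather than buying a certificate for .local names. "EXCH" below is a placeholder for the server name:

      Set-ClientAccessServer -Identity EXCH -AutoDiscoverServiceInternalUri https://autodiscover.domain.com/Autodiscover/Autodiscover.xml
      Set-WebServicesVirtualDirectory -Identity "EXCH\EWS (Default Web Site)" -InternalUrl https://mail.domain.com/EWS/Exchange.asmx
      Set-OABVirtualDirectory -Identity "EXCH\OAB (Default Web Site)" -InternalUrl https://mail.domain.com/OAB
      Set-ActiveSyncVirtualDirectory -Identity "EXCH\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl https://mail.domain.com/Microsoft-Server-ActiveSync

    After recycling IIS (iisreset) and once internal DNS resolves the public names to the server's internal IP, Outlook 2013 should stop complaining about the name mismatch.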


  • When a server IP changes, do existing TCP (e.g. HTTP/MySQL) connections remain running?

    - by Luke Cousins
    We have some PHP-FPM servers, and when they need a database connection they connect to an HAProxy server which selects a database server for them, and the connection opens. When we then want to perform some maintenance on the HAProxy servers (such as config changes requiring an HAProxy restart), the process is as follows:
    1. Shut down Keepalived on the HAProxy server.
    2. Wait for the floating IP to transfer to the backup HAProxy server (also running Keepalived).
    3. Wait until HAProxy stats reports just one connection (us checking how many connections there are).
    4. Restart HAProxy.
    5. Restart Keepalived.
    As step 2 occurs, what will happen to the open MySQL connections at that point? According to this "TCP Sessions and IP Changes" question, the connections will be dropped. Is this really the case? If so, what, if anything, can be done to prevent this happening? Can the connection in some way be forced to use the main (non-floating) IP of the server? We also have a similar setup with two Nginx servers with Keepalived running on them, and we were planning on doing the equivalent process. If we do, the same question applies - what happens to the existing HTTP connections when the IP moves to the other server? I appreciate your help.


  • RPC Server Unavailable on Hyper-V cluster when moving resources after the host adapter has failed

    - by Doug Luxem
    On a Windows 2008 R2 SP1 cluster running Hyper-V, we lost network connectivity on the primary host interface. The interface was rapidly flapping up and down, and this was later determined to be caused by a faulty switch port. As this was a clustered server, the host interface was not fault tolerant (seeing as how the whole server was fault tolerant), so connectivity to the host was going up and down. The Hyper-V guests were completely unaffected by the network outage as they used a dedicated trunk on the server, separate from the host interface. Additionally, the dedicated interfaces for the cluster and live migration networks were fine. In order to diagnose the server, I tried to move all resources (Hyper-V guests) to other nodes through Failover Cluster Manager. These moves failed with the error RPC Server Unavailable. The only way to move resources was by shutting down the guests, stopping the cluster service on Node A, allowing other nodes to take ownership of the resources, and restarting the guests. A few other notes: All nodes have Client for MS Networks and File & Printer Sharing enabled on the Cluster and LM networks. Node A was accessible over the cluster and LM networks from other nodes (these are private, cluster-only networks): pingable, CIFS, etc. Accessing \\NODEA is done over the host adapters, as you would expect in this case, and is the reason for the RPC Server Unavailable error with that adapter being down. My questions here are: Is there a way to still use Live Migration in a failure scenario such as this, to avoid shutting down the Hyper-V guests? How can the network be reconfigured in the future so that the cluster service attempts to use the cluster and/or live migration networks to issue the RPC requests?


  • No A record shown in the answer section of dig results

    - by eric low
    To check the records for the domain, I run dig with the domain name as the parameter:

      dig example.com any

    I get the result below. Why is there no A record shown in the result? What did I do wrong during the setup? Please advise what I should look into; I hope someone can help me resolve this asap.

      ; <<>> DiG 9.9.3-P2 <<>> example.com any
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44674
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 4, ADDITIONAL: 1

      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 4096
      ;; QUESTION SECTION:
      ;example.com. IN ANY

      ;; ANSWER SECTION:
      example.com. 3489 IN MX 100 biz.mail.com.
      example.com. 3482 IN NS ns1.domain.com.
      example.com. 3482 IN NS ns2.domain.com.

      ;; AUTHORITY SECTION:
      example.com. 3482 IN NS ns2.domain.com.
      example.com. 3482 IN NS ns1.domain.com.

      ;; Query time: 0 msec
      ;; SERVER: xxx.252.xxx.xxx#53(xxx.252.xxx.xxx)
      ;; WHEN: Wed Oct 30 04:48:34 CDT 2013
      ;; MSG SIZE rcvd: 349
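    If the cached ANY answer simply omits the A record, querying the authoritative nameserver directly (using the NS names shown in the output) tells you whether the record exists in the zone at all:

      dig @ns1.domain.com example.com A +norecurse

    If that also returns an empty ANSWER section, the zone is genuinely missing an apex A record, which in BIND-style zone file syntax would be a line like the following (placeholder IP):

      example.com.   3600  IN  A  203.0.113.10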


  • Nginx won't send POST to fastcgi backend, but GET works fine?

    - by xyld
    Not sure why, but nginx is happy sending a GET to the fastcgi backend (Mercurial hgwebdir in this case), but simply resorts to the filesystem if the request is a POST. Relevant parts of nginx.conf:

      location / {
          root /var/www/htdocs/;
          index index.html;
          autoindex on;
      }

      location /hg {
          fastcgi_pass unix:/var/run/hg-fastcgi.socket;
          include fastcgi_params;

          if ($request_uri ~ ^/hg([^?#]*)) {
              set $rewritten_uri $1;
          }

          limit_except GET {
              allow all;
              deny all;
              auth_basic "hg secured repos";
              auth_basic_user_file /var/trac.htpasswd;
          }

          fastcgi_param SCRIPT_NAME "/hg";
          fastcgi_param PATH_INFO $rewritten_uri;

          # for authentication
          fastcgi_param AUTH_USER $remote_user;
          fastcgi_param REMOTE_USER $remote_user;

          #fastcgi_pass_header Authorization;
          #fastcgi_intercept_errors on;
      }

    GETs work fine, but a POST delivers this error to the error_log:

      2010/05/17 14:12:27 [error] 18736#0: *1601 open() "/usr/html/hg/test" failed (2: No such file or directory), client: XX.XX.XX.XX, server: domain.com, request: "POST /hg/test HTTP/1.1", host: "domain.com"

    What could possibly be the issue? I'm trying to allow read-only access via GETs to the page, but require authorization when using hg push to the same URL, which sends a POST request.


  • Export-Mailbox - fails with large folders

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox; however, I run into errors all the time. The command I am executing is:

      Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set right as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted should create problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes? Update: What I've learnt so far is that folders with more than approximately 3000 messages will generate errors. If mail retention is set (default 30 days), Export-Mailbox will scan all messages whether they were deleted in previous runs or not, so a date restriction to limit the number of messages will not work. To avoid errors, I've switched off deleted message retention for the mailbox, moved the messages from one large folder to multiple folders, and moved these one by one...
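    If the practical ceiling really is around 3000 messages per pass, one hedged way to automate the workaround is to run the export in smaller date windows from the Exchange Management Shell (mailbox and folder names are the same placeholders as above):

      $start = Get-Date "2009-02-01"
      $stop  = Get-Date "2009-03-01"
      while ($start -lt $stop) {
          $end = $start.AddDays(7)   # one-week windows; shrink further for very busy folders
          Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER `
              -StartDate $start -EndDate $end -DeleteContent -Confirm:$false
          $start = $end
      }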


  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? In the docs it is referenced as a viable solution. It appears that when we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky sessions cookie and the subsequent header that gets added by it could be an issue?

      Cache-Control: no-cache="set-cookie"  //Added by load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:

      curl -I http://static.quick-cdn.com/css/9850999.css
      HTTP/1.0 200 OK
      Accept-Ranges: bytes
      Cache-Control: max-age=3700
      Cache-Control: no-cache="set-cookie"
      Content-Length: 23038
      Content-Type: text/css
      Date: Thu, 12 Apr 2012 23:03:52 GMT
      Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
      Server: Apache/2.2.17 (Ubuntu)
      Vary: Accept-Encoding
      X-Cache: RefreshHit from cloudfront
      X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
      Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
      Connection: close


  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. The backend is lighttpd hosting only static files, and there is an md5 in the URL in case of a file change (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config:

      sub vcl_recv {
          set req.backend=default;
          ### passing health to backend
          if (req.url ~ "^/health.html$") {
              return (pass);
          }
          remove req.http.If-None-Match;
          remove req.http.cookie;
          remove req.http.authenticate;
          if (req.request == "GET") {
              return (lookup);
          }
      }

      sub vcl_fetch {
          ### do not cache wrong codes
          if (beresp.status == 404 || beresp.status >= 500) {
              set beresp.ttl = 0s;
          }
          remove beresp.http.Etag;
          remove beresp.http.Last-Modified;
      }

      sub vcl_deliver {
          set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
      }

    I have done some performance tuning:

      DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL that is missed is... /health.html. Is the forward to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45. Now mostly "/crossdomain.xml" is missed (from many domains, as it is a wildcard). How can I avoid that? Should I pass along other headers like User-Agent or Accept-Encoding? I think the default hashing mechanism uses url + host/IP. Compression is used at the backend. What else can improve performance?
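    Two hedged VCL additions that often help in this kind of setup (written for Varnish 3-era syntax to match the config above; older 2.x builds spell vcl_hash differently): normalise Accept-Encoding so each object is cached in at most two variants, and hash host-independent files such as /crossdomain.xml on the URL alone so every wildcard domain shares one cached copy:

      sub vcl_recv {
          if (req.http.Accept-Encoding) {
              if (req.http.Accept-Encoding ~ "gzip") {
                  set req.http.Accept-Encoding = "gzip";
              } else {
                  unset req.http.Accept-Encoding;
              }
          }
      }

      sub vcl_hash {
          hash_data(req.url);
          # only add the Host for URLs that really differ per domain;
          # /crossdomain.xml is identical everywhere, so hash it on the URL alone
          if (req.url != "/crossdomain.xml" && req.http.host) {
              hash_data(req.http.host);
          }
          return (hash);
      }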


  • What is the reason behind a Windows service stopping - is it due to LAN problems or other issues?

    - by Steve
    I have a Windows service named Trunk which stopped one day; I just want to know the reason behind it. These are the relevant log entries:

      Nov 15 17:54:04.318 :Trunk-1516:Trunk:handle_control_event:Received CTRL_LOGOFF_EVENT, ignore it
      Nov 25 15:54:52.157 :Trunk-1516:Trunk:ERROR - Process Restart Count (5) Exceeded for:C:\Program Files\secon\11.1.4\bin\vmd
      Nov 25 15:54:52.157 :Trunk-1516:Trunk:Stopping Trunk ...
      Nov 25 15:54:52.314 :Trunk-1516:Trunk:Shutting down, signaled C:\Program F

      Nov 20 15:54:20.345 :SCBridge.RegisterBridge:Exception in method:
      ScUtility.ScCommandException (0xa08990002): Exception from HRESULT: 0xa08990002
      Supplemental Information: None available.
        at ScServer.ScServiceProcessorRegistryManager.Attach(String serviceProcessor, ScClientInformation clientInfo, FORCE_ATTACH_SPEC forceAttachToMaster)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor, Object clientInfo)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor)
        at ServerControlInterface.SCBridge.RegisterBridge(String SPName)
      for system APOLLOSP0 attempting to attach and register with the Bridge

    Since the service is registered with a specific account, I thought that the user logging off from the machine might be the reason behind it, or perhaps a LAN disconnection problem. Having taken another look at the above entries, we seem to have a constant failure being generated in vmd which causes Trunk to detect that vmd requires a restart. Most of the time it works OK and the restart count is anything up to 4. In this case the Trunk log confirms that the restart count is 5 and so is considered to be exceeded. Presumably this triggers the termination of the other services, and Trunk is actually doing its job. So, could this just be a timing issue where we need to increase the tolerance level (i.e. the restart count), or do we need to address the 0xa08990002 error in vmd?


  • Using the same Windows 8 Upgrade installer on multiple PCs

    - by Karan
    As per this article: You may transfer the software to another computer that belongs to you. … You may not transfer the software to share licenses between computers. But what if I have a bunch of PCs with a mix of XP/Vista/Windows 7? Can I purchase either the Windows 8 Pro Upgrade $40 (download only) or $70 (DVD) version (both of which come without a key) only once and use it to upgrade all the PCs? Since I'm not sharing the license and each PC has its own valid genuine license, it should be allowed, right, or is it illegal? Even if they want people to shell out $40/$70 for each PC, how would they enforce the use of the installer/media on only one PC each? EDIT: I have been given to believe by a source that the installer will only check for the previous OS' key, which is what is confusing me (I have never purchased an upgrade version before this, only full retail or pre-installed versions). Is this true or will I need to enter two keys to make the upgrade work, one for the previous version and then one for Windows 8? If the latter is the case, then the issue is solved since obviously the same Windows 8 key will not be valid for multiple PCs.


  • Bash shell prompt: where is $RET?

    - by Evgeni Sergeev
    I was reading https://wiki.archlinux.org/index.php/Color_Bash_Prompt and ended up with the following:

      # Stores the status of each command in $RET
      PROMPT_COMMAND='RET=$?;'

      # A colour.
      RED_SHELL='\e[0;36m'

      # Prints "Status 1" if RET is 1, for example.
      RET_VISUALISE='$(if [[ $RET != 0 ]]; then echo -ne "Status \[$RED_SHELL\]$RET\n" && RET=0; fi;)'

      # What to print for each prompt.
      PS1="$RET_VISUALISE\[\e]0;\w\a\]\n\[\e[32m\]\u@\h \t \[\e[33m\]\w\[\e[0m\]\n\$ "

    This does almost what I want, except when I press Enter multiple times after a command that returned a status != 0. In that case it prints "Status 1" every time I press Enter, which is what the && RET=0; part was supposed to get rid of. Also, I don't understand why env | grep RET only shows the PS1 contents. What is the scope of $RET?
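    For context (an explanation, not from the post itself): the $(...) in RET_VISUALISE is command substitution, which runs in a subshell, so the RET=0 inside it only changes the subshell's copy and the parent shell's $RET keeps its old value - hence the repeated "Status 1". And env only lists exported variables; RET is a plain shell variable, while PS1 (whose value contains the literal text of RET_VISUALISE, including the string RET) is apparently exported in this environment, which is why grep matches it. A small demo:

      RET=1
      : "$(RET=0)"            # the assignment happens in a subshell
      echo "$RET"             # still prints 1

      env | grep -c '^RET='   # prints 0 - RET is not exported
      export RET
      env | grep -c '^RET='   # prints 1 - now visible to env

    One hedged fix is to move both the test and the reset into PROMPT_COMMAND, which runs in the parent shell, and have it set a plain text variable that PS1 merely prints.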


  • Docking Station Sound Doesn't Work on Dell D830 with Windows 7

    - by cisellis
    EDIT: I can only mark one answer as the correct one, but the actual solution was a combination of two comments (updating the BIOS to A15 AND installing the Sigmatel audio drivers). I have a Dell Latitude D830 laptop running Windows 7 Enterprise x64. I connect to a docking station during the day with multiple monitors, a keyboard and a mouse. Everything runs with no problems, including most of the docking station ports (USB, monitors, etc.). However, the sound port on the docking station has not worked since the upgrade to Windows 7. Even with the laptop plugged in, the sound always comes out of the laptop, not the headphones plugged into the docking station. Here's what I've tried: I've seen other issues like this via Google that seem to be mostly unanswered. I found one or two that referenced using the Vista x64 drivers, especially the Nvidia drivers. I do not have an Nvidia chipset, but I've reinstalled the sound drivers and that has not helped. I don't have a support contract, and considering the cost of calling Dell is usually high, that's not an option. Dell's forums are pretty much a wasteland and I've found no help there. Since this is a docking station, I thought I might need to try the SATA or Intel chipset drivers from the Dell site instead; however, I'm not really sure, and I need to work on this laptop in the meantime. I can't really afford the downtime to experiment with random drivers all day in case they turn out to be incompatible (Dell still hasn't added Windows 7 to their support site as far as I can tell). Does anyone have any other ideas? Has anyone had this issue and solved it? If so, how? Thanks in advance for your help.


  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't - especially on machines not under my continuous administration. In those cases I am always challenged with the task of traversing directories with my administrative user via Windows Explorer where regular users do not have "read" permissions. The two possible approaches to this problem so far:
    - Change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs but just look into the folder's contents.
    - Use an elevated cmd.exe prompt along with a bunch of command line utilities - this usually takes a lot of time when browsing through large and/or complex directory structures.
    What I would love to see would be a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need for downloading or installing anything), are very welcome too. I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders - unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token and a re-authentication (and thus the creation of a new token) would not happen when accessing localhost.


  • How to secure an Internet-facing Elastic Search implementation in a shared hosting environment?

    - by casperOne
    (Originally asked on StackOverflow, and recommended that I move it here) I've been going over the documentation for Elastic Search and I'm a big fan and I'd like to use it to handle the search for my ASP.NET MVC app. That introduces a few interesting twists, however. If the ASP.NET MVC application was on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP Transport to connect locally. However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon. That leaves hosting Elastic Search on another machine (in the *NIX world) and I would probably go with shared hosting there. One of the biggest things lacking from Elastic Search, however, is the fact that it doesn't support HTTPS and basic authentication out of the box. If it did, then this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate). But that's not the case. That given, what is a good way to expose Elastic Search over the Internet in a secure way? Note, I'm looking for something that hopefully, will not require writing code to provide shims for the methods that I want (in other words, writing forwarders).
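    A common pattern (a sketch, not the only option) is to keep Elasticsearch bound to localhost on its host and put a TLS-terminating reverse proxy with basic auth in front of it; the main assumption is that the shared *NIX host lets you run nginx (or Apache with mod_proxy). Hostname, paths and the upstream port are placeholders:

      server {
          listen 443 ssl;
          server_name search.example.com;

          ssl_certificate     /etc/ssl/certs/es.crt;      # self-signed is fine if the MVC app trusts/pins it
          ssl_certificate_key /etc/ssl/private/es.key;

          location / {
              auth_basic           "Elasticsearch";
              auth_basic_user_file /etc/nginx/es.htpasswd;
              proxy_pass           http://127.0.0.1:9200;  # Elasticsearch listening on loopback only
          }
      }

    The ASP.NET MVC app then talks plain HTTPS with the basic-auth credentials, and nothing on the Internet can reach port 9200 directly - no custom forwarder code required.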


  • Glassfish v3 failure on startup: "Cannot allocate memory"

    - by Shisoft
    The general problem is clear from the question "Fail to start Glassfish 3.1: java.io.IOException: error=12, Cannot allocate memory". But in my case I have a 512 MB Ubuntu 10.04 VPS, so it seems I shouldn't need to change any configuration. Yet when I start the server I get this exception:

      VM failed to start: java.io.IOException: Cannot run program "/usr/lib/jvm/java-6-sun-1.6.0.22/bin/java" (in directory "/home/glassfish/glassfish/domains/domain1/config"): java.io.IOException: error=12, Cannot allocate memory

    So I changed <jvm-options>-Xmx512</jvm-options> to <jvm-options>-Xmx400</jvm-options>, but the exception remains. What did I do wrong?

    Result of free -m:

                   total   used   free   shared   buffers   cached
      Mem:           512     43    468        0         0        0
      -/+ buffers/cache:      43    468
      Swap:            0      0      0

    Result of cat /proc/user_beancounters:

      Version: 2.5
      uid      resource        held      maxheld     barrier       limit   failcnt
      146049:  kmemsize     2670652      5385253    51200000    51200000         0
               lockedpages        0            8        2048        2048         0
               privvmpages    11134       134522      131200      262200         4
               shmpages         648         1352      128000      128000         0
               dummy              0            0           0           0         0
               numproc           12           73         500         500         0
               physpages       6519        28162           0   200000000         0
               vmguarpages        0            0      512000      512000         0
               oomguarpages    6527        28169      512000      512000         0
               numtcpsock         4           14        4096        4096         0
               numflock           0            5        2048        2048         0
               numpty             1            2          32          32         0
               numsiginfo         0            3        1024        1024         0
               tcpsndbuf     159600       265744    20480000    20480000         0
               tcprcvbuf      65536      3590352    20480000    20480000         0
               othersockbuf   44232        90640    20480000    20480000         0
               dgramrcvbuf        0        12848    10240000    10240000         0
               numothersock      22           31        2048        2048         0
               dcachesize         0            0    10240000    10240000         0
               numfile         1002         1474       50000       50000         0
               dummy              0            0           0           0         0
               dummy              0            0           0           0         0
               dummy              0            0           0           0         0
               numiptent         24           24        2048        2048         0

    Thanks
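    One observation worth noting: the beancounters above show failures on privvmpages (failcnt 4), which suggests the OpenVZ container is refusing the JVM address space rather than the box being out of physical RAM. A hedged set of domain.xml JVM options for keeping the virtual footprint small on a 512 MB VPS - starting points to experiment with, not tuned figures:

      <jvm-options>-Xmx256m</jvm-options>
      <jvm-options>-Xms128m</jvm-options>
      <jvm-options>-Xss128k</jvm-options>
      <jvm-options>-XX:MaxPermSize=128m</jvm-options>

    Also note that the unit suffix matters: -Xmx400 without an "m" is read as 400 bytes, and the JVM will refuse to start with such a tiny heap.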

