Search Results

Search found 23797 results on 952 pages for 'css framework'.

Page 580/952 | < Previous Page | 576 577 578 579 580 581 582 583 584 585 586 587  | Next Page >

  • How to change mod_rewrite to avoid {REQUEST_FILENAME} in order to get around 255 character URL limit?

    - by Jeremy Reimer
    According to this answer: max length of url 257 characters for mod_rewrite? there is a maximum 255 character hard limit, based on the file system, for using mod_rewrite. According to the accepted answer, there are two solutions: change the URL format of your application to a max of 255 characters between each slash, or move the rewrite rules into the Apache virtual host config and remove the REQUEST_FILENAME. I cannot use the first method, so I am trying to figure out the second. I have put the rewrite rules into the Apache virtual host config as suggested. However, I cannot figure out how to remove the REQUEST_FILENAME and still have my web application framework (Dragonfly) work. Here is the portion of the rewrite rules that I moved from .htaccess into the virtual host config file of Apache: RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-l RewriteCond %{REQUEST_FILENAME} !-f [OR] # if you don't want Dragonfly to process html files, comment # out the line below (you may need to remove the [OR] above too). RewriteCond %{REQUEST_FILENAME} \.(html|nl)$ # Main URL rewriting. RewriteRule (.*) index.cgi?$1 [L,QSA] I've tried removing {REQUEST_FILENAME} and it just breaks the framework in various ways. How do I rewrite this without using {REQUEST_FILENAME}?
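
    A hedged sketch of the second approach (the /css, /js and /images prefixes below are assumptions; substitute wherever Dragonfly's static assets actually live): instead of asking the filesystem whether the request maps to a real file, which is the stat that %{REQUEST_FILENAME} with -f/-d triggers and what runs into the 255-character limit, whitelist the static URL prefixes explicitly and route everything else to index.cgi.

      RewriteEngine on
      # Serve known static locations directly; no filesystem check needed.
      RewriteCond %{REQUEST_URI} !^/(css|js|images)/
      RewriteCond %{REQUEST_URI} !^/index\.cgi
      # Main URL rewriting: everything else goes to Dragonfly.
      RewriteRule ^/?(.*)$ /index.cgi?$1 [L,QSA]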

    Read the article

  • How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?

    - by Stephen Jennings
    I originally asked and answered this question: How can I use the Homebrew Python version with Homebrew MacVim? These instructions worked on Snow Leopard using Xcode 4.0.1 and associated developer tools. However, they no longer seem to work on Mountain Lion with Xcode 4.4.1. My goal is to leave the system's version of Python completely untouched, and to only install PyPI packages into Homebrew's site-packages directory. I want to use the vim_bridge package in MacVim, so I need to compile MacVim against the Homebrew version of Python. I've edited the MacVim formula to add these to the arguments: --enable-pythoninterp=dynamic --with-python-config-dir=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config Then I install with the command: brew install macvim --override-system-vim --custom-icons --with-cscope --with-lua However, it still seems to be somehow using Python 2.7.2 from the system. This seems strange to me because it also seems to be using the correct executable. :python print(sys.version) 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] :python print(sys.executable) /usr/local/bin/python $ /usr/local/bin/python --version Python 2.7.3 $ /usr/local/bin/python -c "import sys; print(sys.version)" 2.7.3 (default, Aug 12 2012, 21:17:22) [GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.60))] $ readlink /usr/local/lib/python2.7/config /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config I've removed everything in /usr/local and reinstalled Homebrew by running these commands: $ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go) $ brew install git mercurial python ruby $ brew install macvim (nope, still broken) $ brew remove macvim $ ln -s /usr/local/Cellar/python/..../python2.7/config /usr/local/lib/python2.7/config $ brew install macvim
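
    One diagnostic worth trying, offered as a sketch rather than a verified fix: with --enable-pythoninterp=dynamic, Vim loads libpython at runtime instead of linking against it, so sys.executable can report the Homebrew binary while the interpreter library actually loaded is still the system one. Checking sys.prefix from inside MacVim shows which runtime was really picked up (the paths in the comments are what this setup would be expected to print, not verified output):

      :python import sys; print(sys.prefix)
      " Homebrew: /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7
      " System:   /System/Library/Frameworks/Python.framework/Versions/2.7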

    Read the article

  • Need to find jni.h in cmake on Mac

    - by Ilan Tal
    I am trying to compile VTK on a MacBook Air. I am using CMake 2.8.9, with Xcode 4 as the generator. If I press the Configure button with VTK_WRAP_JAVA unchecked, it completes with no errors. However, I definitely need the Java wrapping, since my main program is in Java and needs to call into VTK, which is C++. As soon as I check VTK_WRAP_JAVA, I get "Could NOT find JNI". It is apparently looking for jni.h, which Linux has no trouble finding, but which on the Mac it apparently cannot locate. I ran locate jni.h and got new-host-2:~ geraldkolodny$ locate jni.h /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/System/Library/Frameworks/JavaVM.framework/Versions/A/Headers/jni.h /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/System/Library/Frameworks/JavaVM.framework/Versions/A/Headers/jni.h /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/include/jni.h I tried manually putting either entry 2 or 3 (without the jni.h at the end) into JAVA_INCLUDE_PATH2, but it still can't find jni.h. Xcode used to have a template for JNI, but that is now gone in the latest version. I am fresh out of ideas on how to solve this problem. I'd be grateful for any suggestions. Thanks, Ilan
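
    For what it's worth, CMake's FindJNI module reads the cache variables JAVA_INCLUDE_PATH (the directory holding jni.h), JAVA_INCLUDE_PATH2 (the platform subdirectory holding jni_md.h) and JAVA_AWT_INCLUDE_PATH, so filling in JAVA_INCLUDE_PATH2 alone would not satisfy it. A hedged sketch using the JDK 7 paths from the locate output above:

      export JAVA_HOME=$(/usr/libexec/java_home)
      cmake \
        -DVTK_WRAP_JAVA=ON \
        -DJAVA_INCLUDE_PATH=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/include \
        -DJAVA_INCLUDE_PATH2=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/include/darwin \
        -DJAVA_AWT_INCLUDE_PATH=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/include \
        /path/to/VTK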

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */ }, error: function (data) { /* handle errors */ } }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run under nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes once every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since that requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, frequently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
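
    One common cause of exactly this serialization, offered as an assumption since the PHP code is not shown: PHP's default file-based session handler holds an exclusive lock on the session file for the whole request, so the /process-status poll blocks until the long-running request releases it, regardless of how the browser issues the AJAX calls. If the application uses sessions, releasing the lock early in the long-running endpoint is worth trying; a minimal sketch (the session key is hypothetical):

      <?php
      // Read whatever session state the long request needs, then release
      // the session lock so the /process-status polls can proceed.
      session_start();
      $processId = isset($_SESSION['process_id']) ? $_SESSION['process_id'] : null;
      session_write_close(); // releases the per-session file lock
      // ... continue the slow background work; treat $_SESSION as
      // read-only from this point on.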

    Read the article

  • Windows Task Scheduler fails on EventData instruction

    - by Pete
    The Scheduled Task fails on the Event Data instruction in this XML: <ValueQueries> <Value name="eventChannel">Event/System/Channel</Value> <Value name="eventRecordID">Event/System/EventRecordID</Value> <Value name="eventData">Event/EventData/Data</Value> </ValueQueries> The other 2 fields can be passed as arguments and the EventData syntax matches other websites, so I don't know why it's failing. This is the Event Viewer XML: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Aptify.ExceptionManagerPublishedException" /> <EventID Qualifiers="0">0</EventID> <Level>2</Level> <Task>0</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2013-11-07T19:39:14.000000000Z" /> <EventRecordID>97555</EventRecordID> <Channel>Application</Channel> <Computer>[Computer Name]</Computer> <Security /> </System> <EventData> <Data>General Information ********************************************* Additional Info: ExceptionManager.MachineName: [Computer Name] ExceptionManager.TimeStamp: 11/7/2013 12:39:14 PM ExceptionManager.FullName: AptifyExceptionManagement, Version=4.0.0.0, Culture=neutral, PublicKeyToken=[key] ExceptionManager.AppDomainName: Aptify Shell.exe ExceptionManager.ThreadIdentity: ExceptionManager.WindowsIdentity: ACA_DOMAIN\pbassett 1) Exception Information ********************************************* Exception Type: Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntityValidationException Entity: Tasks ErrorString: Task Type "Make Contact" is not active. MachineName: [machine] CreatedDateTime: 11/7/2013 12:39:14 PM AppDomainName: Aptify Shell.exe ThreadIdentityName: WindowsIdentityName: [identity] Severity: 0 ErrorNumber: 0 Message: Task Type "Make Contact" is not active. Data: System.Collections.ListDictionaryInternal TargetSite: Boolean Save(Boolean, System.String ByRef, System.String) HelpLink: NULL Source: AptifyGenericEntity StackTrace Information ********************************************* at Aptify.Framework.BusinessLogic.GenericEntity.AptifyGenericEntity.Save(Boolean AllowGUI, String& ErrorString, String TransactionID)</Data> </EventData> </Event>
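
    One thing worth trying, with the caveat that it is an assumption about the XPath subset Task Scheduler accepts: the Data element in this event carries no Name attribute, and most ValueQueries examples on other sites target named data (Event/EventData/Data[@Name='...']), so selecting the unnamed element by position may behave better than the bare path:

      <ValueQueries>
        <Value name="eventChannel">Event/System/Channel</Value>
        <Value name="eventRecordID">Event/System/EventRecordID</Value>
        <!-- this event's Data element has no Name attribute,
             so select it by position rather than by name -->
        <Value name="eventData">Event/EventData/Data[1]</Value>
      </ValueQueries>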

    Read the article

  • How to install RMagick RubyGem on Mac OS X 10.6 Snow Leopard?

    - by misbehavens
    I am getting this error while trying to install RMagick: $ sudo gem install rmagick Building native extensions. This could take a while... ERROR: Error installing rmagick: ERROR: Failed to build gem native extension. /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb checking for Ruby version >= 1.8.5... yes checking for gcc... yes checking for Magick-config... no Can't install RMagick 2.13.1. Can't find Magick-config in /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin:~/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1 for inspection. Results logged to /Library/Ruby/Gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out How can I install the RMagick RubyGem on Snow Leopard?
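
    The missing piece is ImageMagick itself: Magick-config ships with ImageMagick, and the gem's extconf.rb aborts when it cannot find that script on the PATH. Assuming Homebrew is available (MacPorts' port install ImageMagick works similarly), a sketch:

      $ brew install imagemagick   # provides Magick-config under /usr/local/bin
      $ which Magick-config        # confirm it is now on the PATH
      $ sudo gem install rmagick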

    Read the article

  • .htaccess authorization requiring username/password for every resource

    - by webworm
    I am using Apache2 on Ubuntu and I have been having some "weird" user authorization issues. I am using .htaccess to control access to my directories. I have many users and have grouped them into user groups, which are defined in a "group" file. I then use .htaccess within each directory to define which users have access to the directory and which do not. Here is an example .htaccess file. AuthUserFile /var/local/.htpasswd AuthGroupFile /var/local/groups AuthName "Username and Password Required" AuthType Basic require group design admin Everything is working with one exception. I added a new user to one of my groups, and though they can gain access to the directory, they are prompted for a username and password for every resource (e.g. images, CSS). After a while I can just keep selecting "cancel" and I will get a page with just HTML, with no images or CSS. I would think the browser would just cache the username/password. It seems to be working well for other users. Any thoughts?
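
    A hunch worth checking: browsers cache Basic credentials per realm string and path, so if any subdirectory serving the images or CSS has its own .htaccess with a different AuthName, or a require line that omits the new user's group, every such resource triggers a fresh 401. A sketch of what each nested .htaccess would need for the cached credentials to be reused (group names taken from the example above):

      AuthUserFile /var/local/.htpasswd
      AuthGroupFile /var/local/groups
      AuthName "Username and Password Required"   # must match the parent realm exactly
      AuthType Basic
      require group design admin                  # include the new user's group here too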

    Read the article

  • Setting up subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this and I believe it is possible to have Apache respond to a subdomain over SSL. I have domain.com responding on 80 and I do not need domain.com responding on 443. Rather, the only use I have for SSL is for the subdomain sub.domain.com. So my site should be http://domain.com http://www.domain.com https://sub.domain.com https://www.sub.domain.com My CNAME records are as follows sub.domain.com xxx.xx.xx.xxx *.sub.domain.com xxx.xx.xx.xxx The A record exists but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com NameVirtualHost xxx.xx.xx.xxx:443 <VirtualHost xxx.xx.xx.xxx:443> SSLEngine on SSLStrictSNIVHostCheck on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM ServerAlias sub.domain.com DocumentRoot /usr/local/www/ssl/documents/ SSLCertificateFile /root/sub.domain.com.crt SSLCertificateKeyFile /root/sub.domain.com.key Alias /robots.txt /usr/local/www/ssl/documents/robots.txt Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico Alias /js/libs /usr/local/www/ssl/documents/js/libs Alias /media/ /usr/local/www/documents/media/ Alias /img/ /usr/local/www/ssl/documents/img/ Alias /css/ /usr/local/www/ssl/documents/css/ <Directory /usr/local/www/ssl/documents/> Order allow,deny Allow from all </Directory> WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP} WSGIProcessGroup sub.domain.com WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi <Directory /usr/local/www/wsgi-scripts> Order allow,deny Allow from all </Directory> </VirtualHost> Now, it is important to mention that https://domain.com responds with what I have running from script.wsgi above, rather than it being served on https://sub.domain.com; sub.domain.com itself does not respond. Checking https://sub.domain.com causes a 105 error. This is a DNS error, but I am convinced the DNS does not have a problem with the CNAME records; they just point to my IP. Am I doing something that Apache cannot do?
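
    One detail that would explain https://domain.com being answered by this vhost: the block declares only a ServerAlias and no ServerName, and when no name matches, Apache falls back to the first vhost defined for that IP:port, so the lone :443 vhost answers everything. A hedged sketch of the fix (the 105 error, which is Chrome's ERR_NAME_NOT_RESOLVED, is still worth checking separately with dig sub.domain.com from the failing client):

      NameVirtualHost xxx.xx.xx.xxx:443
      <VirtualHost xxx.xx.xx.xxx:443>
          ServerName sub.domain.com        # name the vhost explicitly
          ServerAlias www.sub.domain.com
          SSLEngine on
          # ... rest of the configuration as above ...
      </VirtualHost>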

    Read the article

  • Why is Safari on my computer rendering all of the colors -- not just for images -- incorrectly?

    - by richardhenry
    I’m not just talking about image color profile issues; every single color that the browser renders is incorrect. It’s like it’s in its own color space (or something!). Screenshot: http://drp.ly/DJk1O (Opera, Safari, Chrome, Firefox) Spot the odd one out? Open this up in Photoshop or similar and try using the eyedropper to select the colour. Safari renders the same hex color completely differently. That color is set using a background-color declaration in CSS, so it should be identical in all four of those browsers. Here’s the HTML I was using: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>untitled</title> <style type="text/css"> body { background-color: #114742; } </style> </head> <body> </body> </html> Literally every website I’m viewing with my install of Safari is displaying colors incorrectly. The blue of the bar on Facebook is slightly less rich. This doesn’t occur on any other Macs I’ve tried. Any idea what’s happened to my Safari install?

    Read the article

  • Losing JSESSIONID when using ProxyHTMLURLMap

    - by Matthew Schmitt
    I've set up a reverse proxy between an Apache front-end and multiple Tomcat backends. The block of code below includes the ProxyHTMLURLMap param so that the HTML can be rewritten to remove the Tomcat context path. With this setup in place, after logging into my application, an initial JSESSIONID is set properly, but when navigating to any other page, this JSESSIONID is lost and another one is set by the application. I should mention that the initial login directs to a URL that includes the current context path (i.e. https://app.domain.com/context/home), but when navigating to another page, that context path is not present in the URL (i.e. https://app.domain.com/page2). <Proxy balancer://happcluster> BalancerMember ajp://happ01.h.s.com:8009 route=worker1 loadfactor=10 timeout=15 retry=5 BalancerMember ajp://happ02.h.s.com:8009 route=worker2 loadfactor=10 timeout=15 retry=5 BalancerMember ajp://happ03.h.s.com:8009 route=worker3 loadfactor=5 timeout=15 retry=5 BalancerMember ajp://happ04.h.s.com:8009 route=worker4 loadfactor=5 timeout=15 retry=5 BalancerMember ajp://happ05.h.s.com:8009 route=worker5 loadfactor=5 timeout=15 retry=5 ProxySet lbmethod=bytraffic ProxySet stickysession=JSESSIONID </Proxy> ProxyPass /context balancer://happcluster/context ProxyPass / balancer://happcluster/context/ <Location /context/> # Rewrite HTTP headers and HTML/CSS links for everything else ProxyPassReverse / ProxyPassReverseCookieDomain / app.domain.com ProxyPassReverseCookiePath / /context ProxyHTMLURLMap /context/ / # Be prepared to rewrite the HTML/CSS files as they come back # from Tomcat SetOutputFilter INFLATE;proxy-html;DEFLATE </Location> Has anyone ever run into a similar situation?
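
    A likely culprit, stated as a strong hunch: ProxyPassReverseCookiePath takes the internal (Tomcat) path first and the public path second, so the directive above rewrites in the wrong direction (ProxyPassReverseCookieDomain has the same internal-then-public argument order, so "/ app.domain.com" looks inverted as well). Tomcat issues JSESSIONID with Path=/context; for the browser to send the cookie on URLs such as /page2, it needs Path=/ on the way out:

      <Location /context/>
          ProxyPassReverse /
          # internal (Tomcat) path first, public path second:
          ProxyPassReverseCookiePath /context /
          ProxyHTMLURLMap /context/ /
          SetOutputFilter INFLATE;proxy-html;DEFLATE
      </Location>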

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update [closed]

    - by andrewcameron
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to initialise. I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update, as all of the sites on the server are running in 3.5. Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715) Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714) Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86 Windows Server 2003 Security Update for Windows Server 2003 (KB958687) Thanks for any light you can shed on this.
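
    For the record, resetting permissions by hand can be replaced by the framework's own repair switch: aspnet_regiis -ga grants a worker-process account the standard set of ASP.NET file and registry rights. The account name below is an assumption, since it depends on the application pool identity:

      cd %windir%\Microsoft.NET\Framework\v2.0.50727
      aspnet_regiis.exe -ga "NT AUTHORITY\NETWORK SERVICE"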

    Read the article

  • Mac OS Leopard: SyncServer process constantly using 100% CPU

    - by macca1
    I am running Leopard, which I upgraded from Tiger. I've been noticing that every once in a while the SyncServer process starts up and eats up all the CPU. The fans will start going at full blast and the laptop will slow down to a crawl. I need to force quit the process from Activity Monitor to get it under control. It disappears for a while, but eventually gets started again. I do have an iPhone as well that I sync, so I'm wondering if SyncServer might be an Apple process checking for my phone being plugged in. Edit: Tried iSync and the manual resetsync as suggested, but got this output: Vince-2:~ vince$ /System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl full 2010-03-12 08:03:50.230 perl[176:10b] SyncServer is unavailable: exception when connecting: connection timeout: did not receive reply PerlObjCBridge: NSException raised while sending reallyResetSyncData to NSObject object name: "ISyncServerUnavailableException" reason: "Can't connect to the sync server: NSPortTimeoutException: connection timeout: did not receive reply ((null))" userInfo: "" location: "/System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl line 16" ** PerlObjCBridge: dying due to NSException Vince-2:~ vince$ And during that, SyncServer started spinning up to 95-100% just like it always does.
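
    A commonly suggested remedy, offered as a sketch rather than a guaranteed fix: move the SyncServices data store aside so the sync server rebuilds it from scratch, since a corrupted store is a frequent cause of this runaway-CPU loop:

      $ killall SyncServer
      $ mv ~/Library/Application\ Support/SyncServices ~/Desktop/SyncServices.bak
      # SyncServer recreates the folder on the next sync; remove the
      # backup once things have been stable for a while.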

    Read the article

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of IllumOS with KVM. But essentially it is like Solaris and inherits from OpenSolaris. So even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault. My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorisation to /etc/security/auth_attr and associating that authorisation with my user. I then added the following to my SMF manifest for the service: <property_group name='general' type='framework'> <!-- Allow to be restarted--> <propval name='action_authorization' type='astring' value='solaris.smf.manage.my-server-process' /> <!-- Allow to be started and stopped --> <propval name='value_authorization' type='astring' value='solaris.smf.manage.my-server-process' /> </property_group> And this works well when imported. My unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone... all I see in that section is this: <property_group name='general' type='framework'> <property name='action_authorization' type='astring'/> <property name='value_authorization' type='astring'/> </property_group> Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
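
    One avenue worth checking, as an assumption about how the properties ended up layered: svccfg export without flags writes the service as delivered by its manifest, dropping values that live in the repository as administrative customizations, while the -a flag includes those too. Inspecting where the values actually sit may also help (the FMRI below is a placeholder):

      # export including administrative customizations, not just the
      # manifest defaults:
      $ svccfg export -a svc:/site/my-server-process
      # show the live values of the general property group:
      $ svcprop -p general svc:/site/my-server-process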

    Read the article

  • Rewriting html links with modproxyperlhtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and modproxyperlhtml. This is my scenario: Domain for the proxy: http : // www.myserver.com/ Destination server (the one behind the proxy): http : // myserver.foo.com/myapp/ I'm sorry that I have to space the URL but serverfault doesn't allow me to post more than two links as "spam protection mechanism" (ridiculous on a site where you ask questions about servers and it's really probable to post more than two times the same URL's to explain your question). The idea is to map http : // www.myserver.com/ to http : // myserver.foo.com/myapp/ . Note that the path on the proxy is / and on the destination server is /myapp/. All of the examples I can find on the net (like the one in the official documentation of modproxyperlhtml) are the other way around, i.e. path on the proxy /myapp/ and path on the destination server /. This is my current config that doesn't work: ProxyPass / http : // myserver.foo.com/myapp/ ProxyPassReverse / http : // myserver.foo.com/myapp/ PerlInputFilterHandler Apache2::ModProxyPerlHtml PerlOutputFilterHandler Apache2::ModProxyPerlHtml SetHandler perl-script PerlSetVar ProxyHTMLVerbose "On" LogLevel Info <Location / > # ProxyPassReverse /myapp/ PerlAddVar ProxyHTMLURLMap "/myapp/ /" PerlAddVar ProxyHTMLURLMap "http : // myserver.foo.com /" </Location> The examples use ProxyPassReverse inside the Location directive, but in my case that doesn't work; it only works when outside. With this configuration the links aren't being replaced as they should be; my guess is that the location isn't being found, and thus the rewrite rules aren't being applied. The error log only shows that it uncompresses the content and searches it, but doesn't find anything: [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n What could be wrong?

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to be initialise. I ran aspnet_regiis which didn't appear to fix the problem, so I ran FileMon which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update as all of the sites on the server are running in 3.5. Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715) Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714) Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86 Windows Server 2003 Security Update for Windows Server 2003 (KB958687) Thanks for any light you can shed on this.

    Read the article

  • Yahoo marked my mail as spam and says domainkey fails

    - by mGreet
    Hi, Yahoo is marking our mail as spam. We are using the PHP Zend framework to send the mail. The mail header says that the DomainKeys check failed. Authentication-Results: mta160.mail.in.yahoo.com from=mydomain.com; domainkeys=fail (bad sig); from=mydomain.com; dkim=pass (ok) We configured our SMTP server (the same server used to send mail from the Zend framework) in Outlook and sent the mail to Yahoo. This time Yahoo says DomainKeys passes. Authentication-Results: mta185.mail.in.yahoo.com from=speedgreet.com; domainkeys=pass (ok); from=speedgreet.com; dkim=pass (ok) The DomainKey is added to the mail header on our server, which is used by both the Outlook client and the PHP client. Yahoo accepts the mail sent from Outlook but does not accept the mail from the PHP client. As far as I know, signing the email is done on the server side with the help of the domain key. PHP and Outlook use the same server to sign the mail. So why is Yahoo handling them differently? What am I missing here? Any ideas? Can anyone help me?

    Read the article

  • IIS and ASP.NET

    - by sam
    I'm trying to add the ASP.NET feature on Windows 7. I tried to turn it on using Turn Windows Features On or Off, but it fails every time, so I downloaded the Web Platform Installer and tried it that way, and it fails also. Next I uninstalled .NET Framework 4, restarted (again!), reinstalled it, and tried the previous steps once more, but it fails the same way. I need this installed so I can view my site in IIS 7. Does anyone know what I can do to get this working? I've searched and searched, and everything fails. I get this error from the Web Platform Installer: Failed with 0x80070643 – Fatal Error during installation. Please help, I can't do my work without it working :( OK, I did a few things and now get this error: Server Error in '/pulse' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Could not load type 'pulsesite.MvcApplication'. Source Error: Line 1: <%@ Application Codebehind="Global.asax.vb" Inherits="pulsesite.MvcApplication" Language="VB" %> Source File: /pulse/global.asax Line: 1 Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1 I know it's just about changing the code, but I'm not good with C#; does anyone know how?
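
    Two hedged suggestions. First, on Windows 7 the IIS registration can be redone from an elevated command prompt once the framework is reinstalled, which is often the step the feature toggle is failing at. Second, the parser error means the compiled assembly containing pulsesite.MvcApplication was never deployed: the type lives in a DLL under the site's bin folder (presumably pulsesite.dll, name inferred from the type), and copying that folder alongside global.asax is what resolves that particular message.

      cd %windir%\Microsoft.NET\Framework\v4.0.30319
      aspnet_regiis.exe -i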

    Read the article

  • Firefox not displaying icons in KhanAcademy

    - by ADTC
    If you don't know what Khan Academy is, check it out. It's awesome. (For testing purposes you may view any video on the website.) My problem -- it's a minor problem, but annoying -- is that in Firefox (Windows 7), the icons below the video are shown as boxes with hex codes in them. This means the icons come from some font that isn't getting downloaded by Firefox. How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): I checked out the source and found that the font in question is called "FontAwesome". I found this question on S.O. that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to the Khan Academy servers, so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. Also, would manually downloading the font and adding it to Windows' Fonts folder help? I tried this with the TTF font, and it does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is: @font-face { font-family:'FontAwesome'; src:url('./fontawesome-webfont.eot'); src:url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'), url('./fontawesome-webfont.woff') format('woff'), url('./fontawesome-webfont.ttf') format('truetype'), url('./fontawesome-webfont.svg#FontAwesome') format('svg'); font-weight:normal; font-style:normal } [class^="icon-"]:before, [class*=" icon-"]:before { font-family:FontAwesome; font-weight:normal; font-style:normal; display:inline-block; text-decoration:inherit }
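
    The usual explanation for this Firefox-only symptom: Firefox, alone among these browsers at the time, enforces the same-origin policy for downloadable fonts, so if the fontawesome-webfont.* files are served from a different host without an Access-Control-Allow-Origin header, Firefox refuses them and falls back to the hex boxes. It also explains why installing the TTF locally does nothing, as the @font-face rule above only lists remote URLs, not a local() source. Since server access is out, a Greasemonkey sketch that swaps in a copy of the font served with CORS headers; the CDN URL below is an assumption and must be replaced with one that actually sends the header:

      // ==UserScript==
      // @name   FontAwesome CORS workaround (sketch)
      // @match  *://*.khanacademy.org/*
      // @grant  GM_addStyle
      // ==/UserScript==
      // Assumption: this URL serves the font with Access-Control-Allow-Origin;
      // substitute any host that does.
      GM_addStyle("@font-face { font-family: 'FontAwesome'; " +
          "src: url('https://netdna.bootstrapcdn.com/font-awesome/3.0.2/font/fontawesome-webfont.woff') format('woff'); }");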

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to be initialise. I ran aspnet_regiis which didn't appear to fix the problem, so I ran FileMon which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update as all of the sites on the server are running in 3.5. Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715) Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714) Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86 Windows Server 2003 Security Update for Windows Server 2003 (KB958687) Thanks for any light you can shed on this.

    Read the article

  • Python module: Trouble Installing Bitarray 0.8.0 on Mac OSX 10.7.4

    - by Gabriele
    I'm new here! I have trouble installing bitarray (version 0.8.0) on my Mac OS X 10.7.4. Thanks! ('gcc' does not seem to be the problem) Last login: Sun Sep 9 22:24:25 on ttys000 host-001:~ gabriele$ gcc -version i686-apple-darwin11-llvm-gcc-4.2: no input files host-001:~ gabriele$ Last login: Sun Sep 9 22:18:41 on ttys000 host-001:~ gabriele$ cd /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/bitarray-0.8.0/ host-001:bitarray-0.8.0 gabriele$ python2.7 setup.py install running install running bdist_egg running egg_info creating bitarray.egg-info writing bitarray.egg-info/PKG-INFO writing top-level names to bitarray.egg-info/top_level.txt writing dependency_links to bitarray.egg-info/dependency_links.txt writing manifest file 'bitarray.egg-info/SOURCES.txt' reading manifest file 'bitarray.egg-info/SOURCES.txt' writing manifest file 'bitarray.egg-info/SOURCES.txt' installing library code to build/bdist.macosx-10.6-intel/egg running install_lib running build_py creating build creating build/lib.macosx-10.6-intel-2.7 creating build/lib.macosx-10.6-intel-2.7/bitarray copying bitarray/__init__.py -> build/lib.macosx-10.6-intel-2.7/bitarray copying bitarray/test_bitarray.py -> build/lib.macosx-10.6-intel-2.7/bitarray running build_ext building 'bitarray._bitarray' extension creating build/temp.macosx-10.6-intel-2.7 creating build/temp.macosx-10.6-intel-2.7/bitarray gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c bitarray/_bitarray.c -o build/temp.macosx-10.6-intel-2.7/bitarray/_bitarray.o unable to execute gcc-4.2: No such file or directory error: command 'gcc-4.2' failed with exit status 1 host-001:bitarray-0.8.0 gabriele$
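
    The root cause: this Python was built with gcc-4.2, and distutils replays the compiler recorded at build time, but Xcode 4.4 no longer ships a gcc-4.2 binary. Overriding CC for the build is usually enough, since distutils propagates it to the link step when LDSHARED is unset; clang here is a reasonable but unverified stand-in:

      $ export CC=clang     # or CC=gcc, the llvm-gcc wrapper Xcode still provides
      $ python2.7 setup.py install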

    Read the article

  • Firefox cannot render icons from Font Awesome webfont set

    - by ADTC
    In Firefox (Windows 7), icons and glyphs that are called from the Font Awesome package do not render properly. An example of this can be seen on the Khan Academy website. Below the video the icons are shown as boxes with hex codes in them. This means that the font isn't getting downloaded by Firefox. How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): I found this question on Stack Overflow that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy servers so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. I've already tried manually downloading the font and adding it to Windows' Fonts folder but this does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is: @font-face { font-family:'FontAwesome'; src:url('./fontawesome-webfont.eot'); src:url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'), url('./fontawesome-webfont.woff') format('woff'), url('./fontawesome-webfont.ttf') format('truetype'), url('./fontawesome-webfont.svg#FontAwesome') format('svg'); font-weight:normal; font-style:normal } [class^="icon-"]:before, [class*=" icon-"]:before { font-family:FontAwesome; font-weight:normal; font-style:normal; display:inline-block; text-decoration:inherit }

    Read the article

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directive to enable some sort of Pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop, and that was working well (note that stuff such as myurl.com/css/mycss.css does not get redirected): RewriteEngine on RewriteCond ${REQUEST_URI} !^(index\.php$) RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^/?(.*)$ index.php/$1 [L] But now, as I have introduced SSL to my webpage, I want the following behaviour: I basically want the above behaviour for all pages except admin.php and login.php. Requests to those two pages should be redirected to the HTTPS part, whereas all other requests should be processed as specified above. I have come up with the following .htaccess, but it does not work. h*tps://myurl.com/shop does not get redirected to h*tp://myurl.com/index.php/shop, and h*tp://myurl.com/admin.php does not get redirected to h*tps://myurl.com/admin.php. RewriteEngine on RewriteCond %{HTTPS} on RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$) RewriteRule ^(.*)$ http://%{HTTP_HOST}/${REQUEST_URI} [R=301,L] RewriteCond %{HTTPS} off RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$) RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L] RewriteCond %{REQUEST_URI} !^(index\.php$) RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule ^/?(.*)$ index.php/$1 [L] I know it has something to do with rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!
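
    Two concrete bugs stand out: ${REQUEST_URI} uses $ where mod_rewrite variables need %, and REQUEST_URI always begins with a slash, so a pattern anchored as ^(admin\.php$|login\.php$) can never match. A corrected sketch along the lines of the original, with the two pages folded into one alternation:

      RewriteEngine on
      # Force admin.php and login.php onto HTTPS:
      RewriteCond %{HTTPS} off
      RewriteCond %{REQUEST_URI} ^/(admin|login)\.php$
      RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
      # Send everything else back to HTTP:
      RewriteCond %{HTTPS} on
      RewriteCond %{REQUEST_URI} !^/(admin|login)\.php$
      RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
      # Pretty URLs, as before:
      RewriteCond %{REQUEST_URI} !^/index\.php
      RewriteCond %{SCRIPT_FILENAME} !-f
      RewriteCond %{SCRIPT_FILENAME} !-d
      RewriteRule ^/?(.*)$ index.php/$1 [L]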

    Read the article

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this: ServerAlias *.example.com RewriteEngine on RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC] # <--- RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC] # AND is implicit with above RewriteRule ^/(.*)$ /foo/$1 [PT] (Note: It was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere like [L] or trying just [P]. It would either not show anything or get in loops. Also some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome—in any case—if this part can be done better.) Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework...which is unaware of subdomains. Is something like that possible?
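
    A sketch of the asked-for behaviour, resting on one property worth naming: %{THE_REQUEST} holds the client's original request line and is never altered by earlier internal rewrites, so it distinguishes a browser that literally asked for /foo/... (redirect it) from a request the rule above already rewrote (let the framework have it):

      # External requests that name /foo/... outright: bounce to the clean URL.
      RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
      RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/ [NC]
      RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]
      # Internal mapping, as before: THE_REQUEST still shows the original
      # URL for rewritten requests, so they fall through to the framework.
      RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
      RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]
      RewriteRule ^/(.*)$ /foo/$1 [PT]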

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / wordpress and amazon s3. Now I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this: /* If the request is for pictures, javascript, css, etc */ if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") { /* Remove the cookie and make the request static */ unset req.http.cookie; return (lookup); } I do not understand why this is done. Most of the examples also run NginX as a webserver. Now the question is, why would you use the varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files so that php-fpm / mysql don't get hit that much. Am I correct or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website, where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks on different cache apps and webserver apps. http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ NginX is actually faster in getting your static content, so it makes more sense to just let it pass. NginX works great with static files. -- Apart from that, most of the time static content is not even in the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the varnish cache is the last place where you want to have your static content stored.
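
    For completeness, the pass variant looks like this in the VCL of the Varnish 2.1/3.0 era; whether it is the right trade-off depends on the numbers above, since every passed request costs Varnish a backend round-trip while nginx serves static files cheaply anyway:

      sub vcl_recv {
          # Hand static files straight to the backend instead of caching them:
          if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
              return (pass);
          }
      }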

    Read the article

< Previous Page | 576 577 578 579 580 581 582 583 584 585 586 587  | Next Page >