Search Results

Search found 3536 results on 142 pages for 'vendor prefix'.

Page 99/142 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Why does my jQuery animation require an extra click in IE8 to finish?

    - by Eric Reynolds
    I am pretty new to jQuery in general, however the following code works perfectly in Chrome and Firefox, but not in IE8. In IE8, I have to click anywhere on the page to start the animation after selecting a radio button. Here is the code: $("input[name=method]").change(function() { if($("input:radio[name=method]:checked").val() == 'installer') { $('#download').slideUp(0).removeClass("vendorSize").text("Download").addClass("installerSize").slideDown(500); } else if($("input:radio[name=method]:checked").val() == 'url') { $('#download').slideUp(0).removeClass("installerSize").text("Download From Vendor Website").addClass("vendorSize").slideDown(500); } }); Anyone know why this breaks in IE8 but not in the other browsers? If you feel this would work better using .animate (not that I think it should matter), can you provide an example of how to code it? Thanks, Eric R
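    A hedged workaround sketch, assuming the usual IE8 quirk that "change" on a radio button only fires once the control loses focus (which would explain the extra click). Binding "click" alongside "change" makes the handler run immediately; the selectors and class names are the ones from the snippet above:

        // Sketch only: run the same handler on click as well as change so IE8 reacts at once.
        $("input[name=method]").bind("click change", function () {
            var method = $("input:radio[name=method]:checked").val();
            if (method === "installer") {
                $("#download").slideUp(0).removeClass("vendorSize")
                    .text("Download").addClass("installerSize").slideDown(500);
            } else if (method === "url") {
                $("#download").slideUp(0).removeClass("installerSize")
                    .text("Download From Vendor Website").addClass("vendorSize").slideDown(500);
            }
        });

    In browsers that fire both events the handler simply runs twice on the same state, which is harmless here.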

    Read the article

  • Server unreachable without www

    - by deamon
    My server is unreachable without the "www." prefix, even when trying it with ping. The DNS entry looks like this: $TTL 86400 @ IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. ( 2011010600 ; serial 14400 ; refresh 1800 ; retry 604800 ; expire 86400 ) ; minimum @ IN NS robotns3.second-ns.com. @ IN NS robotns2.second-ns.de. @ IN NS ns1.first-ns.de. @ IN A 1.2.3.4 localhost IN A 127.0.0.1 mail IN A 1.2.3.4 www IN A 1.2.3.4 ftp IN CNAME www imap IN CNAME www loopback IN CNAME localhost pop IN CNAME www relay IN CNAME www smtp IN CNAME www @ A DNS record of the same type for another domain on the same server is working with and without "www". And the VirtualHost config looks like this: <VirtualHost *:80> ServerName somewhere.com ServerAlias www.somewhere.com ServerSignature Off ... </VirtualHost> Any idea what could be wrong?
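    The zone above already has an apex record (@ IN A 1.2.3.4), so a hedged first step is to confirm whether the authoritative servers actually serve it; if they do but your resolver does not, the usual culprit is a serial number that was never bumped (or a zone that was never reloaded) after editing. A check sketch, with somewhere.com standing in for the real domain:

        # Ask an authoritative server directly, then the default resolver:
        dig +short somewhere.com A @ns1.first-ns.de
        dig +short www.somewhere.com A @ns1.first-ns.de
        dig +short somewhere.com A
        # Compare the serials the listed name servers hand out:
        dig +short somewhere.com SOA @ns1.first-ns.de
        dig +short somewhere.com SOA @robotns2.second-ns.de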

    Read the article

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). All works great but I can't open one page. I get the error "The request or reply is too large." My cache.log contains: 2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large 2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav' 2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header 2010/12/09 15:29:03| ctx: exit level 0 In my squid.conf I disabled the limitations of the request and reply header, without success: reply_body_max_size 0 allow all request_body_max_size 0 Does someone know why that doesn't work? Thank you very much. Squid Version: Squid Cache: Version 2.7.STABLE3 configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
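    A hedged guess at the configuration side: reply_body_max_size and request_body_max_size limit message bodies, while the cache.log lines point at the reply header parser. If your 2.7 build exposes the header-size directives (check the bundled squid.conf.default before relying on them), raising them would look like this:

        # Sketch only; verify these directive names exist in your squid.conf.default:
        request_header_max_size 64 KB
        reply_header_max_size 64 KB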

    Read the article

  • Twitter Bootstrap Collapsible Navbar Duplicating

    - by sixeightzero
    I am working on a project using Twitter Bootstrap. One thing that I noticed is that my pages have duplicate navbars when they are defined as collapsible and the page is resized smaller. Here is the duplicate NavBar: Here is the normal width NavBar: Code: <!DOCTYPE html> <html lang="en"> <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]--> <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]--> <!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]--> <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]--> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <title></title> <meta name="description" content=""> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="/assets/css/bootstrap.css"> <style> body { padding-top: 60px; } </style> <link rel="stylesheet" href="/assets/css/bootstrap-responsive.min.css"> <link rel="stylesheet" href="/assets/css/main.css"> <script>window.jQuery || document.write('<script src="/assets/js/vendor/jquery-1.8.1.min.js"><\/script>')</script> <script src="/assets/js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script> </head> <body class="dark"> <!--[if lt IE 9]> <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p> <![endif]--> <div class="navbar navbar-inverse navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse"> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </a> <a class="brand" href="#">Project name</a> <div class="nav-collapse collapse"> <ul class="nav"> <li class="active"><a href="#">Home</a></li> <li><a href="#about">About</a></li> <li><a href="#contact">Contact</a></li> <li class="dropdown"> <a href="#" class="dropdown-toggle" data-toggle="dropdown">Dropdown <b class="caret"></b></a> <ul class="dropdown-menu"> <li><a href="#">Action</a></li> <li><a href="#">Another action</a></li> <li><a href="#">Something else here</a></li> <li class="divider"></li> <li class="nav-header">Nav header</li> <li><a href="#">Separated link</a></li> <li><a href="#">One more separated link</a></li> </ul> </li> </ul> </div><!--/.nav-collapse --> </div> </div> </div> Has anyone else run into this and have some pointers?

    Read the article

  • From interpreted to native code: "dynamic" languages compiler support

    - by Daniel
    First, I am aware that "dynamic languages" is a term used mainly by a vendor; I am using it just as a container word for languages like Perl (a favorite of mine), Python, Tcl, Ruby, PHP and so on. They are interpreted, but what I am interested in here are languages with strong support for programmer efficiency and for the typical constructs of modern interpreted languages. My question is: are there dynamic languages that can be compiled efficiently into native executable code - typically for Windows platforms? Which ones? Maybe using some third-party ad-hoc tools? I am not talking about huge executables carrying a full interpreter with them or similar tricks, nor some smart module able to include its own dependencies or required modules, but honest, straight, standard, solid executable code. If not, is there some technical reason inhibiting the availability of such a best-of-both-worlds feature? Thanks! Daniel

    Read the article

  • What should I know to begin developing applications with smart cards?

    - by Muhammad Nour
    I am using .NET 2.0 and C#. The reader is an ACR83, which can be found at http://www.acs.com.hk/index.php?pid=product&id=ACR83, and for the card itself I am using an ACOS3-32, also from the same company: http://www.acs.com.hk/index.php?pid=product&id=ACOS3. I also have a .NET wrapper for the local winscard API from the vendor SDK. This is my first time developing apps with smart cards, and I need to know what I should learn to begin developing applications using them. For now I need to use the smart card for authentication in a login process in a simple login form: what should I put on the card and how should I read the contents from it? I also need to encrypt the contents.

    Read the article

  • How to Enable IPtables TRACE Target on Debian Squeeze (6)

    - by bernie
    I am trying to use the TRACE target of IPtables but I can't seem to get any trace information logged. I want to use what is described here: Debugger for Iptables. From the iptables man for TRACE: This target marks packets so that the kernel will log every rule which match the packets as those traverse the tables, chains, rules. (The ipt_LOG or ip6t_LOG module is required for the logging.) The packets are logged with the string prefix: "TRACE: tablename:chainname:type:rulenum " where type can be "rule" for plain rule, "return" for implicit rule at the end of a user-defined chain and "policy" for the policy of the built-in chains. It can only be used in the raw table. I use the following rule: iptables -A PREROUTING -t raw -p tcp -j TRACE but nothing is appended either in /var/log/syslog or /var/log/kern.log! Is there another step missing? Am I looking in the wrong place? edit Even though I can't find log entries, the TRACE target seems to be set up correctly since the packet counters get incremented: # iptables -L -v -t raw Chain PREROUTING (policy ACCEPT 193 packets, 63701 bytes) pkts bytes target prot opt in out source destination 193 63701 TRACE tcp -- any any anywhere anywhere Chain OUTPUT (policy ACCEPT 178 packets, 65277 bytes) pkts bytes target prot opt in out source destination edit 2 The rule iptables -A PREROUTING -t raw -p tcp -j LOG does print packet information to /var/log/syslog... Why doesn't TRACE work?
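    One hedged thing to check, taken from the man page excerpt above ("The ipt_LOG or ip6t_LOG module is required for the logging"): if that module is not loaded, the raw-table rule still counts packets, exactly as shown, but never writes a log line. A sketch of the check:

        # Is the logging backend loaded?
        lsmod | grep -i -e ipt_LOG -e nf_log
        # If not, load it (ip6t_LOG for IPv6) and watch the kernel log:
        modprobe ipt_LOG
        tail -f /var/log/kern.log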

    Read the article

  • Doctrine2 use of criteria inside the entity class

    - by Piotr Kowalczuk
    I am trying to write a method whose task is to return only selected elements of the collection of items associated with a particular entity. /** * @ORM\OneToMany(targetEntity="PlayerStats", mappedBy="summoner") * @ORM\OrderBy({"player_stat_summary_type" = "ASC"}) */ protected $player_stats; public function getPlayerStatsBySummaryType($summary_type) { if ($this->player_stats->count() != 0) { $criteria = Criteria::create() ->where(Criteria::expr()->eq("player_stat_summary_type", $summary_type)); return $this->player_stats->matching($criteria)->first(); } return null; } but I get this error: PHP Fatal error: Cannot access protected property Ranking\CoreBundle\Entity\PlayerStats::$player_stat_summary_type in /Users/piotrkowalczuk/Sites/lolranking/vendor/doctrine/common/lib/Doctrine/Common/Collections/Expr/ClosureExpressionVisitor.php on line 53 Any idea how to fix this?
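    A hedged reading of the error: when matching a Criteria against an already-loaded collection, ClosureExpressionVisitor first looks for an accessor derived from the field name and, failing that, reads the property directly, which is fatal on a protected property. One workaround sketch; the camelCase names are illustrative and the exact accessor lookup depends on your doctrine/common version, so check getObjectFieldValue() around the line in the stack trace (the @ORM\OrderBy annotation would need the same field rename):

        // Sketch only: give the visitor a field name it can resolve through a public getter.
        /** @ORM\Column(name="player_stat_summary_type", type="string") */
        protected $playerStatSummaryType;

        public function getPlayerStatSummaryType()
        {
            return $this->playerStatSummaryType;
        }

        // ...and filter on that field name in the Criteria:
        $criteria = Criteria::create()
            ->where(Criteria::expr()->eq("playerStatSummaryType", $summary_type));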

    Read the article

  • Cannot insert a breakpoint in a shared library

    - by ronan
    Friends, while debugging an application, one of the functions is defined in a shared library written by another vendor, and I get an error like: warning: Cannot insert breakpoint 0: in /opt/trims/uat/lib/libTIPS_Oleca.sl warning: This is because your shared libraries are not mapped private. To attach to a process and debug its shared libraries you must prepare the program with "/opt/langtools/bin/pxdb -s on a.out" or "chatr +dbg enable a.out". warning: Add this to your Makefile for debug builds warning: so that each rebuilt debuggable a.out would warning: have this feature turned on. Temporarily disabling shared library breakpoints:0 Now the problem is that I cannot modify the shared library. How do I resolve this error? Many thanks
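    A hedged reading of the warning: the pxdb/chatr step it asks for is applied to the executable being debugged, not to the vendor library, so not being able to rebuild libTIPS_Oleca.sl should not matter. A sketch, with the executable path as a placeholder:

        /opt/langtools/bin/pxdb -s on /path/to/your/program
        # or, per the warning text:
        chatr +dbg enable /path/to/your/program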

    Read the article

  • ffmpeg conversion problem

    - by user33126
    I installed ffmpeg and it shows the version and everything correctly, but even the info command ffmpeg -i Alice_In_Wonderland.mp4 gives a message like: FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --extra-cflags=-fPIC --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 0 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Nov 6 2009 19:11:04, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46) Seems stream 1 codec frame rate differs from container frame rate: 49.93 (9986/200) - 49.92 (599/12) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Alice_In_Wonderland.mp4': Duration: 00:01:39.65, start: 0.000000, bitrate: 542 kb/s Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16 Stream #0.1(und): Video: h264, yuv420p, 480x270, 49.92 tbr, 24.96 tbn, 49.93 tbc At least one output file must be specified Please tell me what the problem is.
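    A hedged note: "At least one output file must be specified" is the normal response when ffmpeg is run with only -i; it prints the stream information and then stops because no conversion was requested. A minimal conversion sketch, with the output name and format as placeholders:

        ffmpeg -i Alice_In_Wonderland.mp4 Alice_In_Wonderland.avi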

    Read the article

  • svn track brand new code base

    - by Fire Crow
    I'm at a company where we keep receiving new codebases from a third-party vendor, and we'd like to track the changes in Subversion. Is there a way to replace a branch with the new code and track the changes? Currently we just delete all files in the branch, then add the new files and commit. We'd like to track the files, but I haven't found a tool that will easily deal with all the .svn directories found in subfolders. Does anyone know a tool that will replace an svn directory with a new branch and create the respective modify, add and delete records as if the code base had been organically modified?
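    One hedged pointer: this is what the Subversion book calls a vendor branch, and the contrib script svn_load_dirs.pl exists precisely to replace the previous drop with a new one while generating the add, delete and modify records for you, so you never have to touch the .svn directories by hand. A sketch with placeholder URLs and paths:

        # First drop:
        svn import libfoo-1.0 http://svn.example.com/repos/vendor/libfoo/current -m "vendor drop 1.0"
        # Each later drop (optionally tagging it with -t):
        svn_load_dirs.pl -t libfoo-1.1 http://svn.example.com/repos/vendor/libfoo current /path/to/libfoo-1.1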

    Read the article

  • CAN Controller DLL with Java Application. Unable to open CAN port.

    - by Joseph Lim
    I am creating a Java application that controls a Controller Area Network (CAN) controller via a vendor-supplied can.dll file. can.dll contains a function bool openPort(DWORD memAddr) that allows the application to establish connection with the CAN controller. I wrote a C++ test application, loaded can.dll via LoadLibrary and found this function to be working as it should, i.e. it returns true. However, in my Java application, calling this via JNI or JNA returns false. I hope someone can help me with this problem as I have been trying to fix this problem for more than a week. Thanks :) JL
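    A hedged guess plus sketch: the most common reason a DLL call behaves differently through JNA than from C++ is a calling-convention or type-width mismatch. Most Windows vendor DLLs use stdcall, so the JNA interface should extend StdCallLibrary, and DWORD maps to a 32-bit int, not a Java long. Only openPort comes from the question; the library name passed to loadLibrary and the address in the usage line are assumptions:

        import com.sun.jna.Native;
        import com.sun.jna.win32.StdCallLibrary;

        public interface CanLibrary extends StdCallLibrary {
            CanLibrary INSTANCE = (CanLibrary) Native.loadLibrary("can", CanLibrary.class);

            boolean openPort(int memAddr);   // BOOL openPort(DWORD memAddr)
        }

        // usage: boolean ok = CanLibrary.INSTANCE.openPort(0xD0000);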

    Read the article

  • SQL SERVER 2005 with Windows 7 Problems

    - by azamsharp
    First of all, I restored the database from another server and now all the stored procedures are named [azamsharp].[usp_getlatestposts]. I think [azamsharp] is prefixed since it was the user on the original server. Now, on my local machine this does not run. I don't want the [azamsharp] prefix on all the stored procedures. Also, when I right-click on the sproc I cannot even see the properties option. I am running SQL Server 2005 on Windows 7. UPDATE: The weird thing is that if I access the production database from my machine I can see the properties option. So, there is really something wrong with Windows 7 security. UPDATE 2: When I ran the orphan users stored procedure it showed two users, "azamsharp" and "dbo1". I fixed the "azamsharp" user but "dbo1" is not getting fixed. When I run the following script: exec sp_change_users_login 'update_one', 'dbo1', 'dbo1' I get the following error: Msg 15291, Level 16, State 1, Procedure sp_change_users_login, Line 131 Terminating this procedure. The Login name 'dbo1' is absent or invalid.
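    For the dbo1 part, a hedged reading: Msg 15291 means there is no server login named dbo1 to map the orphaned database user to. A sketch (the password is a placeholder): create the login first and re-run the mapping, or let Auto_Fix create it in one step:

        CREATE LOGIN dbo1 WITH PASSWORD = 'placeholder-Str0ng!';
        EXEC sp_change_users_login 'Update_One', 'dbo1', 'dbo1';
        -- or:
        EXEC sp_change_users_login 'Auto_Fix', 'dbo1', NULL, 'placeholder-Str0ng!';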

    Read the article

  • How to make sure web services are kept stable from one release to the next?

    - by Tor Hovland
    The company where I work is a software vendor with a suite of applications. There are also a number of web services, and of course they have to be kept stable even if the applications change. We haven't always succeeded with this, and sometimes a customer finds that a service is not behaving as before after upgrading. We now want to handle this better. In general, web services shouldn't change, and if they have to, at least we will know about it and document the change. But how do we ensure this? One idea is to compare the WSDL files with the previous versions at every release. That will make sure the interfaces don't change, but it won't detect that the behavior changes, for example if a bug is introduced in some common library. Another idea is to build up a suite of service tests, for example using soapUI. But then we'll never know if we have covered enough cases. What are some best practices regarding this?

    Read the article

  • Why is it necessary to chmod o+r the parent directory to fix a 403 access forbidden error with Nginx and Passenger?

    - by davenolan
    This may be an Nginx wrinkle, or it may be because I don't understand Unix permissions. We're using Hudson CI to deploy our staging instance. So RAILS_ROOT is /var/lib/hudson/jobs/JOBNAME/workspace. Hudson runs as hudson user Nginx runs as www-data user hudson and nginx are both members of the www group root of my nginx conf points to RAILS_ROOT/public as per normal. RAILS_ROOT/config/environment.rb is owned by www-data (so Passenger runs as www-data) RAILS_ROOT and everything in it is owned by the www group and group has r/w/x permissions As it stood, Nginx threw 403 permission denied when requesting any URL. error.log contained entries like this: public/index.html" is forbidden (13: Permission denied). These did not fix or change the error (each with a stop/start of Nginx): chmod 777 -R RAILS_ROOT chgrp www -R /var/lib/hudson I also tried Nginx as root, and passenger complained that it could not find config/environment (despite the path displayed on the error page being correct). The fix was to ensure everybody has read permissions on each directory in the hierarchy. In this case chmod o+r /var/lib/hudson. But if the group has read permissions on the directory, and nginx is a member of the owner group of the directory, why was it necessary to allow everyone read permissions? Is there something I have not grokked about permissions? $nginx -V nginx version: nginx/0.7.61 built by gcc 4.4.1 (Ubuntu 4.4.1-4ubuntu8) configure arguments: --prefix=/opt/nginx --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-2.2.5/ext/nginx --with-http_ssl_module --with-pcre=~/src/pcre-8.00/ --with-http_stub_status_module $cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=9.10 DISTRIB_CODENAME=karmic DISTRIB_DESCRIPTION="Ubuntu 9.10"
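    A hedged explanation of the "why": reading a file requires read permission on the file itself but execute (search) permission on every directory from / down to it, for whichever of the owner/group/other bits applies to the nginx worker's user. So the question is really which ancestor directory was denying search to www-data; note that chgrp changes the owning group but not the mode bits, so a directory such as /var/lib/hudson can still deny group access after the chgrp. A sketch of how to see exactly where traversal fails, using the path from the question:

        namei -m /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html
        sudo -u www-data cat /var/lib/hudson/jobs/JOBNAME/workspace/public/index.html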

    Read the article

  • PHP 5.4.9 Mysqli issue

    - by Vitaly
    On an Ubuntu 12.04 server I had PHP 5.4.9 installed from source: ./configure --prefix=/etc/php --with-apxs2=/etc/apache2/bin/apxs --with-config-file-path=/etc/php --with-config-file-scan-dir=/etc/php/conf.d --with-libxml-dir=/usr/local/libxml2 --with-xsl=/usr/local/libxslt --with-mysql --with-zlib --with-pdo-mysql --enable-calendar --with-gd --with-iconv-dir --enable-mbstring --enable-soap --enable-sockets --enable-zip --with-curl --with-openssl --with-kerberos --with-tidy' Then, using apt-get, I had mysql server and phpMyAdmin installed. Unfortunately phpMyAdmin keeps saying that 'mysqli' and 'mcrypt' are not installed. php -m | grep mysqli just confirms it. So I tried to install mysqli with "apt-get install php5-mysqli", but just got a message to do it by means of "php5-mysqlnd" or "php5-mysql". Even though they are already installed (according to phpinfo()) I tried - doesn't work. However, in php.ini, there's mysqli stuff like "extension=php_mysqli.dll", but no "extension=mysqli.so". And a [MySQLi] block with some uncommented settings is also present. Since this is my first attempt to build php from source I reckon I made some silly mistake. Any help is greatly appreciated.
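    A hedged diagnosis: nothing in the ./configure line above enables mysqli or mcrypt, so those extensions were never compiled, and the extension=php_mysqli.dll entries in php.ini are Windows-style lines that do nothing on Linux. A rebuild sketch (keep your existing flags; --with-mysqli=mysqlnd avoids needing the MySQL client headers, and --with-mcrypt needs libmcrypt-dev installed):

        ./configure [...your existing options...] --with-mysqli=mysqlnd --with-mcrypt
        make && make install
        php -m | grep -E 'mysqli|mcrypt'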

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to puppet. As such, I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this: # nodes.pp node base_dev { $service_env = 'dev' } node 'service1.ownij.lan' inherits base_dev { include global_env_specific } class global_env_specific { include shell::bash } # modules/shell/bash.pp class shell::bash inherits shell { notify{"Service env: ${service_env}": } file { '/etc/profile.d/custom_test.sh': content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'), mode => 644 } } But every time I run puppet agent --test puppet complains that it can't find the shell/bash/$service_env.erb file, but I double-checked that it exists. I know the var is accessible due to the notify statement outputting the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and does not change much across environments, but for more complex configs (httpd, solr, etc) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to just handle this behavior at the template level, instead of having several, closely named directories. Thanks.
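    A hedged suspicion: in Puppet, single-quoted strings are not interpolated, so 'shell/bash/$service_env.erb' is looked up with a literal $service_env in the name, which matches the error. Switching that one argument to double quotes (the resource is otherwise the one from the manifest above) would look like:

        file { '/etc/profile.d/custom_test.sh':
          content => template('_global/prefix.erb', 'shell/bash/global.erb', "shell/bash/${service_env}.erb"),
          mode    => 644
        }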

    Read the article

  • MacOS creates a new mount on AFP path calls

    - by jAndy
    Hi folks, here is the scenario: In my webapp, my customers are using Firefox as the target browser. They need to open afp:// folders via Javascript. To make a long story short, this really works. You need to set up Firefox with about:config and set the value network.protocol-handler.external.afp to true. What happens then is that the operating system (OSX) takes care of that path and it correctly opens a Finder window. The problem: OSX creates a new mount every time. It cannot distinguish between afp://host/path/111 and afp://host/path/222 for instance. Furthermore, even if the afp path is 100% identical a new mount is created. It looks like this is the default behavior from OSX regardless of Firefox. So, is there any chance I can tell OSX not to create a new mount for some subdirectories which should get access over afp:// ? update: It looks like there are OSX applications which can change the default behavior for network protocols. So you can change "somewhere" which application OSX should call for a protocol. If that is true, wouldn't it be possible to create a script which just opens the local path without an afp:// prefix? The question here is, where is that configuration to tell OSX which application to use for a specific protocol. Any help welcome!

    Read the article

  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace defined in my dovecot (dovecot-2.0.9) configuration that looks like this: namespace { type = public separator = . prefix = news. location = maildir:/var/spool/news subscriptions = no } I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin: plugin { acl = vfile:/etc/dovecot/acls:cache_secs=300 } After perusing the documentation, it seemed as if, for a mail folder /var/spool/news/.foo.bar, I could place the following into /var/spool/news/.foo.bar/dovecot-acl: anyone rl But that doesn't have any effect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything, either. I've turned on mail debugging: mail_debug = yes But the log doesn't produce anything that appears to be relevant to ACL processing. I'm curious to know if anyone has gotten this to work correctly and if so if you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (e.g., the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.
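    One hedged thing to verify before chasing dovecot-acl file locations: the plugin {} block only configures the ACL plugin, it does not load it. If acl (and imap_acl, which exposes the ACL IMAP commands) are missing from mail_plugins, the ACL files are silently ignored, which would match the symptoms. The relevant dovecot 2.0 settings would look roughly like:

        mail_plugins = $mail_plugins acl
        protocol imap {
          mail_plugins = $mail_plugins imap_acl
        }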

    Read the article

  • Service nginx reload: unexpected error

    - by Anna
    I'm trying to install wordpress on my nginx server by following this tutorial: http://premium.wpmudev.org/blog/how-to-setup-your-own-nginx-powered-wordpress-server/ However, the last command at step 7 gave me a strange error: service nginx reload A copy-paste from my terminal: root@server:~# service nginx reload Reloading nginx configuration: nginx: [emerg] unexpected "o" in /etc/nginx/sites-enabled/wordpress:7 nginx: configuration file /etc/nginx/nginx.conf test failed When I nano into sites-enabled/wordpress, on the 7th line I can't find anything strange: <!DOCTYPE html> <html class=" "> <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb# object: http://ogp.me/ns/object# article: http://ogp.me/ns/article# profile: http://ogp.me/ns/profile#"> <meta charset='utf-8'> <meta http-equiv="X-UA-Compatible" content="IE=edge"> Also, I don't see any obvious errors in my nginx.conf file, but maybe I'm not checking something? The first couple of lines of the nginx config file: user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } Any help is appreciated, thanks a lot in advance!
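    A hedged reading of the error: what nano is showing is an HTML page (note the <!DOCTYPE html>), which suggests the file that ended up in /etc/nginx/sites-enabled/wordpress is a saved web page rather than an nginx config, and that is what nginx trips over. The file should contain a server block instead; a minimal sketch with the domain, document root and PHP socket as placeholders:

        server {
            listen 80;
            server_name example.com;
            root /var/www/wordpress;
            index index.php;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }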

    Read the article

  • Java Persistence API

    - by Yatendra Goel
    I am new to the Java Persistence API. I have just learnt it and now want to use it in my Java desktop application. But I have the following questions regarding it: Q1. Which JPA implementation is smallest in size (as I want to keep my application's size as small as possible)? Q2. How do I find the value of the <provider> tag in the persistence.xml file? I know that its value is vendor-specific but I couldn't find the value for the JPA implementation downloaded from here.
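    For Q2, a hedged pointer: the <provider> element names the vendor's implementation of javax.persistence.spi.PersistenceProvider, and the exact class name can be read out of the provider jar itself, in META-INF/services/javax.persistence.spi.PersistenceProvider. A persistence.xml sketch (the unit name is a placeholder; keep only the provider line that matches the jar you downloaded):

        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
          <persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
            <!-- EclipseLink -->
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <!-- Hibernate: org.hibernate.ejb.HibernatePersistence -->
            <!-- OpenJPA:   org.apache.openjpa.persistence.PersistenceProviderImpl -->
          </persistence-unit>
        </persistence>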

    Read the article

  • Symfony2 : How to make the php_intl extension available for Symfony2?

    - by Miles M.
    I'm trying to follow this documentation on Symfony: http://symfony.com/doc/current/book/forms.html OK, so here is my thing: I've externalised my form and created a specific form class for handling the process and being able to reuse it. So what happens when I submit the form, whether the info is okay or not for my class, is that I get this fatal error: Fatal error: Call to a member function setAttribute() on a non-object in C:\Program Files (x86)\wamp\www\QNetworks\vendor\symfony\src\Symfony\Component\Form\Extension\Core\DataTransformer\NumberToLocalizedStringTransformer.php on line 130 Call Stack I'm running with php 5.3.9 and my intl extension is installed and activated BUT when I run the app/check.php command I see: [[WARNING]] Checking that the intl extension is available: FAILED * Install and enable the intl extension (used for validators) * So I don't understand what the problem with this extension is. Should I reinstall it? When I go here: http://php.net/manual/en/intl.requirements.php I see that I can install the PECL or the ICU library, but I don't know if I should, or if there is any relation to my problem. Thanks for your help!
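    A hedged sketch for a WAMP setup (paths are placeholders): the intl extension reported for the CLI can differ from the one Apache loads, so enable it in the php.ini that Apache actually uses (check "Loaded Configuration File" in phpinfo()), make the ICU DLLs shipped with PHP reachable by Apache (commonly by copying the icu*.dll files next to httpd.exe or adding the PHP directory to PATH), and restart Apache:

        ; in the php.ini loaded by Apache
        extension=php_intl.dll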

    Read the article

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going. Addresses, memorizing of. Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project. Because of how our network works we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done. We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6 even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what short-cut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6 we might get the project going.

    Read the article

  • Custom prerequisites for a Visual Studio setup project

    - by Sandy
    I have a Visual Studio Setup project and have followed the steps mentioned in this link to add the Shared Add-in Support Update for the Microsoft .NET Framework 2.0 (KB908002) to the prerequisites list. The entry appears, but the following warning is shown: No 'HomeSite' attribute has been provided for 'Shared Add-in Support Update for Microsoft .NET Framework 2.0 (KB908002)', so the package will be published to the same location as the bootstrapper. I use the "Download component from the component vendor's website" option. How do I set a HomeSite for this update so that it is downloaded and installed directly? I do not want to distribute the update along with my setup. Thanks
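    A hedged sketch of where the attribute lives: each prerequisite is described by a bootstrapper manifest under the SDK's Bootstrapper\Packages folder, and the warning goes away once the <PackageFile> entry there carries a HomeSite URL plus a Hash or PublicKey so the download can be verified. The file name, URL and hash below are placeholders; check the package's own product.xml/package.xml for the real values:

        <PackageFiles>
          <PackageFile Name="KB908002Setup.exe"
                       HomeSite="http://download.example.com/KB908002/KB908002Setup.exe"
                       Hash="0123456789ABCDEF0123456789ABCDEF01234567"/>
        </PackageFiles>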

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >