Search Results

Search found 8818 results on 353 pages for 'undefined behavior'.

Page 207/353 | < Previous Page | 203 204 205 206 207 208 209 210 211 212 213 214  | Next Page >

  • How do I access my webserver on my desktop PC from my laptop?

    - by Steven
    I'm running Apache on my desktop PC and I would like to access a website on it from my laptop. This is some of the Apache config:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerName mysite.com
            DocumentRoot I:/wamp/www/mysite/
        </VirtualHost>
        ServerName localhost:80
        <Directory />
            Options FollowSymLinks
            AllowOverride all
            Order deny,allow
            Deny from all
        </Directory>

    On my laptop I've added the following to the HOSTS file: 10.0.0.3 mysite.com. But accessing the page through mysite.com is not very successful. If I enter the IP address directly, I only get a Forbidden message. What do I need to do in order to get this to work?

    Update: I'm running WAMPSERVER 2.1 (Apache 2.2.17). Apache is up and running. I can ping 10.0.0.3 from the laptop, but I can't reach http://mysite.com from it. IE gives me "403 Forbidden - The website declined to show this webpage". The only log that gets entries when I try to access the website from my laptop is access.log:

        access.log:       10.0.0.4 - - [13/Jun/2011:10:14:04 +0200] "GET / HTTP/1.1" 403 202
        apache_error.log: [Mon Jun 13 10:08:16 2011] [error] VirtualHost localhost:0 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    Update 2: My Apache config has the following entry:

        AllowOverride all
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1

    Could it be that this "Allow from" is stopping other computers accessing the page?
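    A note on the 403: with Order Deny,Allow plus Deny from all and only Allow from 127.0.0.1, every request that does not originate on the server itself is refused, which matches the 403 logged for 10.0.0.4. A minimal sketch of a directory block that also admits the laptop, using the same Apache 2.2 syntax (the 10.0.0.0/24 range is an assumption; substitute your own LAN):

        <Directory "I:/wamp/www/mysite/">
            Options FollowSymLinks
            AllowOverride All
            Order Deny,Allow
            Deny from all
            # allow the server itself and the local network (range assumed)
            Allow from 127.0.0.1
            Allow from 10.0.0.0/24
        </Directory>

    Binding NameVirtualHost and the VirtualHost to *:80 instead of 127.0.0.1:80 is also worth trying, since the laptop reaches the box as 10.0.0.3 rather than 127.0.0.1, and it should silence the "mixing * ports and non-* ports" warning in apache_error.log.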

    Read the article

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of my sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:

    1. They said I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found.
    2. My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    3. The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    4. I have some Wordpress sites with plugins getting errors like:
       PHP Warning: pack(): Type H: illegal hex digit G in...
       PHP Fatal error: Cannot use object of type stdClass as array in...
       PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
       PHP Fatal error: Call to undefined function file_exists() in...
       PHP Parse error: syntax error, unexpected '<'

    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server: RAM: 2048MB, CPU Shares: 40, Primary Disk: 50GB, Data Transfer: 75GB, Port Speed: 5Mbps, Type: Linux.

    Read the article

  • Configuring PAM with pam_mount; getting a dlopen() with an HX_Init error

    - by Jamie
    I'm trying to get automounting upon login working on Ubuntu 10.03 Beta 2. I didn't find a package for pam_mount, so I ended up downloading it and building it. This required:

        sudo apt-get install build-essential pkg-config libxml2-dev libssl-dev libpam-dev

    Additionally, libHX-dev is required, but as of yesterday (23/4/2010) the package version provided (3.2) wasn't up to snuff (3.4 is needed), so I downloaded, compiled and installed that too. Then:

        cd ./pam_mount-1.36/ && ./configure && make && sudo make install

    When I tried it (pam_mount) I got this in my auth log:

        Apr 23 12:18:02 ubuntu sshd[1195]: PAM unable to dlopen(/lib/security/pam_mount.so): /lib/security/pam_mount.so: undefined symbol: HX_init
        Apr 23 12:18:02 ubuntu sshd[1195]: PAM adding faulty module: /lib/security/pam_mount.so
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.20.182 user=jrisk
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): getting password (0x00000388)
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): pam_get_item returned a password
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): user 'jrisk' granted access
        Apr 23 12:18:06 ubuntu sshd[1195]: Accepted password for jrisk from 192.168.20.182 port 4369 ssh2
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:session): session opened for user jrisk by (uid=0)

    What do I need to do to get HX_init into the system? This is related to an answer I previously got here.
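    The "undefined symbol: HX_init" usually means pam_mount.so is resolving against an older libHX, or not finding the freshly built one at all, which is easy to end up with when the 3.4 build was installed under the default /usr/local prefix while the runtime linker still prefers the distro copy. A hedged sketch of the checks and the usual fix (the /usr/local/lib path is an assumption based on a stock ./configure):

        # confirm where the new library actually landed
        ls -l /usr/local/lib/libHX.so*

        # make sure the runtime linker searches it, then refresh the cache
        echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/local-libhx.conf
        sudo ldconfig

        # pam_mount.so should now resolve libHX against the 3.4 build
        ldd /lib/security/pam_mount.so | grep -i libhx

    If ldd still points at the packaged 3.2 copy, removing that package (or rebuilding pam_mount once ldconfig has run) is the next thing to try.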

    Read the article

  • Dealing with upgrade of libevent on Amazon AWS

    - by Dreen
    I am building an application (in Python) on Amazon EC2 that has the following dependency chain: gevent-websocket ---> gevent ---> libevent. The last one (libevent) got upgraded on Sunday and my server is now generating this error:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
            from gevent import core
        ImportError: libevent-1.4.so.2: cannot open shared object file: No such file or directory

    Not wanting to spend much time on the issue, I tried to mitigate it by creating a symlink to an always-recent version:

        $ sudo ln -s /usr/lib64/libevent.so /usr/lib64/libevent-1.4.so.2

    But it didn't quite work:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
            from gevent import core
        ImportError: /usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/core.so: undefined symbol: current_base

    I am a bit stumped as to how to proceed. Should I create more symlinks? To what? Or is there a better way to solve this problem?

    PS. For the record, I am using the Amazon AMI.
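    The symlink can't work here because this is an ABI problem rather than a file-name problem: gevent's core.so was compiled against libevent 1.4 and expects symbols (such as current_base) that the upgraded library no longer exports. The usual options are to reinstall the exact libevent 1.4 package the egg was built against, or to rebuild gevent against whatever libevent is now on the box. A rough sketch of the rebuild route (package names are assumptions for an Amazon Linux instance):

        sudo rm /usr/lib64/libevent-1.4.so.2            # drop the misleading symlink
        sudo yum install -y libevent-devel gcc python-devel
        sudo pip uninstall -y gevent
        sudo pip install gevent==0.13.7                 # recompiles core.so against the installed libevent

    gevent 0.13.x targeted libevent 1.4 and its support for libevent 2.x was patchy, so if the rebuild fails, pinning the old libevent 1.4 RPM back is the safer fallback.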

    Read the article

  • Chrome will not load a web page with an <embed> element

    - by rossmcm
    I have been trying to get a simple sound web page going. Sound.html:

        <script>
        function PlaySound () {
            var sounder = document.getElementById ("ToneA") ;
            sounder.Play () ;
        }
        </script>
        <embed id="ToneA" height="1" width="1"
               src="https://dl.dropboxusercontent.com/u/311035/ToneA.mp3"
               autostart="false" enablejavascript="true" />
        <button onclick="PlaySound () ;">Play</button>

    The test web page is here. It plays in IE, but not in Firefox or Chrome. My problem: Chrome reports "Could not load VLC Plugin". It seems to be a known problem that the VLC community don't necessarily feel like fixing at the moment, and it is a result of Google choosing not to allow a certain kind of plugin. If I disable the plugin, I no longer get the message, but nothing happens when I click the button. Looking at the console in a debug window I see:

        Uncaught TypeError: undefined is not a function   Sound.html:7
            PlaySound   Sound.html:7
            onclick

    which suggests Chrome could not find anything else to handle the sound file. How do I tell Chrome to use (e.g.) Windows Media Player?

    UPDATE: This is apparently because the VLC plugin is an NPAPI plugin and Chrome no longer supports these. I have uninstalled VLC, and this has removed the error on loading the web page with an embedded sound element, but it still doesn't invoke WMP instead.
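    With NPAPI gone there is no plugin for Chrome to hand the <embed> to, so pointing it at Windows Media Player isn't really an option any more. A plugin-free sketch using the HTML5 audio element, which Chrome, IE9+ and current Firefox handle natively for MP3 (codec support permitting):

        <audio id="ToneA" preload="auto"
               src="https://dl.dropboxusercontent.com/u/311035/ToneA.mp3"></audio>
        <script>
        function PlaySound () {
            // same element id as before, but played via the browser's built-in audio support
            document.getElementById("ToneA").play();
        }
        </script>
        <button onclick="PlaySound();">Play</button>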

    Read the article

  • PHP extensions & Apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I came in on Friday and it was a mess: PHP extensions were missing or not working, and Apache modules were gone (e.g. oci_* was gone completely, odbc_* was not working but still there, and the Apache ntlm_auth for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if somehow those extensions and such, which we installed months ago, were somehow only saved in a local memory of sorts, and a restart has wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little/no sense to me.

    To expand on an example of something that's gone very wrong, the PHP odbc_* extension: it's still on the server, and it doesn't return "undefined function" or anything. But it just cannot connect to the datasource any more. I've tested it through the command line and it's working perfectly fine with that datasource and login details, but all of a sudden putting it in the PHP odbc_connect() function just can't connect:

        [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source.

    But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect, and we've not changed anything; it's just now all of a sudden not working through the PHP function. Does anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.
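    Nothing installed as root should evaporate on a reboot, but a reboot can easily start a different Apache/PHP build than the one that was configured, and on CentOS it can also bring SELinux enforcement back, which is a classic cause of "works from the shell, fails under Apache" for outbound connections like FreeTDS/ODBC. A few hedged checks to narrow it down:

        # which binaries and config files are actually in use after the reboot
        httpd -V | grep -i server_config_file
        php -i | grep "Loaded Configuration File"
        php -m | grep -iE 'oci|odbc'

        # SELinux: by default httpd is not allowed to open outbound network
        # connections, even though the same code works from an interactive shell
        getenforce
        getsebool httpd_can_network_connect
        # if SELinux is enforcing and the boolean is off, this is one thing to test:
        sudo setsebool -P httpd_can_network_connect on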

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of an Apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite that being the case, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                                   [  OK  ]

    And all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.
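    PassEnv can only forward variables that already exist in the environment of the Apache parent process, and an init-script start does not source /etc/profile.d/*.sh, which is why every interactive shell sees MYVAR but httpd does not. Two hedged ways to get it there (paths assume the RHEL/CentOS layout implied by /sbin/service):

        # 1) put it where the init script sources environment before starting httpd
        echo 'export MYVAR=myval' | sudo tee -a /etc/sysconfig/httpd
        sudo /sbin/service httpd restart

        # 2) or skip PassEnv and set it in the vhost itself:
        #        SetEnv MYVAR myval

    With the first route the variable becomes part of the httpd process environment, so os.getenv() in a wsgi script can see it; SetEnv and PassEnv, by contrast, only surface the value in the per-request WSGI environ dictionary, not in os.environ.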

    Read the article

  • AVAudioRecorder - Continue recording to file after user stops recording by leaving the application a

    - by Tegeril
    Can this be done? And if not, how far down towards Core Audio do I need to go (what method of recording should I be using instead)? I've noticed the behavior of AVAudioRecorder is to overwrite a file if it finds one at the path provided when you request that it record again, so I know that's not going to work. I'm also curious about file format restriction with this idea. Can you effectively resume an AAC or IMA4 encoding (the length of the files I want to record make WAV and probably even Apple Lossless prohibitive)? Thanks.

    Read the article

  • WPF TreeView databinding to hide/show expand/collapse icon

    - by Julian Lettner
    I implemented a WPF load-on-demand TreeView as described in this (very good) article. In the solution mentioned there, a dummy element is used to preserve the expand icon / TreeViewItem behavior, and the dummy item is replaced with real data when the user clicks the expander. I want to refine the model by adding a property to my backing TreeNodeViewModel:

        public bool HasChildren { get { ... } }

    Question: how can I bind this property to hide/show the expand icon (in XAML)? I am not able to find a suitable trigger/setter combination. (INotifyPropertyChanged is properly implemented.) Thanks for your time.

    Update 1: I want to use my HasChildren property instead of using the dummy element. Determining whether or not an item has children is somewhat costly, but still much cheaper than fetching the children.
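    For what it's worth, the stock TreeViewItem template shows or hides its expander based on the item's own HasItems property, so there is no plain trigger/setter that maps a view-model HasChildren onto it without re-templating the item. A hedged middle ground is to keep the article's dummy-child mechanism but let HasChildren decide whether the dummy is ever added, so the expensive children fetch still only happens on expand (C# sketch; ITreeNodeModel and DummyChild are assumed names):

        // sketch: only advertise an expander when the (cheap) HasChildren check says so;
        // the existing expand handler still swaps the dummy for real children
        public TreeNodeViewModel(ITreeNodeModel model)
        {
            _model = model;
            Children = new ObservableCollection<TreeNodeViewModel>();

            if (model.HasChildren)
                Children.Add(DummyChild);
        }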

    Read the article

  • WPF ListView ScrollViewer Double-Click Event

    - by Sentax
    Doing the following will reproduce my problem:

    1. Create a new WPF project.
    2. Add a ListView.
    3. Name the ListView: x:Name="lvList"
    4. Add enough ListViewItems to the ListView to fill the list completely, so a vertical scroll bar appears at run time.
    5. Put this code in the lvList.MouseDoubleClick event: Debug.Print("Double-Click happened")
    6. Run the application.
    7. Double-click on the LargeChange area of the scroll bar (not the scroll "bar" itself).
    8. Notice the Immediate window printing the "Double-Click happened" message for the ListView.

    How do I change this behavior so MouseDoubleClick only happens when the mouse is over the ListViewItems, and not when continually clicking the ScrollViewer to scroll down/up in the list?
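    One way to get that behavior is to hook the double-click on the item containers instead of on the ListView itself, so clicks on the ScrollViewer chrome never reach the handler. A sketch (the handler name is an assumption); the code-behind handler can then just do the Debug.Print:

        <ListView x:Name="lvList">
            <ListView.ItemContainerStyle>
                <Style TargetType="ListViewItem">
                    <!-- raised only when the double-click happens over an item -->
                    <EventSetter Event="MouseDoubleClick" Handler="ListViewItem_MouseDoubleClick" />
                </Style>
            </ListView.ItemContainerStyle>
        </ListView>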

    Read the article

  • The remote server returned an unexpected response: (400) Bad Request while streaming

    - by phenevo
    Hi, I have problem with streaming. When I send small file like 1kb txt everything is ok, but when I send larger file like 100 kb jpg or 2gb psd I get: The remote server returned an unexpected response: (400) Bad Request. I'm using windows 7, VS 2010 and .net 3.5 and WCF Service library I lost all my weekend on this and I still see this error :/ Please help me Client: var client = new WpfApplication1.ServiceReference1.Service1Client("WSHttpBinding_IService1"); client.GetString("test"); string filename = @"d:\test.jpg"; FileStream fs = new FileStream(filename, FileMode.Open); try { client.ProcessStreamFromClient(fs); } catch (Exception exception) { Console.WriteLine(exception); } app.config: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="StreamedHttp" closeTimeout="10:01:00" openTimeout="10:01:00" receiveTimeout="10:10:00" sendTimeout="10:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536000" maxBufferPoolSize="524288000" maxReceivedMessageSize="65536000" messageEncoding="Text" textEncoding="utf-8" transferMode="Streamed" useDefaultWebProxy="true"> <readerQuotas maxDepth="0" maxStringContentLength="0" maxArrayLength="0" maxBytesPerRead="0" maxNameTableCharCount="0" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary2/Service1/" binding="basicHttpBinding" bindingConfiguration="StreamedHttp" contract="ServiceReference1.IService1" name="WSHttpBinding_IService1" /> </client> </system.serviceModel> </configuration> And Wcf ServiceLibrary: public void ProcessStreamFromClient(Stream str) { using (var outStream = new FileStream(@"e:\test.jpg", FileMode.Create)) { var buffer = new byte[4096]; int count; while ((count = str.Read(buffer, 0, buffer.Length)) > 0) { outStream.Write(buffer, 0, count); } } } App.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="Binding1" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536000" transferMode="Streamed" bypassProxyOnLocal="false" closeTimeout="10:01:00" openTimeout="10:01:00" receiveTimeout="10:10:00" sendTimeout="10:01:00" maxBufferPoolSize="524288000" maxReceivedMessageSize="65536000" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <client /> <services> <service name="WcfServiceLibrary2.Service1"> <host> <baseAddresses> <add baseAddress="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary2/Service1/" /> </baseAddresses> </host> <!-- Service Endpoints --> <!-- Unless fully qualified, address is relative to base address supplied above --> <endpoint address="" binding="basicHttpBinding" contract="WcfServiceLibrary2.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. 
If removed, WCF will infer an appropriate identity automatically. --> <identity> <dns value="localhost"/> </identity> </endpoint> <!-- Metadata Endpoints --> <!-- The Metadata Exchange endpoint is used by the service to describe itself to clients. --> <!-- This endpoint does not use a secure binding and should be secured or removed before deployment --> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="True"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration>
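    One thing stands out in the service configuration: the streamed binding ("Binding1", mirrored by "StreamedHttp" on the client) is defined but the service endpoint never references it, so the service falls back to the default basicHttpBinding, which is Buffered with a 64 KB message limit. That matches the symptom exactly: a 1 KB file goes through, a 100 KB file is rejected with 400 Bad Request. A sketch of the service endpoint with the binding configuration attached (everything else unchanged):

        <endpoint address=""
                  binding="basicHttpBinding"
                  bindingConfiguration="Binding1"
                  contract="WcfServiceLibrary2.IService1">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>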

    Read the article

  • MSBuild target _CopyWebApplication does not copy all necessary files to the bin folder

    - by apollodude217
    Elsewhere on the Web you can find recommendations on using something like this to simulate the Publish feature of the VS 2005-2008 IDE from a command line (I hope I did not goof up the syntax!):

        msbuild /t:ResolveReferences;_CopyWebApplication /p:BuildingProject=true;OutDir=C:\inetpub\wwwroot\ blah.csproj

    Now, it looks like the .dlls copy fine. However, there are certain configuration files and template files that are copied to the bin folder which are needed for the app to work. For example, an NHibernate configuration file shows up in blah.csproj as:

        <None Include="blah.cfg.xml">
          <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
        </None>

    While using Publish from within the IDE copies this file as it should, the aforementioned _CopyWebApplication target does not, and I need this file to be copied in the build script. Is this the desired behavior for _CopyWebApplication? Any recommendations on how to fix this?
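    _CopyWebApplication only lays out the project's Content items and the built assemblies; the generic CopyToOutputDirectory handling for None items belongs to the ordinary Build target. A commonly used sketch is therefore to run a full build with OutDir pointed at the site's bin folder and let _CopyWebApplication lay out the rest (the paths are the question's own; WebProjectOutputDir is the standard web-application-project property, but verify it against your targets file):

        msbuild blah.csproj /t:Rebuild;_CopyWebApplication ^
            /p:Configuration=Release ^
            /p:WebProjectOutputDir=C:\inetpub\wwwroot\ ^
            /p:OutDir=C:\inetpub\wwwroot\bin\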

    Read the article

  • How and where to implement basic authentication in Kibana 3

    - by Jabb
    I have put my elasticsearch server behind a Apache reverse proxy that provides basic authentication. Authenticating to Apache directly from the browser works fine. However, when I use Kibana 3 to access the server, I receive authentication errors. Obviously because no auth headers are sent along with Kibana's Ajax calls. I added the below to elastic-angular-client.js in the Kibana vendor directory to implement authentication quick and dirty. But for some reason it does not work. $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); What is the best approach and place to implement basic authentication in Kibana? /*! elastic.js - v1.1.1 - 2013-05-24 * https://github.com/fullscale/elastic.js * Copyright (c) 2013 FullScale Labs, LLC; Licensed MIT */ /*jshint browser:true */ /*global angular:true */ 'use strict'; /* Angular.js service wrapping the elastic.js API. This module can simply be injected into your angular controllers. */ angular.module('elasticjs.service', []) .factory('ejsResource', ['$http', function ($http) { return function (config) { var // use existing ejs object if it exists ejs = window.ejs || {}, /* results are returned as a promise */ promiseThen = function (httpPromise, successcb, errorcb) { return httpPromise.then(function (response) { (successcb || angular.noop)(response.data); return response.data; }, function (response) { (errorcb || angular.noop)(response.data); return response.data; }); }; // check if we have a config object // if not, we have the server url so // we convert it to a config object if (config !== Object(config)) { config = {server: config}; } // set url to empty string if it was not specified if (config.server == null) { config.server = ''; } /* implement the elastic.js client interface for angular */ ejs.client = { server: function (s) { if (s == null) { return config.server; } config.server = s; return this; }, post: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); console.log($http.defaults.headers); path = config.server + path; var reqConfig = {url: path, data: data, method: 'POST'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, get: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on get request, data will be request params var reqConfig = {url: path, params: data, method: 'GET'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, put: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'PUT'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, del: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; var reqConfig = {url: path, data: data, method: 'DELETE'}; return promiseThen($http(angular.extend(reqConfig, config)), successcb, errorcb); }, head: function (path, data, successcb, errorcb) { $http.defaults.headers.common.Authorization = 'Basic ' + Base64Encode('user:Password'); path = config.server + path; // no body on HEAD request, data will be request params var reqConfig = {url: path, params: data, method: 'HEAD'}; return 
$http(angular.extend(reqConfig, config)) .then(function (response) { (successcb || angular.noop)(response.headers()); return response.headers(); }, function (response) { (errorcb || angular.noop)(undefined); return undefined; }); } }; return ejs; }; }]); UPDATE 1: I implemented Matts suggestion. However, the server returns a weird response. It seems that the authorization header is not working. Could it have to do with the fact, that I am running Kibana on port 81 and elasticsearch on 8181? OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.46.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.46.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache This is the response HTTP/1.1 401 Authorization Required Date: Fri, 08 Nov 2013 23:47:02 GMT WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 UPDATE 2: Updated all instances with the modified headers in these Kibana files root@localhost:/var/www/kibana# grep -r 'ejsResource(' . ./src/app/controllers/dash.js: $scope.ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/querySrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/filterSrv.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); ./src/app/services/dashboard.js: var ejs = ejsResource({server: config.elasticsearch, headers: {'Access-Control-Request-Headers': 'Accept, Origin, Authorization', 'Authorization': 'Basic XXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='}}); And modified my vhost conf for the reverse proxy like this <VirtualHost *:8181> ProxyRequests Off ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/cake2.2.4/.htpasswd Require valid-user Header always set Access-Control-Allow-Methods "GET, POST, DELETE, OPTIONS, PUT" Header always set Access-Control-Allow-Headers "Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization" Header always set Access-Control-Allow-Credentials "true" Header always set Cache-Control "max-age=0" Header always set Access-Control-Allow-Origin * </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost> Apache sends back the new response headers but the request header still seems to be wrong somewhere. Authentication just doesn't work. 
Request Headers OPTIONS /solar_vendor/_search HTTP/1.1 Host: 46.252.26.173:8181 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:25.0) Gecko/20100101 Firefox/25.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Origin: http://46.252.26.173:81 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization,content-type Connection: keep-alive Pragma: no-cache Cache-Control: no-cache Response Headers HTTP/1.1 401 Authorization Required Date: Sat, 09 Nov 2013 08:48:48 GMT Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS, PUT Access-Control-Allow-Headers: Content-Type, X-Requested-With, X-HTTP-Method-Override, Origin, Accept, Authorization Access-Control-Allow-Credentials: true Cache-Control: max-age=0 Access-Control-Allow-Origin: * WWW-Authenticate: Basic realm="Username/Password" Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 346 Connection: close Content-Type: text/html; charset=iso-8859-1 SOLUTION: After doing some more research, I found out that this is definitely a configuration issue with regard to CORS. There are quite a few posts available regarding that topic but it appears that in order to solve my problem, it would be necessary to to make some very granular configurations on apache and also make sure that the right stuff is sent from the browser. So I reconsidered the strategy and found a much simpler solution. Just modify the vhost reverse proxy config to move the elastisearch server AND kibana on the same http port. This also adds even better security to Kibana. This is what I did: <VirtualHost *:8181> ProxyRequests Off ProxyPass /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPassReverse /bigdatadesk/ http://127.0.0.1:81/bigdatadesk/src/ ProxyPass / http://127.0.0.1:9200/ ProxyPassReverse / https://127.0.0.1:9200/ <Location /> Order deny,allow Allow from all AuthType Basic AuthName “Username/Password” AuthUserFile /var/www/.htpasswd Require valid-user </Location> ErrorLog ${APACHE_LOG_DIR}/error.log </VirtualHost>

    Read the article

  • Add an objective @property attribute in objective-c

    - by morticae
    Does anyone know of a way to add additional attribute types to the @property keyword without modifying the compiler? Or can anyone think of another way to genericize getter/setter creation? Basically, I have a lot of cases in a recent project where it's handy for objects to lazily instantiate their array properties. This is because we have "event" objects that can have a wide variety of collections as properties. Subclassing for particular events is undesirable because many properties are shared, and it would become a usability nightmare. For example, if I had an object with an array of songs, I'd write a getter like the following:

        - (NSMutableArray *)songs {
            if (!songs) {
                songs = [[NSMutableArray alloc] init];
            }
            return songs;
        }

    Rather than writing dozens of these getters, it would be really nice to get the behavior via:

        @property (nonatomic, retain, lazyGetter) NSMutableArray *songs;

    Maybe some fancy tricks via #defines or something? Other ideas?
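    The @property attribute list itself is fixed by the compiler, but the boilerplate can be collapsed with an ordinary macro that expands to the lazy getter. A sketch (the macro name is made up, and it assumes the pre-ARC, retain-style ivars the example already uses):

        // expands to a getter that lazily creates the backing mutable array
        #define LAZY_ARRAY_GETTER(name) \
        - (NSMutableArray *)name { \
            if (!name) { \
                name = [[NSMutableArray alloc] init]; \
            } \
            return name; \
        }

        // usage inside the @implementation, one line per array property:
        // LAZY_ARRAY_GETTER(songs)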

    Read the article

  • How does one close a Popup in Silverlight 3 by pressing the Escape key?

    - by Jacob
    I've just implemented a context menu control in Silverlight 3. One feature lacking in this control is for the Esc key to dismiss the menu. I've tried adding a KeyUp event handler in a few places, but the handler is never called. It looks like KeyUp is only available for items that can have focus. The popup menu cannot have focus, however, as it is only an ItemsControl. Have any of you successfully implemented having the Esc key close a Popup, or do you have any other suggestions on how I can implement this behavior?
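    Since the Popup's ItemsControl never takes focus, the usual workaround is to listen for the key higher up, for example on the RootVisual, and close the menu from there. A sketch (contextMenuPopup is an assumed reference to the open Popup):

        // wire this up once the menu control is loaded
        Application.Current.RootVisual.KeyDown += (sender, e) =>
        {
            if (e.Key == Key.Escape && contextMenuPopup.IsOpen)
            {
                contextMenuPopup.IsOpen = false;
                e.Handled = true;
            }
        };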

    Read the article

  • Html.EditorFor, Html.DisplayFor not working on MVC1.0 -> MVC2.0 manual migration

    - by lawrence-chase
    Has anyone encountered this problem? I manually migrated an MVC 1.0 application to MVC 2.0, and everything so far seems to be working fine. Today I wanted to try out the Html.EditorFor helper, and it doesn't render the template. I set it up the same way in a fresh MVC 2.0 application and the template does render there. Is there anything needed beyond throwing a partial view like DateTime.ascx into Views/Shared/EditorTemplates and using the helper methods to render the model objects? Is there something specific that has to be done when migrating to activate this behavior? I'm stumped.

    Read the article

  • Space in search base OU causes error in Active Directory

    - by Jared Farrish
    Recently, while putting together some code to page Active Directory results beyond sizeLimit=1000, we ran into a strange behavior/bug of AD. Specifically, if we had an OU with a space in the search base, it caused an error:

        String base = "OU=Area X,OU=myserver,DC=my,DC=ad,DC=myserver,DC=com";
        env.put(Context.PROVIDER_URL, "ldap://my.ad.myserver.com:389/" + base);

    This is the error we received:

        javax.naming.NamingException: [LDAP: error code 1 - 000020D6: SvcErr: DSID-031007DB, problem 5012 (DIR_ERROR), data 0

    When we remove that OU, it works fine. What would cause this to occur? Do we need to encode the space somehow (+ and %20 only caused more issues)? Or is this generally illegal/unnecessary?
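    Spaces are legal in a DN but not, unescaped, in an LDAP URL, which is why moving the base into PROVIDER_URL breaks for "Area X". The usual way around it is to keep PROVIDER_URL as just host and port and pass the base DN to search(), where no URL escaping is involved. A sketch (javax.naming imports as in the original code; the filter is only an example):

        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://my.ad.myserver.com:389");   // no base DN here

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        String base = "OU=Area X,OU=myserver,DC=my,DC=ad,DC=myserver,DC=com";
        NamingEnumeration<SearchResult> results =
                ctx.search(base, "(objectClass=user)", controls);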

    Read the article

  • How to set ActiveModel::Base.include_root_in_json to false?

    - by Mark L
    I'm using Rails 3 with Mongoid (so no ActiveRecord). Mongoid uses ActiveModel's to_json method, and by default that method includes the root object in the JSON (which I don't want). I've tried putting this in an initializer:

        ActiveModel::Base.include_root_in_json = false

    but I get the error:

        uninitialized constant ActiveModel::Base

    Any ideas how I can change this? I changed the default directly in the source code and it worked fine, but obviously I'd like to do it properly. The variable is defined at the top of this file: Github - activemodel/lib/active_model/serializers/json.rb. From the docs: "The option ActiveModel::Base.include_root_in_json controls the top-level behavior of to_json. It is true by default."
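    There is no ActiveModel::Base constant; in Rails 3 the include_root_in_json flag is defined (as a class attribute) on each class that includes ActiveModel's JSON serializer, which is why the constant lookup fails. A hedged sketch of setting it per model (the Song class is an example; verify the attribute against your ActiveModel/Mongoid versions):

        class Song
          include Mongoid::Document

          # flip the ActiveModel serializer flag for this class only
          self.include_root_in_json = false
        end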

    Read the article

  • Selenium RC 403 Error - Forbidden for proxy

    - by FarmBoy
    I'm trying to run Selenium RC 1.0.3 using Java 6, JUnit 4, and Eclipse on Snow Leopard. Here is my test class, from the Selenium docs:

        public class TestCase extends SeleneseTestCase {
            @Before
            public void before() throws Exception {
                setUp("http://www.google.com/", "*firefox");
            }

            @Test
            public void test() {
                selenium.open("/");
                selenium.type("q", "selenium rc");
                selenium.click("btnG");
                selenium.waitForPageToLoad("30000");
                assertTrue(selenium.isTextPresent("Advanced search"));
            }
        }

    I've tried (finding various suggestions on the web) replacing *firefox with *chrome or *firefox, replacing http with https, and adding selenium.start(), but none have helped or even changed the behavior. Any ideas?

    Read the article

  • Using AppFabric session state provider, does each session get its own region?

    - by goombaloon
    I've been playing around with AppFabric Beta 2's session state provider. It appears that each new session gets its own region, named "Default_Region_XXXX", where XXXX is an apparently random sequence of numbers. If I understand regions correctly, each region is tied to a single cluster host, leaving a single point of failure. Why is each session being given its own region? Also, do sessions eventually time out and clean themselves up in the cache, or is that behavior just inherited from the cache settings? I'm also wondering (in a production application scenario) whether one would use a separate named cache for session state, apart from other application caches? Thanks.

    Read the article

  • How to customize the ListBox selected item style in Silverlight 4

    - by Phani Kumar PV
    I have a Silverlight ListBox in which each list item contains an image, its name and its price. The layout of a list item is as follows: under the image the image name is shown, and under the image name the price is shown. The problem is that when I select a list item, all three elements (image, image name and price) are shown as selected; this is the default behavior. The requirement is that when I select a list item, only the image should appear selected. Please let me know if there is a way to do this.

    Read the article

  • Set cursor position in a UITextField (Monotouch)

    - by manuel
    In a UITextField used for entering decimal numbers I would like to get rid of possible leading zeros (the behavior should be like that of the Calculator app). Normally this would be rather easy to implement, but I just cannot figure out how to restore the cursor position. UITextField has a property SelectedTextRange of type UITextRange which can be used to get the cursor position. However, there seems to be no easy way to get the current index, nor to create a new UITextRange object that contains the new values. I could find the solution in Objective-C (Finding the cursor position in a UITextField), but it is very unclear to me how to rewrite that in MonoTouch. Any help with this would be highly appreciated. Thanks, Manuel
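    A sketch of one way to do it from MonoTouch, using the UITextInput members that UITextField exposes; the binding names (GetOffsetFromPosition, GetPosition, GetTextRange) and the StripLeadingZeros helper are assumptions to verify against your MonoTouch version:

        // remember where the caret is, relative to the start of the document
        var caret = textField.GetOffsetFromPosition(textField.BeginningOfDocument,
                                                    textField.SelectedTextRange.Start);

        textField.Text = StripLeadingZeros(textField.Text);   // hypothetical helper

        // put the caret back (clamp it first if characters before it were removed)
        var newPosition = textField.GetPosition(textField.BeginningOfDocument, caret);
        textField.SelectedTextRange = textField.GetTextRange(newPosition, newPosition);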

    Read the article

  • WiX: launch application after installation completes, with UAC turned on

    - by Christopher Roy
    Good day. I've been building an installer for our product using the WiX (Windows Installer XML) technology. The expected behavior is that the product is launched after installation if the check box is checked. This has been working for some time now, but we found out recently that UAC on Windows 7 and Vista is stopping the application from launching. I've done some research, and it was suggested that I add the attributes Execute='deferred' and Impersonate='no'. I did, but then found out that to execute deferred, the CustomAction has to run between the InstallInitialize and InstallFinalize phases, which is not what I need. I need the product to launch AFTER InstallFinalize, IF the launch check box is checked. Is there any other way to elevate permissions? Any and all answers, suggestions, or reasonings will be appreciated. Cheers, Christopher Roy.
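    For what it's worth, the pattern usually suggested for this in WiX 3.x is the WixShellExec custom action from WixUtilExtension: it runs from the UI sequence after InstallFinalize and launches the target with the installing user's ordinary (non-elevated) token, so neither the deferred/impersonate route nor extra elevation is needed. A sketch (the file id MyAppExeFile and the standard WixUI exit-dialog checkbox property are assumptions to adapt):

        <Property Id="WixShellExecTarget" Value="[#MyAppExeFile]" />
        <CustomAction Id="LaunchApplication"
                      BinaryKey="WixCA"
                      DllEntry="WixShellExec"
                      Impersonate="yes" />

        <UI>
          <Publish Dialog="ExitDialog"
                   Control="Finish"
                   Event="DoAction"
                   Value="LaunchApplication">WIXUI_EXITDIALOGOPTIONALCHECKBOX = 1 and NOT Installed</Publish>
        </UI>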

    Read the article

  • WCF - maximum array length quota

    - by dav.evans
    Im writing a small wcf/wpf app to resize images but wcf is giving me grief when I try to send an image of size 28K to my service from the client. The service works fine when I send it smaller images. I immediately assumed that this was a configuration issue and I've trawled the web looking at posts regarding the MaxArrayLength property in my binding configuration. Ive upped the limits on these settings on both the client and server to the maximum 2147483647 but still I get the following error: {"The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://mywebsite.com/services/servicecontracts/2009/01:OriginalImage. The InnerException message was 'There was an error deserializing the object of type System.Drawing.Image. The maximum array length quota (16384) has been exceeded while reading XML data. This quota may be increased by changing the MaxArrayLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.'. Please see InnerException for more details."} Ive made my client and server configs the same and they look like the following: Server: <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_ImageResizerServiceContract" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxConnections="10" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="LogoResizer.WCF.ServiceTypes.ImageResizerService" behaviorConfiguration="ServiceBehavior"> <host> <baseAddresses> <add baseAddress="http://localhost:900/mex/"/> <add baseAddress="net.tcp://localhost:9000/" /> </baseAddresses> </host> <endpoint binding="netTcpBinding" contract="LogoResizer.WCF.ServiceContracts.IImageResizerService" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> </system.serviceModel> and my client config looks like: <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_ImageResizerServiceContract" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxConnections="10" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport 
clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://localhost:9000/" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_ImageResizerServiceContract" contract="ImageResizerService.ImageResizerServiceContract" name="NetTcpBinding_ImageResizerServiceContract"> <identity> <userPrincipalName value="[email protected]" /> </identity> </endpoint> </client> </system.serviceModel> It seems no matter what I set these values to I still get an error saying wcf cannot serialize my file because its greater than 16384. Any ideas? edit: the email address in the userPrincipalName tag has been altered for my privacy
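    One detail worth checking in the service configuration: the netTcpBinding endpoint on the service side never references the "NetTcpBinding_ImageResizerServiceContract" binding, so the service listens with a default netTcpBinding, whose reader quotas are exactly the 16384 the error complains about, no matter what the client allows. A sketch of the service endpoint with the binding configuration attached:

        <endpoint binding="netTcpBinding"
                  bindingConfiguration="NetTcpBinding_ImageResizerServiceContract"
                  contract="LogoResizer.WCF.ServiceContracts.IImageResizerService" />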

    Read the article

  • ASP.NET AJAX ToolkitScriptManager Issue With Combining Scripts

    - by Kumar
    I have an ASP.NET 3.5 web application in which I am using the ToolkitScriptManager as below:

        <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server"
                EnablePageMethods="true" ScriptMode="Release"
                LoadScriptsBeforeUI="false" CombineScripts="false">
            <CompositeScript>
                <Scripts>
                    <asp:ScriptReference Path="~/JavaScript/jquery-1.4.1.min.js" />
                    <asp:ScriptReference Path="~/JavaScript/Validators.js" />
                </Scripts>
            </CompositeScript>
        </ajaxToolkit:ToolkitScriptManager>

    This works fine, but from a performance standpoint it is not good, as the pages make a lot of requests to the WebResource.axd and ScriptResource.axd handlers. When I change the CombineScripts property to true, my ASP.NET AJAX control extenders no longer work. What is the reason for this weird behavior, and is there a fix for it?

    Read the article
