Search Results

Search found 14531 results on 582 pages for 'proxy pass'.

  • Toggle Android emulator network traffic from emulator invocation

    - by highphi
    I'm working on scripts to manage large numbers of Android emulators and I need to disable all network traffic on some of them. Because I'm doing all of this on a headless server, I cannot use the F8 hotkey described in the emulator documentation. I'm currently routing the TCP traffic through a null proxy by invoking emulator-arm ... -http-proxy 0.0.0.0:0 and this blocks the traffic that I want it to. I thought this was working well until I noticed some strange error messages while running my scripts. The console started outputting "accept too many open files" and checking the open files with lsof reveals numerous messages stating "can't identify protocol" ... emulator- 19463 username 19u sock 0,6 0t0 1976595845 can't identify protocol emulator- 19463 username 20u sock 0,6 0t0 1976595847 can't identify protocol ... The only "solution" I found to this is to kill all of the emulators and then wait until this limit is reached again, which is hardly a solution at all. Is there another way to do this while invoking the emulator? Am I incorrectly using the -http-proxy switch to block the traffic? Other people have blocked traffic manually by using airplane mode, but this isn't feasible for me as I'm controlling emulators via scripts. I could send keyevents to the emulator from my script and turn on airplane mode, but I would prefer something more reliable than that.

  • WCF object parameter loses values

    - by Josh
    I'm passing an object to a WCF service and wasn't getting anything back. I checked the variable as it gets passed to the method that actually does the work and noticed that none of the values are set on the object at that point. Here's the object: [DataContract] public class Section { [DataMember] public long SectionID { get; set; } [DataMember] public string Title { get; set; } [DataMember] public string Text { get; set; } [DataMember] public int Order { get; set; } } Here's the service code for the method: [OperationContract] public List<Section> LoadAllSections(Section s) { return SectionRepository.Instance().LoadAll(s); } The code that actually calls this method is the following, located in a Silverlight XAML code-behind file: SectionServiceClient proxy = new SectionServiceClient(); proxy.LoadAllSectionsCompleted += new EventHandler<LoadAllSectionsCompletedEventArgs>(proxy_LoadAllSectionsCompleted); Section s = new Section(); s.SectionID = 4; proxy.LoadAllSectionsAsync(s); When the code finally gets into the method LoadAllSections(Section s), the parameter's SectionID is not set. I stepped through the code and when it goes into the generated code that returns an IAsyncResult object, the object's properties are set. But when it actually calls the method, LoadAllSections, the parameter received is completely blank. Is there something I have to set to make the property values stick between method calls?
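
    A common cause of a DataContract arriving with default values is a mismatch between the contract serialized by the Silverlight-generated proxy and the one the service deserializes. As a hedged sketch (the namespace URI below is purely illustrative), pinning the contract namespace and member order explicitly on both the service and client copies of the type makes the two line up:

        [DataContract(Namespace = "http://schemas.example.com/sections")]
        public class Section
        {
            [DataMember(Order = 1)] public long SectionID { get; set; }
            [DataMember(Order = 2)] public string Title { get; set; }
            [DataMember(Order = 3)] public string Text { get; set; }
            [DataMember(Order = 4)] public int Order { get; set; }
        }

    After a change like this the service reference in the Silverlight project has to be updated so the generated proxy picks up the same namespace.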

  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL Server. In this project, I need a CLR stored procedure (written in C#) that uses a remote web service. So, when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get: "System.Net.WebException: The request failed with HTTP status 403: Forbidden." The database user has full permission, the deployed CLR assembly and SP are even marked "unsafe", I tried signing it etc., so none of that is causing the problem. When I execute the very same C# code, but from a simple console application instead of as an SP, it all works fine. So I started to suspect a network-related problem and had a packet sniffer running while executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed, and that explains the "403 Forbidden" exception. So my question boils down to this: How can I configure the SP/MS SQL server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app. (Again, the C# code is the same, so it's not a programming matter.) I've disabled all proxy settings in Internet Explorer in case the SQL server inherits these settings or something. However, no luck. Any help would be greatly appreciated! Best regards, Peter
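
    If the calls are made with HttpWebRequest (or an old-style generated ASMX client), the proxy can be bypassed per request rather than machine-wide. A minimal sketch, assuming the URL is a placeholder and the assembly already has the permissions described above:

        using System;
        using System.IO;
        using System.Net;

        public static class DirectCaller
        {
            public static string Fetch(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Proxy = null;   // do not route through the default (company) proxy
                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }

    For a generated ASMX client the equivalent is setting its Proxy property to null before the call. The differing destination IPs seen in the sniffer suggest the SQL Server process is picking up a machine-level default proxy (machine.config defaultProxy or WinHTTP settings), which is also worth checking.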

  • How to parse rss from a php page, using jQuery/jFeed?

    - by ricebowl
    I'm trying to fumble my way through parsing rss sensibly, using jQuery and jFeed. Because of the same origin policy I'm pulling the BBC's health news feed into a local page (http://www.davidrhysthomas.co.uk/play/proxy.php). Originally this was just the same proxy.php script as available in the jFeed download package, but due to my host's disabling allow_url_fopen() I've amended the php to the following: $url = "http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/health/rss.xml"; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $data = curl_exec($ch); echo "$data"; curl_close($ch); Which seems to generate the same/comparable contents as the original fopen on my local machine. Now that seems to be working, I'm looking at setting the jFeed script up to work with the page and, to my embarrassment, don't see how. I understand that, at the least, this should work: jQuery.getFeed({ url: 'http://www.davidrhysthomas.co.uk/play/proxy.php', success: function(feed) { alert(feed.title); } }); ...but, as I'm sure you anticipate, it doesn't. What non-output there is, is available for your perusal here: http://www.davidrhysthomas.co.uk/play/exampleTest.html. And I honestly don't have a clue what to do about it. If anyone could offer some pointers, tips, hints, or, at a pinch, a quick slap around the cheeks and a 'pull yourself together!' it'd be much appreciated... Thanks in advance =)

  • Silverlight Async Design Pattern Issue

    - by Mike Mengell
    I'm in the middle of a Silverlight application and I have a function which needs to call a web service and, using the result, complete the rest of the function. My issue is that I would normally have done a synchronous web service call, got the result, and carried on with the function using it. As Silverlight doesn't support synchronous web service calls without additional custom classes to mimic them, I figure it would be best to go with the flow of async rather than fight it. So my question is about the best design pattern for working with async calls in the program flow. In the following example I want to use the myFunction TypeId parameter depending on the return value of the web service call. But I don't want to call the web service until this function is called. How can I alter my code design to allow for the async call? string _myPath; bool myFunction(Guid TypeId) { WS_WebService1.WS_WebService1SoapClient proxy = new WS_WebService1.WS_WebService1SoapClient(); proxy.GetPathByTypeIdCompleted += new System.EventHandler<WS_WebService1.GetPathByTypeIdCompleted>(proxy_GetPathByTypeIdCompleted); proxy.GetPathByTypeIdAsync(TypeId); // Get return value if (myPath == "\\Server1") { //Use the TypeId parameter in here } } void proxy_GetPathByTypeIdCompleted(object sender, WS_WebService1.GetPathByTypeIdCompletedEventArgs e) { string server = e.Result.Server; myPath = '\\' + server; } Thanks in advance, Mike
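
    One way to go with the async grain is continuation passing: instead of returning a bool, the method takes a callback and invokes it from the Completed handler, so the TypeId-dependent work only runs once the result has arrived. A sketch reusing the names from the question (otherwise illustrative):

        void MyFunction(Guid typeId, Action<bool> onCompleted)
        {
            var proxy = new WS_WebService1.WS_WebService1SoapClient();
            proxy.GetPathByTypeIdCompleted += (s, e) =>
            {
                string myPath = @"\\" + e.Result.Server;
                bool isServer1 = (myPath == @"\\Server1");
                // the TypeId-dependent work happens here, after the result is known
                onCompleted(isServer1);
            };
            proxy.GetPathByTypeIdAsync(typeId);
        }

    The caller then passes a lambda with whatever should happen next, e.g. MyFunction(id, ok => { /* ... */ });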

  • Adding a new target type to msbuild: How do I refer to the itemname in the task rules?

    - by jmucchiello
    I'm trying to add a task to build the COM proxy DLL after building the main DLL. So I created the following in a .targets file: <Target Name="ProxyDLL" Inputs="$(IntDir)%(WHATGOESHERE)_i.c;$(IntDir)dlldata.c" Outputs="$(OutDir)%(WHATGOESHERE)ps.dll" AfterTargets="Link"> <CL Sources="$(IntDir)%(WHATGOESHERE)_i.c;$(IntDir)dlldata.c" /> </Target> And I reference it from the .vcxproj file as <ItemGroup> <ProxyDLL Include="FTAccountant" /> </ItemGroup> So the FTAccountant.DLL file is created through the normal build process and then, when it attempts to compile the proxy stubs, it generates these command lines: cl /c dir\_i.c dir\dlldata.c And of course it can't find _i.c. On the first attempt, I put %(Filename) in the WHATGOESHERE space and I got this error: C:\ActivePay\Build\Proxy DLL.targets(6,3): error MSB4095: The item metadata %(Filename) is being referenced without an item name. Specify the item name by using %(itemname.Filename). So I changed it to %(itemname.Filename) and that is an empty string. How do I get the value specified in the item's Include attribute and use it within the target?
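
    The error message means the literal word "itemname" should be replaced with the item type that was declared, which here is ProxyDLL. Referring to %(ProxyDLL.Filename) inside the target's attributes also switches on target batching, so the target runs once per ProxyDLL item. A hedged, untested sketch of the .targets entry with the paths from the question:

        <Target Name="ProxyDLL"
                Inputs="$(IntDir)%(ProxyDLL.Filename)_i.c;$(IntDir)dlldata.c"
                Outputs="$(OutDir)%(ProxyDLL.Filename)ps.dll"
                AfterTargets="Link">
          <CL Sources="$(IntDir)%(ProxyDLL.Filename)_i.c;$(IntDir)dlldata.c" />
        </Target>

    With Include="FTAccountant", the Filename metadata resolves to FTAccountant, which should give cl the expected FTAccountant_i.c input.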

  • IMAP protocol support in different email servers

    - by raticulin
    Having to interact with several different email servers via IMAP (using JavaMail), I have found that the level of support for IMAP features differs widely among them. The lack of support for some features has resulted in more development time, more complicated code to deal with the differing support, and worse performance due to not being able to SEARCH, etc. So I would like to get some info on other servers and what level of support they provide. So far I have dealt with Lotus Domino and Novell GroupWise (and to a lesser extent Exchange 2003 and 2007). I am particularly interested in the most-used ones on unix/linux (Courier, Cyrus, Dovecot, UW IMAP) and also Zimbra, but feel free to add any you know. Info about online services like Gmail is also welcome. Features that I consider (comment if you are interested in others and I'll add them): custom flags; searching custom flags; searching arbitrary headers; partial fetching; proxy authentication. And what I have found so far (correct me if I am wrong anywhere): Lotus Domino: custom flags, yes; searching custom flags, yes; searching arbitrary headers, yes; partial fetching, ?; proxy authentication, sort of (you can give a user permission to access other users' mailboxes and he will see them under his '\Other Users' folder). Novell GroupWise: custom flags, no; searching custom flags, no; searching arbitrary headers, no; partial fetching, ?; proxy authentication, yes (you can use what is called a Trusted Application).

  • VirtualHosting doesn't work. Logs me in through previous session

    - by Pablo
    When I log in with one browser session, I have to log in, but when I open another session it has automatically logged me in (as if I've picked up session 1). This does not happen if I use http://192.168.0.9:9070 directly; it forces me to log in each time. So I know the application is working, it's just the proxy server that seems to apply the login to each session (from http://icerap.limeo.com). # ************************************************************************ # Start of My stuff <<<------------------------------------------------------ # ************************************************************************ #<Proxy *> #Order Deny,Allow #Deny from all #Allow from 192.168.0 #</Proxy> # blog <VirtualHost *:80> ServerName icerap.limeo.com ProxyPass / http://192.168.0.9:9070/ ProxyPassReverse / http://192.168.0.9:9070/ </VirtualHost> # www <VirtualHost *:80> ServerName helpdesk.limeo.com ProxyPass / http://192.168.0.9:9055/ ProxyPassReverse / http://192.168.0.9:9055/ </VirtualHost> # blog <VirtualHost *:80> ServerName IceCake.limeo.com ProxyPass / http://192.168.0.9:9000/ ProxyPassReverse / http://192.168.0.9:9000/ </VirtualHost> # End of Limeo stuff <<<------------------------------------------------------ # ************************************************************************

  • Tortoise svn Subversion Update Error

    - by Boushley
    Hey All, I recently was working on an open source project... Everything was going great for a week or two but then something happened, I don't know what, and I can't update anymore! I know the url is correct, because I can check it out on my linux server... but when I try to check it out with tortoise svn on my windows box it doesn't work. The error message I'm getting is this: OPTIONS of 'http://opensource.adobe.com/svn/opensource/flex/sdk/branches': 200 OK (http://opensource.adobe.com) Does anyone know what that means? The 200 OK part seems odd to me... it connected to the server but wasn't able to get the code? And what does OPTIONS of... mean? I've looked around, and some people were having proxy issues... but I'm not behind a proxy, and I made sure that tortoise svn is not trying to use a proxy. If anyone could help, that would be great! Boushley

  • Http authentication with apache httpcomponents

    - by matdan
    Hi, I am trying to develop a java http client with apache httpcomponents 4.0.1. This client calls the page "https://myHost/myPage". This page is protected on the server by a JNDIRealm with a login form authentication, so when I try to get https://myHost/myPage I get a login page. I tried to bypass it unsuccessfully with the following code : //I set my proxy HttpHost proxy = new HttpHost("myProxyHost", myProxyPort); //I add supported schemes SchemeRegistry supportedSchemes = new SchemeRegistry(); supportedSchemes.register(new Scheme("http", PlainSocketFactory .getSocketFactory(), 80)); supportedSchemes.register(new Scheme("https", SSLSocketFactory .getSocketFactory(), 443)); // prepare parameters HttpParams params = new BasicHttpParams(); HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1); HttpProtocolParams.setContentCharset(params, "UTF-8"); HttpProtocolParams.setUseExpectContinue(params, true); ClientConnectionManager ccm = new ThreadSafeClientConnManager(params, supportedSchemes); DefaultHttpClient httpclient = new DefaultHttpClient(ccm, params); httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy); //I add my authentication information httpclient.getCredentialsProvider().setCredentials( new AuthScope("myHost/myPage", 443), new UsernamePasswordCredentials("username", "password")); HttpHost host = new HttpHost("myHost", 443, "https"); HttpGet req = new HttpGet("/myPage"); //show the page ResponseHandler<String> responseHandler = new BasicResponseHandler(); String rsp = httpClient.execute(host, req, responseHandler); System.out.println(rsp); When I run this code, I always get the login page, not myPage. How can I apply my credential parameters to avoid this login form? Any help would be fantastic

  • Jetty 7 will not allow me to customize a session cookie path

    - by Bob Obringer
    Using Jetty 7.0.2, I am unable to set a custom session cookie path. I am hosting multiple sites on the same server using apache to proxy requests to the proper context. (replaced http with htp as stackoverflow thinks my multiple links might be spam) <VirtualHost *:80> ServerName context.domain.com ProxyRequests On ProxyPreserveHost Off <Proxy *:80> Order deny,allow Allow from 127.0.0.1 </Proxy> ProxyPass / htp://localhost:8080/context/ ProxyPassReverse / htp://localhost:8080/context/ <Location /> Order allow,deny Allow from all </Location> </VirtualHost> Jetty is running on the same server on port 8080 and my context is available @ /context The user accesses the application @ htp://context.domain.com but jetty is setting the path for the session cookie @ /context. This prevents the browser from accessing the cookie since the actual path to the context is not being used. I need to override Jetty's default setting to set the cookie for the context, and set the path at the root ( / ). In my Jetty's webdefault.xml I have the following, which is partially working: <context-param> <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name> <param-value>CustomCookieName</param-value> </context-param> <context-param> <param-name>org.eclipse.jetty.servlet.SessionPath</param-name> <param-value>/</param-value> </context-param> The cookie is properly set with a custom name, but it is NOT setting the SessionPath. No matter what I set the value to... it refuses to set a cookie at any path but /context. This has been driving me crazy so any help would be greatly appreciated.
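
    An alternative that leaves Jetty's default alone is to rewrite the cookie path on the Apache side: mod_proxy provides ProxyPassReverseCookiePath for exactly this case. A sketch added to the existing VirtualHost (spelled with the real http scheme here):

        ProxyPass / http://localhost:8080/context/
        ProxyPassReverse / http://localhost:8080/context/
        ProxyPassReverseCookiePath /context /

    That maps the /context path on Set-Cookie headers coming back from Jetty to / before they reach the browser.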

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However in my second class library (or even in a console test app), when i add the same service reference, it only exposes the types that are involved in the contract operations and not the client type for me to generate a proxy against. e.g. Endpoint has 2 services exposed - ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library I get ISvc1Client andf ISvc2Client to generate proxies off of in order to use the operations exposed via those 2 contracts. In addition to these clients the service reference also exposes the types involved in the operations like (type 1, type 2 etc.) this is what I need. However when i try to add a service reference to the same endpoing in another console application or class library only Type 1, Type 2 etc. are exposed and not ISvc1Client and ISvc2Client because of which I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets properly generated in one class library but not in the other or the test console app.
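
    One thing worth checking, though only a guess from the symptoms, is the "Reuse types in referenced assemblies" option on the service reference: when the contract types are already visible through a project or assembly reference, the generated code can come out different between projects. As a workaround, the service can also be called without any generated *Client class at all, through ChannelFactory; a sketch with a placeholder binding and address:

        var factory = new System.ServiceModel.ChannelFactory<ISvc1>(
            new System.ServiceModel.BasicHttpBinding(),
            "http://localhost:8000/MyService");
        ISvc1 proxy = factory.CreateChannel();
        // ... invoke ISvc1 operations on proxy ...
        ((System.ServiceModel.IClientChannel)proxy).Close();

    This only needs the contract interface (ISvc1) to be available to the consuming project.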

  • Type of member is not CLS-compliant

    - by John Galt
    Using Visual Studio 2008 and VB.Net: I have a working web app that uses an ASMX web service which is compiled into its own separate assembly. I have another class library project compiled as a separate assembly that serves as a proxy to this web service. This all seems to work at runtime, but I am getting this warning at compile time which I don't understand and would like to fix: Type of member 'wsZipeee' is not CLS-compliant I have dozens of webforms in the main project that reference the proxy class with no compile-time complaints as this snippet shows: Imports System.Data Partial Class frmZipeee Inherits System.Web.UI.Page Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee Dim dsStandardMsg As DataSet Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load And yet I have one webform (also in the root of the main project) which gives me the "not CLS-compliant" message even though it references the proxy class just like the other ASPX files. I get the compile-time warning on the line annotated by me with 'ERROR here'. Imports System.Data Partial Class frmHome Inherits System.Web.UI.Page Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee ERROR here Dim dsStandardMsg As DataSet Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load This makes no sense to me. The file with the warning is called frmHome.aspx.vb; all the others in the project declare things the same way and have no warning. BTW, the webservice itself returns standard datatypes: integer, string, and dataset.

  • Consuming a WCF Service

    - by Lijo
    Hi I created a WCF service which is hosted in a Windows service. I created a proxy using svcutil: “svcutil.exe http://localhost:8000/ServiceModelSamples/FreeServiceWorld?wsdl” It generated an output.config file and a proxy class. The output.config has the following element <client> <endpoint address="http://localhost:8000/ServiceModelSamples/FreeServiceWorld" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IWeather" contract="IWeather" name="WSHttpBinding_IWeather"> <identity> <servicePrincipalName value="host/D471DTRV.ustr.com" /> </identity> </endpoint> </client> I created a website (as the client) and added a new C# file (MyFile.cs) to it. I copied the contents of the proxy class into MyFile.cs. [The output.config is not copied to the web site] In the code-behind of the aspx, I am using the following code WeatherClient client= new WeatherClient("WSHttpBinding_IWeather"); It throws an exception: “Could not find endpoint element with name 'WSHttpBinding_IWeather' and contract 'IWeather' in the ServiceModel client configuration section.” Could you please help me to understand the missing link here? Thanks Lijo
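
    The bracketed note is the missing link: the generated proxy looks up the endpoint named "WSHttpBinding_IWeather" in the executing application's config, so the <client> section from output.config has to be merged into the web site's web.config (under <system.serviceModel>). Alternatively, the endpoint can be supplied in code so no config lookup happens at all; a sketch using the address from output.config:

        var binding = new System.ServiceModel.WSHttpBinding();
        var address = new System.ServiceModel.EndpointAddress(
            "http://localhost:8000/ServiceModelSamples/FreeServiceWorld");
        var client = new WeatherClient(binding, address);

    The (Binding, EndpointAddress) constructor is generated on svcutil clients because they derive from ClientBase<TChannel>; if Windows authentication is in play, the servicePrincipalName identity from the config may also need to be supplied via an EndpointIdentity.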

  • .NET MVC What is the best way to disable browser caching?

    - by Chameera Dedduwage
    As far as my research goes, there are several steps required to make sure that browser caching is disabled. These HTTP headers must be set: Cache-Control: no-cache, no-store, must-revalidate, proxy-revalidate Pragma: no-cache, no-store Expires: -1 Last-Modified: -1 I have found that this can be done in two ways: Way One: use the web.config file <add name="Cache-Control" value="no-store, no-cache, must-revalidate, proxy-revalidate"/> <add name="Pragma" value="no-cache, no-store" /> <add name="Expires" value="-1" /> <add name="Last-Modified" value="-1" /> Way Two: use the meta tags in _Layout.cshtml <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate, proxy-revalidate" /> <meta http-equiv="Pragma" content="no-cache, no-store" /> <meta http-equiv="Expires" content="-1" /> <meta http-equiv="Last-Modified" content="-1" /> My Question: which is the better approach? Or, alternatively, are they equally acceptable? How do these relate to different platforms? Which browsers would honor which headers? In addition, please feel free to add anything I've missed.
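
    Server-set headers are generally the more reliable of the two, since meta http-equiv tags are ignored by proxies and by some browsers, while the web.config <customHeaders> approach applies to every response, static files included. In MVC the headers can also be scoped to specific controllers or actions with a small action filter; a sketch using the standard System.Web.Mvc/System.Web APIs (attribute name is made up):

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class NoBrowserCacheAttribute : ActionFilterAttribute
        {
            public override void OnResultExecuting(ResultExecutingContext filterContext)
            {
                HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
                cache.SetCacheability(HttpCacheability.NoCache);         // Cache-Control: no-cache
                cache.SetNoStore();                                      // adds no-store
                cache.SetRevalidation(HttpCacheRevalidation.AllCaches);  // adds must-revalidate
                cache.SetExpires(DateTime.UtcNow.AddYears(-1));          // Expires in the past
                filterContext.HttpContext.Response.AppendHeader("Pragma", "no-cache");
                base.OnResultExecuting(filterContext);
            }
        }

    Decorating a controller or action with [NoBrowserCache] then disables caching only for the dynamic responses that need it.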

  • Mounting NAS drive with cifs using credentials file through fstab does not work

    - by mahatmanich
    I can mount the drive in the following way, no problem there: mount -t cifs //nas/home /mnt/nas -o username=username,password=pass\!word,uid=1000,gid=100,rw,suid However, if I try to mount it via fstab with the following entry, it does not work: //nas/home /mnt/nas cifs iocharset=utf8,credentials=/home/username/.smbcredentials,uid=1000,gid=100 0 0 auto The .smbcredentials file looks like this: username=username password=pass\!word Note the ! in my password ... which I am escaping in both instances. I also made sure there are no eol characters in the file using :set noeol binary (from "Mount CIFS Credentials File has Special Character"). chmod on the credentials file is 0600, chown is root:root, and the file is under ~/. Why does the mount work on the command line but not via fstab? I am running Ubuntu 12 LTS and mount.cifs -V gives me mount.cifs version: 5.1 Any help and suggestions would be appreciated ... UPDATE: /var/log/syslog shows the following [26630.509396] Status code returned 0xc000006d NT_STATUS_LOGON_FAILURE [26630.509407] CIFS VFS: Send error in SessSetup = -13 [26630.509528] CIFS VFS: cifs_mount failed w/return code = -13 UPDATE 2: Debugging with strace. Mount through fstab: strace -f -e trace=mount mount -a Process 4984 attached Process 4983 suspended Process 4985 attached Process 4984 suspended Process 4984 resumed Process 4985 detached [pid 4984] --- SIGCHLD (Child exited) @ 0 (0) --- [pid 4984] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = -1 EACCES (Permission denied) mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) Process 4983 resumed Process 4984 detached Mount through terminal: strace -f -e trace=mount mount -t cifs //nas/home /mnt/nas -o username=user,password=pass\!wd,uid=1000,gid=100,rw,suid Process 4990 attached Process 4989 suspended Process 4991 attached Process 4990 suspended Process 4990 resumed Process 4991 detached [pid 4990] --- SIGCHLD (Child exited) @ 0 (0) --- [pid 4990] mount("//nas/home", ".", "cifs", 0, "ip=<internal ip>,unc=\\\\nas\\home"...) = 0 Process 4989 resumed Process 4990 detached
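
    One detail that commonly breaks this exact setup: the backslash is only needed on the interactive command line (to stop bash history expansion of !), and bash strips it before mount.cifs ever sees the password. The credentials file, by contrast, is read literally, so the stored password becomes pass\!word and the server answers with NT_STATUS_LOGON_FAILURE. A sketch of the file without the escape (values are placeholders; keep the 0600 permissions):

        username=username
        password=pass!word

    mount.cifs takes everything after password= up to the end of the line literally, so no quoting or escaping is needed inside the credentials file.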

  • How to bypass resume from hibernate

    - by Daniel Trebbien
    I am attempting to resume a Windows Vista laptop from hibernate, but the resume process seems to be stuck in an endless loop in which Windows is repeatedly trying to read from the optical drive. When I press the Power On button on the laptop, the screen is black (not even the backlight turns on) and the following occurs in a loop: Five seconds pass and I hear the optical drive being accessed. (There's no disk in the drive, so it sounds like a short buzzing noise.) Two seconds pass and I hear the optical drive being accessed. Two seconds pass and I hear the optical drive being accessed. So it's three short buzzing noises in a row, over and over again. Eventually I have to abruptly power off the machine. I have tried inserting a data CD into the drive as well as a bootable CD (a live Linux distro boot disk). For both, the optical drive spins up for a bit, but stops after Windows decides that the disk is not what it is looking for. I have since lost the Windows Vista recovery DVD, but I don't know if inserting the recovery disk into the optical drive would have a different effect than the bootable CD. I have tried pressing F8 immediately after pressing the Power On button (hoping to enter System Restore), but that did not have an effect. Is there a special key sequence that will cause Windows to bypass resuming from hibernate, effectively ignoring hiberfil.sys?

  • Resource reference passing in puppet

    - by paweloque
    Is it possible to pass puppet resource references to other resources? My use-case is to build a jenkins build pipeline with puppet. To chain jenkins jobs into a pipeline I need to pass the successor job to a job. A subset of the definition is: jobs::build { "Build ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => 'Deploy', } jobs::deploy { "Deploy ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => 'Smoke Test', } In the def you see that I define the successors by name, i.e. 'Deploy' and in case of the second job 'Smoke Test'. What I'd like to do is to pass a reference to a resource and extract the name from it: jobs::build { "Build ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => Jobs::Deploy["Deploy ${release_name}"], } jobs::deploy { "Deploy ${release_name}": release => $release_name, jenkins_jobs_path => $jenkins_jobs_path, successors => Jobs::Smoke_test["Smoke Test ${release_name}"], } And then within the jobs::deploy and jobs::build definition I'd access the resource by reference and query for it's type, etc.. Is it possible to achieve this in puppet?

  • Communicating via Command Mode with IBM HS22 IMM via AMM

    - by MikeyB
    On previous model blades that contained a BMC, I was able to communicate from our external management station via pass-through commands to the BMC to do things such as power blades on/off, set VPD parameters, reboot the BMC, etc. Now on the HS22, a bunch of things happen differently. For example, we can no longer use the same pass-through commands to write VPD information pages and have them persist across reboots of the IMM - it looks as though those VPD pages are populated from information contained in the IMM. How do we use the Advanced Settings Utility from an external host to communicate with HS22 IMMs? Alternatively, what TCP Command Mode commands do we need to send to the AMM to communicate with the IMM? For our purposes, we specifically cannot communicate with the IMM from the blade itself. Specific example: When I send a pass-thru IPMI command via the AMM to the blade BMC to write information (such as MTM, Serial) into VPD page 0x10, it persists on blades with a BMC (HS21 for example). I can send the same IPMI command to write data to the VPD page on the HS22, however it does not persist across reboots of the IMM. What IPMI commands do I need to send to the IMM? What IPMI commands are asu sending when it sets the MTM & Serial?

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load...

    - by redguy..pl
    I am hosting several (~30) different sites on one server with apache2+fastcgi+suexec+php5. Sites have different loads and different execution times for their scripts (some of them process a request for 5-7 seconds, some in under 1 second). Sometimes when a single site receives a very high load (all php instances of this site are created and used), the whole apache server hangs. Apache (worker mpm) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one that has the high load and quickly reaches its process limits... a restart of apache solves the problem... config: FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match (earlier I had the default -listen-queue-depth = 100, but it didn't change anything...) Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole of apache, or a separate queue for every defined php application (suexec site)? I would like to achieve something like this: when one site receives a high load and its queue is full, the server bounces new requests, but only for that one site. Other sites should keep working properly...

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up openvpn but now I want to integrate a user/pass authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect it returns the following message on the client side: ERROR: could not read Auth username from stdin My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication. mode server dev tun proto tcp port 1194 keepalive 10 120 plugin /usr/lib/openvpn/openvpn-auth-pam.so login client-cert-not-required username-as-common-name auth-user-pass-verify /etc/openvpn/auth.pl via-env ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt cert /etc/openvpn/easy-rsa/2.0/keys/server.crt key /etc/openvpn/easy-rsa/2.0/keys/server.key dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem user nobody group nogroup server 10.8.0.0 255.255.255.0 persist-key persist-tun #persist-local-ip status openvpn-status.log verb 3 client-to-client push "redirect-gateway def1" push "dhcp-option DNS 10.8.0.1" log-append /var/log/openvpn comp-lzo I searched all over the net for a solution and all answers seem to be related to the auth-nocache param, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in exit 1 while a successful one should result in exit 0. For testing, that auth.pl script returns exit 0 no matter what the input is, but it seems that the file is not even executed before the error is raised. auth.pl file contents: #!/usr/bin/perl my $user = $ENV{username}; my $passwd = $ENV{password}; printf("$user : $passwd\n"); exit 0; Any ideas?
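
    The client-side message means the client has been asked to do user/pass authentication but cannot prompt for it, typically because it is started without a usable terminal (daemonized, from an init script, or as a service). Supplying the credentials from a file avoids the prompt; a sketch of the client config addition (path is illustrative; the file holds the username on line 1 and the password on line 2):

        auth-user-pass /etc/openvpn/client-credentials

    Plain auth-user-pass with no argument prompts interactively instead; note that on older 2.x builds the file form may require OpenVPN to have been built with password saving enabled.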

  • Rundeck get verbose output of command executing on node

    - by Leon Stafford
    I have Rundeck executing a remote script, written in Python, which uses print statements to produce output, normally like this: $ python mytest.py PASS: Condition 1 passed PASS: Condition 2 passed PASS: and so on... When I run this via Rundeck, however, it doesn't show me the same print-generated output as above. In Rundeck's most detailed Debug output mode, I only receive the following: 06:31:12 Permanently added 'myremotenode.com' (RSA) to the list of known hosts. 06:31:12 SSH_MSG_NEWKEYS sent 06:31:12 SSH_MSG_NEWKEYS received 06:31:12 SSH_MSG_SERVICE_REQUEST sent 06:31:13 SSH_MSG_SERVICE_ACCEPT received 06:31:13 Authentications that can continue: publickey,password,keyboard-interactive 06:31:13 Next authentication method: publickey 06:31:13 Authentication succeeded (publickey). 06:31:13 /cygdrive/c/Program Files (x86)/Mozil... 06:32:06 Adding reference: ant.PropertyHelper 06:32:06 Setting project property: sshexec.output -> /cygdrive/c/Prog... I know that the remote script is actually executing as usual, as I'm receiving other emails generated by the ~30min-long script. Obviously, I don't want to have to wait 30 minutes to see the result of each print statement within the Python script. How can I get the same level of output in Rundeck as I do in the bash shell directly?
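
    Part of the delay is usually Python's own buffering: when stdout is not a terminal (as under Rundeck/SSH), print output is block-buffered and only flushed in large chunks or at exit. Running the interpreter unbuffered is a low-risk first step; a sketch of the job's command line:

        python -u mytest.py
        # or equivalently: export PYTHONUNBUFFERED=1 before invoking the script

    Rundeck should then receive each PASS line as it is printed rather than in one batch at the end of the ~30-minute run.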

  • OpenBSD logins via SSH seem to be ignoring my configured radius server

    - by Steve Kemp
    I've installed and configured a radius server on my localhost; it delegates auth to a remote LDAP server. Initially things look good: I can test via the console: # export user=skemp # export pass=xxx # radtest $user $pass localhost 1812 $secret Sending Access-Request of id 185 to 127.0.0.1 port 1812 User-Name = "skemp" User-Password = "xxx" NAS-IP-Address = 192.168.1.168 NAS-Port = 1812 rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=185, Similarly I can use the login tool to do the same thing: bash-4.0# /usr/libexec/auth/login_radius -d -s login $user radius Password: $pass authorize However, remote logins via SSH are failing, and so are invocations of "login" started by root. Looking at /var/log/radiusd.log I see no actual log of success/failure, which I do see when using either of the previous tools. Instead sshd is just logging: sshd[23938]: Failed publickey for skemp from 192.168.1.9 sshd[23938]: Failed keyboard-interactive for skemp from 192.168.1.9 port 36259 ssh2 sshd[23938]: Failed password for skemp from 192.168.1.9 port 36259 ssh2 In /etc/login.conf I have this: # Default allowed authentication styles auth-defaults:auth=radius: ... radius:\ :auth=radius:\ :radius-server=localhost:\ :radius-port=1812:\ :radius-timeout=1:\ :radius-retries=5:

  • Passing two arguments to a command using pipes

    - by firebat
    Usually, we only need to pass one argument: echo abc | cat echo abc | cat some_file - echo abc | cat - some_file Is there a way to pass two arguments? Something like {echo abc , echo xyz} | cat cat `echo abc` `echo xyz` I could just store both results in a file first echo abc > file1 echo xyz > file2 cat file1 file2 But then I might accidentally overwrite a file, which is not ok. This is going into a non-interactive script. Basically, I need a way to pass the results of two arbitrary commands to cat without writing to a file. UPDATE: Sorry, the example masks the problem. While { echo abc ; echo xyz ; } | cat does seem to work, the output is due to the echos, not the cat. A better example would be { cut -f2 -d, file1; cut -f1 -d, file2; } | paste -d, which does not work as expected. With file1: a,b c,d file2: 1,2 3,4 Expected output is: b,1 d,3 RESOLVED: Use process substitution: cat <(command1) <(command2) Alternatively, make named pipes using mkfifo: mkfifo temp1 mkfifo temp2 command1 > temp1 & command2 > temp2 & cat temp1 temp2 Less elegant and more verbose, but works fine, as long as you make sure temp1 and temp2 don't exist before hand.
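
    Applying the process-substitution answer to the cut/paste example gives a one-liner with no temporary files (bash/ksh/zsh only, since <() is not plain POSIX sh):

        paste -d, <(cut -f2 -d, file1) <(cut -f1 -d, file2)

    With the sample file1 and file2 this prints b,1 and d,3, matching the expected output.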
