Search Results

Search found 18121 results on 725 pages for 'support esemenyek'.

Page 586/725 | < Previous Page | 582 583 584 585 586 587 588 589 590 591 592 593  | Next Page >

  • Can I use IIS to do ActiveDirectory single-sign-on for another website?

    - by brofield
    I'm trying to add Active Directory single-sign-on support to an existing SOAP server. The server can be configured to accept a trusted reverse-proxy and use the X-Remote-User HTTP header for the authenticated user. I want to configure IIS to be the trusted proxy for this service, so that it handles all of the Active Directory authentication for the SOAP server. Basically IIS would have to accept HTTP connections on port X and URL Y, do all the authentication, and then proxy the connection to a different server (most likely the same X and Y). Unfortunately, I have no knowledge of IIS or AD (so I am trying my best to learn enough to build this solution), so please be gentle.

    I would assume that this is not an uncommon scenario, so is there some easy way to do this? Is this sort of functionality built into IIS, or do I need to build some sort of IIS proxy program myself? Is there a better option for getting the authentication done and the X-Remote-User HTTP header set than requiring IIS?

    Update: For example, what I am trying to create is:

        [CLIENT]        [IIS]        [AD]    [SOAP-SERVER]
        1.  |------------->|
        2.  |<------------>|<--------->|
        3.                 |---------------------->|
        4.                 |<----------------------|
        5.  |<-------------|

        1. POST to http://example.com/foo/bar.cgi
        2. Client is not authenticated, so do authentication
        3. Once validated, send request to server (X-Remote-User: {userid})
        4. Process request, send response
        5. Forward response to client

    I need to know how to configure IIS to do the automatic authentication of the user using AD, and then to proxy the request to the actual server, sending the userid in the X-Remote-User HTTP header.
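
    A minimal sketch of one way this is often wired up, assuming IIS 7 with the URL Rewrite and Application Request Routing (ARR) add-on modules installed; the backend URL, rule name, and site layout here are illustrative assumptions, not verified configuration. Windows Authentication handles the AD side, and a rewrite rule forwards the request while copying the authenticated identity into the X-Remote-User header (IIS exposes request headers as server variables with an HTTP_ prefix):

        <system.webServer>
          <security>
            <authentication>
              <!-- assumes anonymous authentication is disabled for this site -->
              <windowsAuthentication enabled="true" />
            </authentication>
          </security>
          <rewrite>
            <rules>
              <rule name="ProxyToSoapServer" stopProcessing="true">
                <match url="^foo/bar\.cgi" />
                <serverVariables>
                  <!-- HTTP_X_REMOTE_USER becomes the X-Remote-User request header -->
                  <set name="HTTP_X_REMOTE_USER" value="{LOGON_USER}" />
                </serverVariables>
                <!-- with ARR installed, a rewrite to an absolute URL is proxied -->
                <action type="Rewrite" url="http://soap-backend.example.com/foo/bar.cgi" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Note that URL Rewrite only permits setting HTTP_X_REMOTE_USER once that variable has been added to its allowed server variables list (in applicationHost.config or through the IIS manager UI).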

    Read the article

  • My new Dell Workstation T-7500 gives electric shocks

    - by Dan
    Just bought a Dell T-7500 workstation. When the installation technician came to install it, he got an electric shock when he touched the Start button, and again when he touched the front panel. There was no shock when touching the rest of the chassis. He called Dell Support and tried to troubleshoot by disconnecting various wires, but it did not help. I touched the same places and I also got a shock. We checked everything possible, including connecting to various outlets, but it didn't solve the problem. The installation subcontractor said they are not supposed to troubleshoot anything on a new system, just install it, so they made notes and left. I called my sales guy, but it was the weekend, so he said he will take care of it on Monday.

    Here are my concerns:

    1. Why does this happen? It is a mild shock and won't kill anyone, but might it damage other parts of the workstation?
    2. Is this common for workstations?
    3. What should I do? Of course Dell will try to troubleshoot again, but should I let them do that or ask for a new system?
    4. What would happen if I continue using it until the new system arrives?

    Thank you.

    Read the article

  • Error when starting a .NET application from a ThinApp application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When starting the .NET application directly, there is no error. If the user tries to start the application by clicking the button in the ThinApp application, it displays the following errors:

        Microsoft Visual C++ Runtime Library
        R6034
        An application has made an attempt to load the C runtime library incorrectly.
        Please contact the application's support team for more information.

    After clicking "OK" it displays:

        Microsoft Visual C++ Runtime Library
        Runtime Error!
        R6030 - CRT not initialized

    So, does the customer have to install some components into his ThinApp (if yes, which?) to get things working?

    Regards, inno

    ----- [EDIT] -----

    @Sean: It's installed the following way: the .exe of the .NET application is on a mapped drive on a server. All clients have the requirements installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the exceptions mentioned above. AFAIK there are no entry points configured for this application.

    What I should also mention: the .NET application crashes during execution. We have a debug mode implemented that shows what the application is doing, and after some steps it crashes. The interesting point is: it's a .NET application, not a C++ application.

    Read the article

  • Monitor reset itself and now I can't set the resolution/settings back to how they were before

    - by verve
    I've had my LG 24" widescreen monitor since 2009 and 2 weeks ago I noticed the monitor turned itself off (never had it done this before) so I switched it back on to find all the settings like gamma, resolution etc. different = looked like it had been reset. Everyone in the house swears they never unplugged and plugged it back in. When I opened a webpage the fonts and zoom on the pages were different and my desktop was strange too; fonts of the icons were different etc. The screen seems blurry and when I watch movies the faces look distorted so I thought I would try to first figure out the resolution it used to be but when I go under "Adjust screen resolution" none of the options work and there is no recommended resolution marked; all the options stretch out the screen and looks terrible so right now I have it set to the least distorted one. Then since the resolution wasn't working I set the other manual settings(done by physical buttons on the monitor) back to how it used to be (luckily, I had written these down). The monitor looks better but the resolution makes it a strain to use. I thought maybe some Windows update caused this crap so I tried to System Restore: didn't work. What went wrong? A few questions: 1) What was the likely cause of the monitor shutting down itself and screwing up the settings I have been using since the day I bought the monitor? 2) Why have the fonts changed everywhere unless this is a HDD/video card problem? 3) How do I find the perfect resolution it used to be? The monitor wants me to set it to 1920 x 1080 but that isn't one of the options although I don't remember what resolution I used before. I use the 16:9 setting while I try the available resolution options but nothing looks good! How do I find what it used to be? Manual available in PDF under Support: http://www.lg.com/ca_en/computer-products/monitors/LG-lcd-monitor-W2442PA-BF.jsp Win 7. IE 9.

    Read the article

  • Enabling hardware acceleration and Xinerama for multi-monitor/multi-GPU in Linux

    - by mynameiscoffey
    My current setup is three monitors connected as follows (monitors listed from left to right):

    GPU0 (nVidia GTX 280):
    - Dell 2405FPW (1920x1200)
    - Dell U2410 (1920x1200)

    GPU1 (nVidia 210):
    - Dell 2405FPW (1920x1200)

    Works like a charm in Windows 7, not so much in Linux. I seem to have only three real options:

    1. Run all three monitors as separate X screens. I get hardware acceleration, but as they are all independent X sessions I cannot move windows between them and can only have Firefox open on one at any given time.
    2. Run the two on GPU0 in TwinView mode and have GPU1 as a separate X screen. Same limitation as 1, but at least two monitors work together ok. I did occasionally have an issue where Linux saw both monitors on GPU0 as a single large monitor, however.
    3. Enable Xinerama and have everything work as I want it to, but hardware acceleration is gone and the display is Windows 95 style choppy.

    My ideal solution would be to have all screens working as they do under Xinerama without the limitation of having hardware acceleration disabled. I don't even care if that means rendering all three on GPU0 and somehow farming out the display of the third monitor to GPU1; whatever works. My question is this: is there any way to accomplish this? I don't feel like my use case is so out there that there shouldn't be at least some form of support (beyond the three limited options presented above), or is my best option going to be to just suck it up and pick up a better card to replace both that can handle three outputs by itself?
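
    For reference, a minimal sketch of the usual compromise, combining options 2 and 3: TwinView on GPU0, a second X screen on GPU1, joined with Xinerama. This is an illustrative xorg.conf fragment for the nvidia driver; the BusID values and identifiers are hypothetical (check lspci), and this layout still inherits Xinerama's acceleration limitations:

        Section "ServerLayout"
            Identifier "ThreeMonitors"
            Screen     0 "Screen-GPU0" 0 0
            Screen     1 "Screen-GPU1" RightOf "Screen-GPU0"
            Option     "Xinerama" "1"     # stitch the two X screens together
        EndSection

        Section "Device"
            Identifier "GTX280"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"        # hypothetical; confirm with lspci
            Option     "TwinView" "1"     # both GPU0 monitors as one screen
            Option     "MetaModes" "1920x1200,1920x1200"
        EndSection

        Section "Device"
            Identifier "GT210"
            Driver     "nvidia"
            BusID      "PCI:3:0:0"        # hypothetical; confirm with lspci
        EndSection

        Section "Screen"
            Identifier "Screen-GPU0"
            Device     "GTX280"
        EndSection

        Section "Screen"
            Identifier "Screen-GPU1"
            Device     "GT210"
        EndSection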

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found a field in the DSDT table that I want to modify, from here: http://www.ztex.de/misc/c2ctl.e.html

    Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the cpufreq driver interface. I used these commands to disassemble the DSDT table on my desktop (Linux 2.6.29, Intel Core 2):

        cat /proc/acpi/dsdt > dsdt.aml
        iasl -d dsdt.aml

    Then I have a file dsdt.dsl as follows (very long, so I just show the beginning of the file):

        /*
         * Intel ACPI Component Architecture
         * AML Disassembler version 20090123
         *
         * Disassembly of dsdt.aml, Mon May  6 20:41:40 2013
         *
         *
         * Original Table Header:
         *     Signature        "DSDT"
         *     Length           0x00003794 (14228)
         *     Revision         0x01 **** ACPI 1.0, no 64-bit math support
         *     Checksum         0x46
         *     OEM ID           "DELL"
         *     OEM Table ID     "dt_ex"
         *     OEM Revision     0x00001000 (4096)
         *     Compiler ID      "INTL"
         *     Compiler Version 0x20050624 (537200164)
         */
        DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000)
        {
            Method (DBIN, 0, NotSerialized)
            {
                Noop
            }

            Scope (\)
            {
                Device (_SB.VBTN)
        ...................

    But I cannot find the _PSS field shown on the website above, and I do not know why. I am sure the current cpufreq driver shows 4 available frequency levels, so at least something in the tables should reflect this, right? Has anybody here played with the DSDT table before?

    Thanks,
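
    One possibility worth checking (an assumption, not something established in the question): on many machines the processor _PSS objects are defined in an SSDT rather than in the DSDT itself, so a DSDT-only disassembly would never show them. A minimal sketch for dumping and searching the SSDTs, assuming the kernel exposes them under /sys/firmware/acpi/tables:

        # copy each SSDT out of sysfs and disassemble it
        for t in /sys/firmware/acpi/tables/SSDT*; do
            cat "$t" > "$(basename "$t").aml"
        done
        iasl -d SSDT*.aml

        # list which disassembled tables define _PSS
        grep -l _PSS *.dsl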

    Read the article

  • CloudFlare dashboards empty, or performance issues

    - by Katafalkas
    I wanted to test CloudFlare performance, so I put my image gallery domain on it and started testing. I added page rules for caching and chose Security: Essentially Off. NS check tools say that my domain name is propagated to CloudFlare. For testing purposes I created a page that loads 200 images from that server and used the loads.in website to measure how much faster it is. After trying a few regions, I noticed there was no improvement in loading speed. So I looked at the dashboards, and they were empty. I am not sure if I am doing something wrong, made some error in my setup, or if it takes a few days to start caching or working properly, but at the moment, after a day of testing, the dashboards are empty. The NS check tools also say that all name servers are propagated to CloudFlare and working fine. So I assume I got bad performance because it is simply not working. I sent a letter to the CloudFlare support team but did not get any straight answer.

    So essentially my questions are:

    1. Does anyone have any experience with CloudFlare?
    2. How long does it take for it to start caching static content to the CDN?
    3. Or is there simply something I am doing wrong?
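
    One quick self-check that sidesteps the dashboards (a sketch; the image URL is a hypothetical stand-in for one of the gallery's real files): CloudFlare reports per-request cache state in the CF-Cache-Status response header, so inspecting headers shows directly whether requests are going through the CDN at all:

        curl -I http://gallery.example.com/images/photo001.jpg
        # Headers worth looking for in the response:
        #   Server: cloudflare-nginx    -> the request actually passed through CloudFlare
        #   CF-Cache-Status: HIT        -> the file was served from CloudFlare's cache
        # A MISS on the first request followed by HIT on a repeat is the expected
        # pattern; no CloudFlare headers at all means traffic is still hitting the
        # origin server directly.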

    Read the article

  • Computer won't go into standby

    - by Robert
    When I select Start -> Turn off computer -> Standby, the "turn off computer" window closes, and then nothing else happens. I can start new applications, and Windows acts like I never selected standby; I ran it for several hours after that. If a TV program is scheduled to record when I select standby, I get a window (from the Pinnacle TV software) asking if I'm sure, since there are programs scheduled to record, and the computer just keeps running after I select yes, never going into standby. I added that detail as it shows the standby process is starting. [This problem also happens if a TV program is not scheduled, so the scheduler task is not running/in memory. It happens regardless of whether I'm watching TV, and regardless of whether Media Center is running (it usually isn't; I'm using Pinnacle to watch TV).]

    I looked at "How to troubleshoot hibernation and standby issues in Windows XP" (http://support.microsoft.com/kb/907477): ACPI is enabled, and "standby" is an option in "Power Options Properties", so it appears to be set up correctly.

    Windows XP SP3 Media Center Edition, all current updates installed.

    Read the article

  • Access keystore on Sun ONE Webserver 6.1 for 2048 bit key length SSL

    - by George Bailey
    We want to generate 2048-bit CSRs. The browser-based GUI provides us with a 1024-bit CSR, and I don't know how to change that. It seems that 1024-bit key lengths will no longer be supported by SSL companies. (Lower-cost options only support 2048-bit. Thawte, who are much more expensive, say they accept 1024-bit for one- or two-year certificates only, but not three.) The legacy systems in question are running Sun ONE Webserver 6.1. Upgrading would be time consuming, and we would rather not have to do that right now. We will be phasing these out, but it will take a while, so...

    Got it!! http://middlewarekb.wordpress.com/2010/06/30/how-to-generate-2048-bit-keypair-using-sun-one-or-iplanet-6-1-servers/ is for the same version of the webserver I am using:

        /opt/SUNWwbsvr/bin/https/admin/bin/certutil -R \
            -s "CN=sub.domain.ext,OU=org unit,O=company name,L=city,ST=spelled state,C=US,E=email" \
            -a -k rsa -g 2048 -v 12 \
            -d /opt/SUNWwbsvr/alias \
            -P https-sub.domain.ext-hostname- \
            -Z SHA1

    Previous efforts edited out.

    Read the article

  • RRAS with DHCP when the IP pool is on a different subnet

    - by John B
    I run a small business network, and over the last couple of days I have been setting up equipment to add VPN capabilities to our network. I've got the following set up:

    - Windows 2008 R2 with RRAS - 172.22.200.50
    - Cisco RV082 router - 172.22.100.1 / 172.22.200.1

    The Cisco router only supports DHCP on a single class C network: 172.22.100.0/24. On the Cisco router I have set up an additional subnet, 172.22.200.0/24. The DHCP range is 172.22.100.200-254. When a PPTP connection comes in to the router, it is forwarded to my RRAS at 172.22.200.50. If I configure RRAS to assign IPs from a static pool on the 172.22.200.0/24 subnet, everything works fine except the DNS suffix / search domain. However, if I set RRAS to use DHCP, I am no longer able to contact any devices on the network; the IP I receive is on a different subnet (172.22.100.0/24).

    Is it possible to still use DHCP as the method of IP assignment in RRAS, even when the IP addresses assigned are in a different subnet? If yes, what piece of configuration am I missing to fix the VPN connection issues mentioned above? The reason I want RRAS with DHCP to work is that, from what I have understood, this is the "only" way to hand out a DNS suffix to VPN clients. Any help on this matter is greatly appreciated!

    Read the article

  • How to add a privilege to an account in Windows?

    - by mark
    Given:

    - A VM running Windows 2008.
    - I am logged on there using my domain account (SHUNRANET\markk).
    - I have added the "Create global objects" privilege to my domain account.
    - The VM is restarted (I know logout/logon is enough, but I had to restart).
    - I log on again using the same domain account. It still seems to have the privilege.
    - I run some process and examine its Security properties using Process Explorer. The account does not seem to have the privilege.

    This is not idle curiosity. I have a real problem: without this privilege, the named pipe WCF binding works on neither Windows 2008 nor Windows 7! Here is an interesting discussion on this matter: http://social.msdn.microsoft.com/forums/en-US/wcf/thread/b71cfd4d-3e7f-4d76-9561-1e6070414620. Does anyone know how to make this work? Thanks.

    EDIT: BTW, when I run the process elevated, everything is fine and Process Explorer displays the privilege as expected. But I do not want to run it elevated.

    EDIT2: I equally welcome any solution, be it configuration only or mixed with code.

    EDIT3: I have posted the same question on the MSDN forums and they have redirected me to this page: http://support.microsoft.com/default.aspx?scid=kb;EN-US;132958. I am yet to determine its relevance, but it looks promising. Notice also that it is a completely coding solution that they propose, so whoever moved this post to Server Fault - please move it back to Stack Overflow.
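
    As a quick way to confirm what a given logon session actually holds (a sketch; "Create global objects" corresponds to the internal privilege name SeCreateGlobalPrivilege, and the output line below is illustrative):

        C:\> whoami /priv | findstr /i SeCreateGlobalPrivilege
        SeCreateGlobalPrivilege    Create global objects    Disabled

    Note that whoami /priv distinguishes between a privilege being absent from the token entirely and being present but disabled, which is a useful distinction when comparing the elevated and non-elevated cases.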

    Read the article

  • VMWare Fusion cannot connect to the NAT connection on my Mac

    - by FFish
    I have been using VMware Fusion on my Mac to check out my websites on localhost. Now I can't connect anymore with the NAT connection. There seems to be a problem with my IP address or MAC address? I have no idea what causes this; it was working fine before!

    In the XP (SP2) VM, in the taskbar I see the Local Area Connection with the yellow warning icon. The bubble says: "This connection has limited or no connectivity. You might not be able to access the Internet or some network resources. For more information, click this message." Doing that opens up the Local Area Connection Status panel. In the Support tab, when I click the Repair button I get the following message: "Windows could not finish repairing the problem because the following action cannot be completed: Renewing IP address." I tried disabling my firewall and also XAMPP, which I use as a server on OS X.

    VMware version: 3.1. VM: XP SP2. Mac OS X 10.6.3. Any help would be greatly appreciated.
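
    One thing often suggested for NAT/DHCP trouble in Fusion 3 (a sketch, assuming the default install location; worth double-checking the path on the machine before running) is restarting Fusion's own networking services from the OS X Terminal, which restarts its built-in DHCP and NAT daemons:

        sudo "/Library/Application Support/VMware Fusion/boot.sh" --restart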

    Read the article

  • Windows XP only loads in VGA mode, and crashes when raising resolution

    - by Harel
    My kid's computer (Windows XP, SP3) started to (what appears to be) crash on boot. It will only boot in Safe or VGA mode, and if I try to raise the resolution from 640x480 it just reboots itself, and an error appears in the Event Log. When it loads up not in VGA mode, the monitor shuts off just after the Windows logo is shown. It seems like Windows is actually running, but I can't see anything on screen (the monitor is off for lack of signal). Nothing was installed recently that I know of, short of the usual Windows updates.

    Thanks, Harel

    Below is the Event Log error:

        Event Type:     Error
        Event Source:   System Error
        Event Category: (102)
        Event ID:       1003
        Date:           15/04/2012
        Time:           16:27:11
        User:           N/A
        Computer:       -----
        Description:
        Error code 1000008e, parameter1 c0000005, parameter2 f745b0bf,
        parameter3 ede24f98, parameter4 00000000.
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 53 79 73 74 65 6d 20 45   System E
        0008: 72 72 6f 72 20 20 45 72   rror  Er
        0010: 72 6f 72 20 63 6f 64 65   ror code
        0018: 20 31 30 30 30 30 30 38    1000008
        0020: 65 20 20 50 61 72 61 6d   e  Param
        0028: 65 74 65 72 73 20 63 30   eters c0
        0030: 30 30 30 30 30 35 2c 20   000005,
        0038: 66 37 34 35 62 30 62 66   f745b0bf
        0040: 2c 20 65 64 65 32 34 66   , ede24f
        0048: 39 38 2c 20 30 30 30 30   98, 0000
        0050: 30 30 30 30               0000

    Read the article

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset, and I was wondering if anyone has any experience with deploying SFU on a shared drive. Wherever possible, we like to host our dev tools on one central NAS and call to the NAS to access the tools, instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl; really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find - the meat-and-potatoes command line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU components (and maybe the Remote Connectivity components), will it run nicely from a shared location?

    To head off some questions: yes, I've looked at Cygwin. Unfortunately its performance in our build and test environment is poor. It runs considerably slower than MKS, and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. And yes, I'm looking at the MinGW offering in parallel to this.

    Read the article

  • Investigating a potential CPU failure

    - by Jernej
    On an Ubuntu server that I am using for computations, I have recently observed that some CPU-intensive programs (GUROBI, CPLEX) often segfault. In correspondence with tech support for the respective programs, it was suggested that it may be a hardware issue. The administrator of the server performed a detailed memtest, and the RAM modules appear to be fine. Hence I used the tool mprime to test the CPU, and the following two lines appear multiple times during the execution of the stress tests:

        [Worker #4 Oct 18 18:47] FATAL ERROR: Rounding was 0.498046875, expected less than 0.4
        [Worker #4 Oct 18 18:47] Hardware failure detected, consult stress.txt file.

    The stress.txt file itself is not very verbose about what could be the cause of this error, so I would like to ask whether anyone here happens to know what could cause this issue. Is there some other test I could perform to narrow the problem down further? The temperature of the system (and all cores) was fine during the entire stress test (+69.0°C (high = +80.0°C, crit = +98.0°C)). The CPU in question is an Intel Core i7-2600K CPU @ 3.40GHz and is not overclocked or modified in any way. Also interesting: if I run mprime to stress only the CPU, all tests pass fine. The error is only triggered when I let mprime stress the CPU+RAM.
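
    One additional check worth running (a sketch; availability of mcelog varies by distro, and the log path below is the common default rather than a guarantee): when the CPU itself detects a hardware fault it raises a Machine Check Exception, and finding MCE records would corroborate a CPU problem independently of mprime:

        # look for machine-check events in the kernel log
        dmesg | grep -iE 'machine check|mce'

        # if the mcelog package is installed, decoded events usually land here
        cat /var/log/mcelog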

    Read the article

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have this directory from which I cannot remove the read-only attribute. The computer is running XP SP2 (or SP3, not sure) and the directory sits on an NTFS file system. Looking on the web I found http://support.microsoft.com/kb/256614, which says that if the directory is "customized" it's treated as a system folder and thus "read only". I don't think this is the scenario in my case, but anyway it's not helping. Their recommendation is more or less:

        attrib -r -s d:\data /s /d

    and this is not working for me. Any other ideas?

    More info: the directory is served by an HTTP server (WAMP) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested).

    Edit 2: The original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS.

    Edit 3: I left the company where this situation happened, so I cannot fully close this question. But things got even worse: I tested the "attrib -r" answer I posted, and it did not work for me, yet the developer later said that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing details. If anyone has the same problem and one of the answers helps them - please comment.

    Read the article

  • Web-based SVG or JavaScript Org Chart or Tree Graph Plotting Visualization API

    - by asoltys
    Hi,

    I'm looking to build an interactive web-based org chart for a large organization. I somewhat like the interface at ancestry.com, where you can hover over people, pan/zoom around, and click on different nodes to make them the root. Ideally, I'd like it if people could belong to multiple organizational entities like committees, working groups, etc. In other words, the API should support graphs in general, not just trees. I'd like to be able to visually explode each organizational substructure into its constituents by clicking on it, with a nice animation of the employees ballooning or spilling out, so you can really interactively drill down through the organization.

    I found http://code.google.com/apis/visualization/documentation/gallery/orgchart.html but it looks a bit rudimentary. I know there are desktop tools like OrgPlus and Visio that can build static charts, but I'm really looking for a free, web-based API with open standards-based output like SVG or HTML5 Canvas elements, rather than Flash or some proprietary output. Something I can embed into a custom web application and style myself. Something interactive.
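
    For reference, a minimal sketch of the Google Visualization OrgChart linked above, to make the comparison concrete (HTML/JavaScript; the element id and sample names are illustrative). It renders as plain HTML tables rather than SVG or Canvas, which is part of why it feels rudimentary:

        <div id="chart"></div>
        <script src="http://www.google.com/jsapi"></script>
        <script>
          google.load('visualization', '1', {packages: ['orgchart']});
          google.setOnLoadCallback(function () {
            var data = new google.visualization.DataTable();
            data.addColumn('string', 'Name');     // node id
            data.addColumn('string', 'Manager');  // parent node id
            data.addRows([
              ['CEO', ''],
              ['VP Engineering', 'CEO'],
              ['VP Sales', 'CEO']
            ]);
            new google.visualization.OrgChart(document.getElementById('chart'))
                .draw(data, {allowCollapse: true});
          });
        </script>

    Note the single Manager column: each node has exactly one parent, so this API is limited to trees and cannot express membership in multiple committees or working groups.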

    Read the article

  • How to install IE9 when KB2120976 is not applicable to my Windows 7 x86 Ultimate edition?

    - by CVertex
    I'm trying to install IE9: I visit http://beautyoftheweb.com, download it, and the installer then says it's downloading prerequisites. After a minute, the installer says there's a problem and directs me to "Prerequisites for installing Internet Explorer 9 Beta". I click on the x86 installers one by one; most say "already installed", but http://support.microsoft.com/kb/2120976/ says "The update is not applicable to your computer". Lame. So I retry the IE9 install and go through the whole process again with the same result.

    I discovered there's an IE9 install log at C:\Windows\IE9_main.log, which logs the install process and reports an error for 2120976:

        00:06.630: ERROR: Error installing prerequisite file (C:\Users\Vijay\AppData\Local\Temp\IE98036.tmp\KB2120976_x86.msu): 0x80240017 (2149842967)
        00:06.677: INFO: PauseOrResumeAUThread: Successfully resumed Automatic Updates.
        00:12.090: INFO: Link clicked, opening URL in new window:'http://go.microsoft.com/fwlink/?LinkId=185111'
        00:12.106: INFO: Setup exit code: 0x00009C47 (40007) - Required updates are missing from the system.

    Why is KB2120976 inapplicable to my legal Windows 7 Ultimate system? Any help is greatly appreciated.

    Read the article

  • What can cause Powershell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon to a support technician simulating a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them are run at logon; some are run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to "unrestricted" so that the scripts can run from the file server.

    This works perfectly fine for logon scripts (at least, so far), but for scripts that are run later (usually through a published application, but the same applies when using Terminal Services and a full desktop), the results are inconsistent: some users can run the scripts fine; some are always prompted in the PowerShell console before the scripts will run. I cannot find anything that could cause this behavior, and it's really inconsistent: if I start PowerShell manually and run Get-ExecutionPolicy, I am told that the current policy is unrestricted. Yet, if from the same session I try to run a script through a program that calls powershell <script file name> <parameters>, I get prompted before the script can run.

    What could cause such behavior?
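
    One diagnostic worth capturing in an affected session (PowerShell 2.0 or later; the output below is illustrative, not taken from the environment described): execution policy is resolved from several scopes in a fixed order of precedence, so listing all scopes in a session that prompts can reveal where an unexpected value is coming from:

        PS> Get-ExecutionPolicy -List

        Scope          ExecutionPolicy
        -----          ---------------
        MachinePolicy  Undefined
        UserPolicy     Undefined
        Process        Undefined
        CurrentUser    Undefined
        LocalMachine   Unrestricted

    Comparing this table between a working user and a prompting user would show whether the effective policy itself differs, or whether something else (such as the scripts being flagged as coming from the Internet zone) is triggering the prompt.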

    Read the article

  • I am looking for a tool to measure or detect "unresponsiveness" of a desktop PC

    - by Tom H
    I have a client that provides some server systems to a hospital, and a support ticket was raised that the desktop application was hanging waiting for the server. We did some extensive testing, and it's pretty clear that the server is responsive and the network is fine, and that the problem is on the client end (no requests are received during the hang, etc.). We took a look at the desktop machines and they should be fine, so we raised tickets with the software vendor, who says that it must be the hardware; the hardware company says that it is the software; etc., etc.

    Anyway, talking to the nurses, they say that these machines often "hang" for 30 seconds at a time, sometimes during important moments where they need to get data for a patient who is unwell, such as charts and status. So I want to put a client on these machines that would be able to detect arbitrary "unresponsiveness" of the keyboard/mouse and log it for analysis later. Obviously I am wary of suggesting an application that takes resources and makes the problem even worse, so I would be interested to see any tools that would detect these scenarios (is it correct to say that the keyboard interrupts are being discarded?) by looking for the OS discarding the interrupts, or whatever is appropriate here.

    So go on then, Server Fault, here is your chance to save a life.... ;-)

    Edit: I am starting to think that some of the tools associated with real-time systems might be appropriate, at least as a diagnostic.
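
    In the absence of a ready-made tool, a minimal sketch of the idea (PowerShell, assuming it is available on the desktops; the log path and 5-second threshold are arbitrary choices): a lightweight loop that sleeps for one second and logs whenever it wakes up much later than scheduled, which is a cheap proxy for whole-machine stalls, though it cannot distinguish lost input from general freezing:

        $last = Get-Date
        while ($true) {
            Start-Sleep -Seconds 1
            $now = Get-Date
            $gap = ($now - $last).TotalSeconds
            # a 1-second sleep that takes more than 5 seconds suggests a stall
            if ($gap -gt 5) {
                Add-Content -Path C:\stall-log.txt `
                    -Value ("{0:yyyy-MM-dd HH:mm:ss} stalled for {1:N1}s" -f $now, $gap)
            }
            $last = $now
        }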

    Read the article

  • Virus ridden computer freezes on startup - can't access safe mode

    - by Eric
    Someone whom I love, but who cannot be trusted with a live internet connection, downloaded a particularly nasty virus that in turn downloaded a variety of unknown other viruses onto my home computer. The computer now freezes completely a few seconds after reaching the desktop and is unresponsive to any keyboard or mouse command. There are videos of my little kid on this hard drive that are not backed up and that I cannot bear to lose. If I could get in there long enough to copy them off to an external drive, I would have no problem doing a clean Windows install to fix the problem; everything else is backed up online, but the videos were too large.

    Normally I would start by going into safe mode, but I have a large Dell monitor that doesn't show anything until the welcome screen appears. I think I have gotten into the setup screen once or twice by mashing keys before I can see anything, but this monitor doesn't support that, so I can't see what I'm doing to get it to boot from CD or anything else. I'm at my wits' end. Any advice?

    Read the article

  • Free tiered storage automation in linux?

    - by NginUS
    I have a couple of virtualized fileservers running in QEMU/KVM on Proxmox VE. The physical host has 4 storage tiers with significant performance variances, attached both locally and via NFS. These will be provided to the fileserver(s) as local disks, abstracted into pools, and will handle multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers.

    There's a similar post on the site here: "Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage)", in which the accepted answer was a suggestion to abandon a Linux solution for NexentaStor. I like the idea of running NexentaStor; it almost fits the bill. NexentaStor provides hybrid storage pools, and I love the idea of checksumming. 16TB without incurring licensing fees is a huge plus as well; after the expense of the hardware, free is about all my budget can handle. I don't know if ZFS pools are adaptive or dynamically allocated based on load, but it becomes irrelevant since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment.

    Then I saw a commercial solution called SmartMove (http://www.enigmadata.com/smartmove.html), and it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

    Read the article

  • How to change libcurl SSL backend from gnutls to openssl on Ubuntu server

    - by Jayesh
    I am getting gnutls-specific errors in my Tornado webserver while processing Google OpenID SSL responses. One of the suggestions I got from the Tornado mailing list is to try the OpenSSL backend instead of GnuTLS, but that doesn't seem to be straightforward on Ubuntu server (11.10). On Ubuntu server, the GnuTLS flavor of libcurl is provided by the libcurl3-gnutls package, and OpenSSL curl support is provided by the libcurl4-openssl-dev package. (I don't know why the latter is named 4 and dev, but I couldn't find any other openssl+curl package in apt-cache search.)

    I had libcurl3-gnutls installed by default, but not libcurl4-openssl-dev, so I installed the latter and restarted the Tornado instances. That didn't seem to work; I still got the same gnutls errors. I found old discussions on the curl mailing lists regarding the problems of supporting different SSL backends in libcurl, but didn't find exactly how it is done today. So far my guess is that OpenSSL is built into libcurl and GnuTLS is provided through a separate package (which would explain why there is no libcurl3-openssl). But how do I make libcurl pick up the OpenSSL backend and not GnuTLS? Is there some option in the libcurl/pycurl API to do this?

    I tried uninstalling libcurl3-gnutls, but apt-get prompted that it would also remove python-pycurl along with it, so that won't do.
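
    A quick way to see which backend is actually in use at each layer (both commands are standard; the pycurl version string names the SSL library the module was linked against, which is the one Tornado's code will actually use):

        # what the curl command-line tool was built against
        curl --version

        # what the installed pycurl module was built against
        python -c 'import pycurl; print(pycurl.version)'

    If the pycurl line still reports GnuTLS, then installing libcurl4-openssl-dev alone changed nothing for Tornado: the python-pycurl package stays linked against the GnuTLS flavor (as the apt removal prompt suggests), and pycurl would need to be rebuilt against the OpenSSL one.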

    Read the article

  • Installing Windows 7 from USB on a Thinkpad T61

    - by Halik
    I am trying to install Windows 7 Professional from a USB 3.0 flash drive on a ThinkPad T61. The problem is that the ThinkPad's BIOS will not detect the flash drive as a bootable medium and won't allow booting from it.

    What I did:

    - Enabled USB BIOS Support in the BIOS (it was on by default).
    - In the startup menu, added USB HDD to the boot order (it has a '-' sign in front of it).
    - Created Windows 7 install media with UNetbootin, WinUSB (a Linux tool), dd, and Grub4DOS. As you can tell, I currently only have access to a Linux machine to make the flash drive.

    What happens: the T61 BIOS shows '-USB HDD' in the boot order menu. The '-' sign suggests that the plugged-in flash drive is currently not bootable. The same flash drive (with the same Windows image on it) boots without any problems on a Dell D430 and a Lenovo Y550. Also, an Ubuntu 12.04 install USB created with UNetbootin shows as bootable ('+' sign in the BIOS boot order menu) and boots from the F12 boot menu.

    Additional info: thinkwiki.org says that some ThinkPad BIOSes do not use the MBR on flash drives. It suggests using the Extended-IPL boot loader, but the provided links are broken and there seem to be no mirrors.

    Solution: http://superuser.com/a/430186/54970

    Read the article

  • Mailman delivery troubles

    - by stanigator
    I'm not sure if this is a good place to ask this question. It's about the mailing list management software Mailman, from GNU. Here are the details:

    - Hosting provider: Vlexofree
    - Domain: www.sysil.com, with Google Apps
    - Mailing list created from the hosting cpanel: [email protected]

    I registered a list of subscribers and tried sending an email to [email protected]. I got the following error message:

        Delivery to the following recipient failed permanently:
            [email protected]
        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient
        domain. We recommend contacting the other email provider for further
        information about the cause of this error. The error that the other server
        returned was:
        550 550-5.1.1 The email account that you tried to reach does not exist. Please try
        550-5.1.1 double-checking the recipient's email address for typos or
        550-5.1.1 unnecessary spaces. Learn more at
        550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 23si6479194ewy.44 (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.216.90.136 with SMTP id e8mr1469147wef.110.1264220118960;
         Fri, 22 Jan 2010 20:15:18 -0800 (PST)
        Date: Fri, 22 Jan 2010 20:15:18 -0800
        Message-ID: <[email protected]>
        Subject:
        From: Stanley Lee <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=0016e6dab0931bccc3047dcd2f1e

        - Show quoted text -

    Is there any way of fixing this problem? I would like this mailing list to work through my hosting and domain. Thanks in advance.
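
    One quick check worth running (standard DNS tooling; the pattern to look for is an assumption based on the bounce coming from Google): if the domain's MX records point only at Google Apps, then mail to [email protected] is delivered to Google, where no such mailbox exists, and Mailman on the hosting server never sees it:

        dig MX sysil.com +short
        # If only Google hosts are listed (aspmx.l.google.com and friends), list mail
        # never reaches the Vlexofree server where Mailman runs; the list address
        # would need to exist in Google Apps, or the MX routing would need to change.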

    Read the article
