Search Results

Search found 6945 results on 278 pages for 'azure use cases'.

  • Can't connect to MySQL using a self-signed SSL certificate

    - by carpii
    After creating a self-signed SSL certificate, I have configured my remote mysqld to use it (SSL is enabled). I ssh into my remote server and try connecting to its own mysqld over SSL (MySQL server is 5.5.25):

        ~> mysql -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
        Enter password:
        ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)

    OK, I remember reading there's some problem with connecting to the same server via SSL, so I download the client keys to my local box and test from there:

        ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
        Enter password:
        ERROR 2026 (HY000): SSL connection error

    It's unclear what this "SSL connection error" refers to, but if I omit --ssl-ca, I am able to connect using SSL:

        ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 37
        Server version: 5.5.25 MySQL Community Server (GPL)

    However, I believe this only encrypts the connection without actually verifying the validity of the cert (meaning I would be potentially vulnerable to a man-in-the-middle attack). The SSL certs are valid (albeit self-signed) and do not have a passphrase on them. So my question is: what am I doing wrong? How can I connect via SSL using a self-signed certificate? The MySQL server version is 5.5.25 and the server and clients are CentOS 5. Thanks for any advice. Edit: Note that in all cases the command is being issued from the same directory where the SSL keys reside (hence no absolute path).
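
    One well-documented cause of ERROR 2026 with self-signed chains is the CA certificate and the server/client certificates sharing the same Common Name; MySQL then rejects the chain even though the files themselves are valid. As a hedged sketch (CN values and file names here are placeholders), regenerating the chain with three distinct CNs looks like:

        # the CA, server, and client certs must each carry a different CN
        openssl genrsa 2048 > ca.key
        openssl req -new -x509 -nodes -days 365 -key ca.key -out ca.cert -subj "/CN=mysql-ca"
        openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.req -subj "/CN=mysql-server"
        openssl x509 -req -in server.req -days 365 -CA ca.cert -CAkey ca.key -set_serial 01 -out server.cert
        openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.req -subj "/CN=mysql-client"
        openssl x509 -req -in client.req -days 365 -CA ca.cert -CAkey ca.key -set_serial 02 -out client.cert

    After pointing ssl-ca, ssl-cert, and ssl-key at the new files in my.cnf and restarting mysqld, the failing --ssl-ca test above is worth repeating.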

  • Lotus NotesSQL Driver - cannot install

    - by PowerUser
    I need to install the Lotus NotesSQL driver (current version is 8.5) onto a virtual machine running XP. Here's what I've done so far:

    1. I retrieved the file (CZOWFEN.zip) from the IBM website and ran the exe.
    2. I went to My Computer > Properties > Advanced > Environment Variables > System Variables > Path and added ";c:\notessql" so the ODBC administrator could find Notes.ini (why the setup file didn't do this in the first place, I don't know).
    3. I opened the ODBC administrator and tried to add a new System DSN for a Lotus DB, and got: "The setup routines for the Lotus Notes SQL Driver (*.nsf) ODBC driver could not be loaded due to system error code 126".
    4. I re-downloaded and reinstalled the driver (making sure I had the latest version, 8.5). No luck.
    5. I checked the registry; all the file paths appeared to be correct.
    6. Per many, many similar cases on the internet, I tried several different variations of adding the various Lotus Notes folders to my PATH variable. Same error.

    I've done this setup on 5 different machines now with no problem. The only difference here is that this machine is virtual. Ideas?
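
    Error code 126 usually means a DLL that the driver itself depends on failed to load, not that the driver file is missing. A hedged next step (depends.exe is Microsoft's free Dependency Walker, and the nsql32.dll name is a guess; use whatever Driver path the registry actually shows) is to open the driver DLL and look for dependencies flagged in red:

        :: the Driver value under HKLM\SOFTWARE\ODBC\ODBCINST.INI names the actual DLL
        depends.exe "C:\NotesSQL\nsql32.dll"

    A dependency missing on a clean VM often comes down to a runtime DLL that happened to be present on the five physical machines.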

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities, but have you read the FAQ? We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem - just those that only exist in, or can best be solved in, the IT and sysadmin domain. Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you, it's just that we like to keep our corn pops separate from our cocoa puffs. In that vein, here are a bunch of other forums where you can get help (this list contains the top two forums for each category, as voted for below):

    Programming: Stack Overflow
    Consumer-level computer hardware: Ars Technica OpenForum, Tom's Hardware Forums
    Computer software: AnandTech Forum
    Web design/hosting/CMS: SitePoint Forums, Web Hosting Talk
    Math, science, engineering: Physics Forums, PlanetMath
    Other/general: Ask MetaFilter, Google Search, Mahalo Answers

    Check out the answers below for even more suggestions! What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer so votes can bring the best forums to the top.

  • OSX esc key stops working (randomly)

    - by Jan Hancic
    I have a 2011 MacBook Air with OS X Mountain Lion (I've upgraded from Lion), and ever since upgrading, my escape key randomly stops working. It's not that the laptop's keyboard is damaged, as the same thing happens if I use Apple's Bluetooth keyboard, so I'm guessing it's a software issue. Also, in some cases pressing ctrl+esc achieves the same thing as just esc, so I'm 100% sure it's not a hardware problem. Does anybody have any idea what this might be all about and how to fix it? Edit: the escape key stops working completely, but it starts working again after I restart the computer. Edit 2: this usually happens after the computer wakes from sleep. So it's not that it just stops working in the middle of a session, but rather after I put the machine to sleep (or just close the lid) and then open it again. Has nobody got the same issue? Is there a better place to ask OS X related questions than Super User, maybe?

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directories on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files; the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or the .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio - it is listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.).

    Any ideas?
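
    Given the two full-text indexes in the backup, one hedged guess is that the hang is the SQL 2005-to-2008 full-text catalog upgrade that runs after the data copy finishes. Setting the server's full-text upgrade option to "reset" (2) before restoring skips that rebuild; the logical file names below are placeholders for whatever RESTORE FILELISTONLY reports:

        rem hedged sketch: skip the full-text catalog upgrade, then restore with progress messages
        sqlcmd -S localhost -E -Q "EXEC sp_fulltext_service 'upgrade_option', 2"
        sqlcmd -S localhost -E -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\backups\mydb.bak' WITH MOVE 'MyDb_Data' TO 'E:\DATA\MyDb.mdf', MOVE 'MyDb_Log' TO 'F:\LOG\MyDb.ldf', STATS = 5, RECOVERY"

    The catalogs can then be rebuilt explicitly once the database is online.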

  • Desktop SATA drives in SATA <-> FC array

    - by chris
    Let's assume you've got a box like one of these with space for 24 SATA disks. What are the best bits of advice for deploying this? For instance, should you be greedy and go for the 1.5 or 2 TB disks, or are they just not reliable enough to be used in an array like this, so you should stick with 640 or 750 GB disks instead? Also, I know that FC (or generically, "enterprise-class") disks have a different error recovery strategy than desktop disks. An enterprise disk will fail a read quickly and report to the controller that it wasn't able to read that block, and the RAID controller will quickly regenerate the info from the parity disk and mark the block as bad. A desktop disk, on the other hand, will try and try and try again to get the data, and in pathological cases this may cause the RAID controller to fail the whole disk because the read operation times out. So there are a couple of aspects to this question:

    - What's the best sort of disk to get today (i.e. specific disks on the market in Feb 2010)?
    - Generically, what should someone look for when trying to buy something like this that kinda walks the line between enterprise and consumer?
    - Lastly, is there anything that can be done with current "consumer" disks to make them more suitable for array use? I.e. can you use a SMART configuration to change the error recovery strategy used by the disk?

    Thanks!
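
    On that last point, the knob in question is SCT Error Recovery Control (the same mechanism behind WD's TLER). A hedged sketch with a reasonably recent smartmontools, on a drive that actually implements SCT ERC:

        # cap read/write error recovery at 7 seconds (values are tenths of a second)
        smartctl -l scterc,70,70 /dev/sda
        # read back the current setting
        smartctl -l scterc /dev/sda

    The setting generally does not survive a power cycle, so it needs reapplying from a boot script.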

  • Mouse wheel in VirtualPC (mostly) does not work on 64-bit Windows 7 RC

    - by JonStonecash
    I have recently upgraded my laptop from WinXP Pro (32-bit) to Windows 7 RC (64-bit). I have a number of Virtual PC 2007 images that I use for testing on various platforms and looking at beta software, and I have installed the 64-bit version of Virtual PC. The images all work, with the exception of the mouse wheel within the virtual machine. I have tried this with WinXP Pro, Windows 7 RC, and Windows Server 2008 images. All are 32-bit and all exhibit the same behavior: a gentle rotation of the wheel does nothing; a quick rotation of the wheel sometimes gets a scroll and sometimes not. I regard this behavior as unusable, as I tend to use the mouse wheel a lot. All of this worked just fine on WinXP. I have re-installed the Virtual Machine Additions on all of the machines. The Windows 7 RC virtual image was created after the upgrade to Windows 7 and the 64-bit version of Virtual PC (just to isolate the possibility that I had corrupted the images during the transition). I have googled, binged, and yahoo-ed; there are scattered mentions of this problem (dating back to VPC 2004) but no solutions. I am aware that I could start up one of these images and then use a remote desktop connection to get access to it. I do just that for some development work, and the mouse works just fine; that is acceptable there because I spend hours at a time in the development VM. These test environments are different in that I will bring up an image for just a short time: minutes rather than hours. Adding the RDC step is much more significant in these cases. Does anyone have any idea of what to do next?

  • How to make 7zip faster

    - by user34463
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing 7-Zip and WinRAR at their default settings on both normal and best compression; in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

    - Is there a way to speed 7-Zip up? I'd like it to at least be on par with RAR's speed.
    - Is there a way to make recovery segments in 7-Zip like you can in RAR? I didn't see any, but I guess it could be a command-line thing.
    - I tested WinRAR and 7-Zip using the latest stable version of each (4.something for 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare-minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz/2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7-Zip.
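
    For the first two questions, a hedged sketch from the 7-Zip command line: compression level and multithreading are the big levers for speed, and while 7z archives have no built-in recovery record, the usual workaround is generating parity data alongside with the separate par2 tool:

        # lighter LZMA settings plus all cores; -mx=1 is the fastest preset
        7z a -t7z -mx=3 -mmt=on archive.7z myfolder\
        # approximate RAR recovery records with 10% parity data
        par2 create -r10 archive.7z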

  • TS (RD) Gateway Authentication Problem "The logon attempt failed"

    - by user2059
    I've been using TS Gateway to permit remote access for our staff for a few months now, and all has been well. Users either connect to a traditional terminal server desktop or hit our website and start a TS RemoteApp application; in both cases the connection is routed through a TS Gateway. However, I came into work this morning to find that it has stopped authenticating users through the TS Gateway, each time returning "The logon attempt failed", as seen in the image, even though the credentials are correct. It should be noted that everything works fine if the Gateway is taken out of the equation; it's the TS Gateway component that is causing these problems. Users experience this problem whether they connect through XP SP3, Vista, or 7. On the server, a total of 4 entries appear in the Windows security log at exactly the same time for each failed logon attempt: two 4624 "An account was successfully logged on" messages for the user, immediately followed by two 4634 "An account was logged off" messages. This suggests that the server is accepting the credentials as correct, then booting the user off. Nothing at all is recorded in the NPS and Terminal Server logs. A reboot doesn't change things, and neither does completely removing and reinstalling the NPS and Terminal Server roles. I'm baffled as to how this can happen suddenly without warning. Any suggestions would be greatly appreciated.

  • What might cause https failure when not specifying SSL protocol?

    - by user35042
    I have a VBScript program that retrieves a web page from a server not under my control. The URL looks something like https://someserver.xxx/index.html. I use this code to create the object that fetches the page:

        Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")

    When I wrote my program it had no problem retrieving this page. Recently, the web server serving this page went through an upgrade, and now my program can no longer fetch the page. Some clues:

    Clue 1: I can fetch the web page if I use a browser (I tried Firefox, IE, and Chrome).

    Clue 2: The VBScript code yields this error: "The message received was unexpected or badly formatted."

    Clue 3: I can fetch the web page from the command line in certain cases but not in others:

        curl --sslv3 -v -k 'https://someserver.xxx/index.html'   # WORKS!
        curl --sslv2 -v -k 'https://someserver.xxx/index.html'   # WORKS!
        curl -v -k 'https://someserver.xxx/index.html'           # FAILS
        curl --tlsv1 -v -k 'https://someserver.xxx/index.html'   # FAILS

    In the case where I do not specify a protocol, I get this error:

        * SSLv3, TLS handshake, Client hello (1):
        * error:14077417:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert illegal parameter
        * Closing connection #0

    In the case where I specify --tlsv1, I get this error:

        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094417:SSL routines:SSL3_READ_BYTES:sslv3 alert illegal parameter
        * Closing connection #0

    A. Does anyone have any suggestions or ideas on what might be going on at the web server end? (I am unable to talk to the admins of the web server to find out what they changed.)

    B. Is there a way I can change my VBScript code to work around this issue? Can the SSL version be forced?
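
    On question B: WinHttpRequest does expose a way to force the protocol. Option index 9 is SecureProtocols, a bitmask where 8 = SSL 2.0, 32 = SSL 3.0, and 128 = TLS 1.0. A hedged sketch mirroring the "curl --sslv3" case that succeeds:

        ' force SSL 3.0 only, matching the working curl --sslv3 test
        Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
        objWinHttp.Option(9) = 32
        objWinHttp.Open "GET", "https://someserver.xxx/index.html", False
        objWinHttp.Send
        WScript.Echo objWinHttp.ResponseText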

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are:

    1. Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    2. Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    3. New users need to be created to manage the databases and run the models.
    4. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano, or Fabric to automate the above steps (see the sketch below). It seems perfectly possible to implement most of the above functionality, except there are a couple of usage cases that I am wondering about:

    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source.
    - We may have to deploy on machines that are isolated from the Internet, i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.

    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
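
    To make the mapping concrete, a hedged Puppet sketch (the resource names, paths, and the puppetlabs-vcsrepo module are assumptions): the five steps above correspond almost one-to-one to package, user, vcsrepo, exec, and cron resources, and "puppet apply" runs such a manifest from a local directory, which suits the USB-key, no-Internet case:

        package { ['gcc', 'gfortran', 'postgresql']: ensure => installed }

        user { 'forecast': ensure => present, managehome => true }

        vcsrepo { '/opt/model-scripts':
          ensure   => present,
          provider => git,
          source   => 'https://example.org/model-scripts.git',
        }

        exec { 'build-wave-model':
          command => '/usr/bin/make install',
          cwd     => '/usr/local/src/wave-model',
          creates => '/usr/local/bin/wave-model',
          require => Package['gcc'],
        }

        cron { 'generate-forecast':
          command => '/opt/model-scripts/run_forecast.sh',
          user    => 'forecast',
          minute  => 0,
          hour    => '*/6',
        }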

  • Apache2 - mod_rewrite : RequestHeader and environment variables

    - by Guillaume
    I am trying to get the value of the request parameter "authorization" and store it in the request header "Authorization". The first rewrite rule works fine. In the second rewrite rule, the value of $2 does not seem to be stored in the environment variable; as a consequence, the request header "Authorization" is empty. Any idea? Thanks.

        <VirtualHost *:8010>
            RewriteLog "/var/apache2/logs/rewrite.log"
            RewriteLogLevel 9
            RewriteEngine On
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
            RequestHeader add "Authorization" "%{AUTHORIZATION}e"
        </VirtualHost>

    I need to handle several cases because sometimes the parameters are in the path and sometimes they are in the query, depending on the user. This last case fails; the header value for AUTHORIZATION looks empty:

        # if the query string includes the authorization parameter
        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        # keep the value of the parameter in the AUTHORIZATION variable and redirect
        RewriteRule ^/(.*) http://<ip>:<port>/ [E=AUTHORIZATION:%2,NE,L,P]
        # add the value of AUTHORIZATION in the header
        RequestHeader add "Authorization" "%{AUTHORIZATION}e"
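
    One hedged observation on the first block: the [L,P] rule runs before the [E=...] rule, and L stops rewrite processing, so the second rule never fires and AUTHORIZATION is never set. Swapping the order, and allowing for mod_rewrite's habit of exposing variables with a REDIRECT_ prefix after an internal round trip, gives a sketch like:

        RewriteEngine On
        # set the variable first; no L flag, so processing continues to the proxy rule
        RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
        RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
        # set the header only when the variable exists; try REDIRECT_AUTHORIZATION if this stays empty
        RequestHeader set Authorization "%{AUTHORIZATION}e" env=AUTHORIZATION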

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    - whether this has any significant performance impact compared to binding directly to port 80;
    - whether I could make a similar setup also work for HTTPS (or if that must run on 443);
    - whether there's a way to prevent other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently to other users somehow?

    ...or whether I should use authbind/privbind instead. Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?). But maybe the solution with iptables is good enough - what do you think? Thanks, Chris
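
    On the HTTPS question, the same REDIRECT approach extends to 443; a hedged sketch (8181 is Glassfish v3's default secure listener, and the OUTPUT rule covers connections originating on the box itself, which PREROUTING never sees):

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181
        iptables -t nat -I OUTPUT -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    The NAT rewrite happens once per connection, so the performance cost versus binding to port 80 directly should be negligible for a servlet container.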

  • Why isn't MediaWiki loading?

    - by E L
    I recently set up MediaWiki on an Apache server with PostgreSQL. It installed successfully. However, when I try to access the website, I get a blank page, and the error log reports the following:

        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Warning: require_once(/var/www/mediawiki-1.19.2/LocalSettings.php): failed to open stream: Permission denied in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134

    I've seen other people with similar problems, and the solutions have involved using chmod on LocalSettings.php to 644, or in other cases 755. Others have said to use chown to make LocalSettings.php match the Apache user, which is just 'apache' in my case. None of these solutions have worked for me. Does anyone have other suggestions, or maybe I missed something?
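
    When the mode and owner already look right, the two usual remaining suspects are SELinux (the 'apache' user suggests a RHEL-style distro, where enforcing mode is the default) and a parent directory Apache cannot traverse. A hedged sketch for checking both:

        getenforce                                                  # "Enforcing" means SELinux is in play
        ls -Z /var/www/mediawiki-1.19.2/LocalSettings.php           # compare its context with the other wiki files
        restorecon -v /var/www/mediawiki-1.19.2/LocalSettings.php   # reset it to the expected httpd-readable type
        namei -l /var/www/mediawiki-1.19.2/LocalSettings.php        # every directory on the path needs +x for apache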

  • Login box not shown on Ubuntu

    - by Alexandre
    I've installed Ubuntu 10.04 (64-bit) as a guest OS in VirtualBox, using Windows 7 Professional (64-bit) as the host. After the Ubuntu install, I installed Xfce4 (sudo apt-get install xfce4), logged in using an Xfce session, and when I logged out I couldn't see the login box anymore - only the regular GNOME background of the login screen. Then I restarted the virtual machine, and now I'm not able to see the login box at all, only the GNOME background. Does someone know how to solve this? Thanks in advance. Update: I've tried Xubuntu, which comes with Xfce, and I'm facing the same problem. As a common denominator between the two cases, I now see the problem arises after I've updated the system and then installed curl (via apt-get), zlib (make process), and git (make process). But I'm not sure this is the cause... Update: I isolated the problem. The bad guy is zlib, but I don't have a clue why. In fact, it crashes not only Ubuntu and Xubuntu but also Debian (yes, I've tested all of them). I'll keep this question up in case someone has the same problem, but I think my initial question is not valid anymore.
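
    A hedged guess at the "why": a zlib built from source lands in /usr/local/lib, which the runtime linker typically searches ahead of the distro's own libz, so GDM and the X stack can end up loading a mismatched libz.so and the greeter dies. Checking for, and removing, the shadowing copy is worth trying:

        ldconfig -p | grep libz          # see which libz.so resolves first
        ls /usr/local/lib/libz*          # the from-source copy, if present
        sudo rm /usr/local/lib/libz.*    # or rebuild with ./configure --prefix=$HOME/local
        sudo ldconfig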

  • Strange performance issue with Dell R7610 and LSI 2208 RAID controller

    - by GregC
    Connecting the controller to any of the three PCIe x16 slots yields choppy read performance around 750 MB/sec; the lowly PCIe x4 slot yields a steady 1.2 GB/sec read. This is with the same files, same Windows Server 2008 R2 OS, same RAID6 24-disk Seagate ES.2 3TB array on an LSI 9286-8e, same Dell R7610 Precision Workstation with A03 BIOS, same W5000 graphics card (no other cards), same settings, etc. I see super-low CPU utilization in both cases. SiSoft Sandra reports x8 at 5GT/sec in the x16 slot, and x4 at 5GT/sec in the x4 slot, as expected. I'd like to be able to rely on the sheer speed of the x16 slots. What gives? What can I try? Any ideas? Please assist. Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx

    Follow-up information: we did some more performance testing, reading from 8 SSDs connected directly (without an expander chip), which means both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0. The SSD RAID0 tests were conducted in a x16 PCIe slot, all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID6 array. Just for reference: the maximum possible read burst speed over a single channel of SAS 6Gb/sec is 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).

  • Getting at fsid under Linux? Or an alternate way of identifying filesystems?

    - by larsks
    In an environment with automounted home directories, such that the same filesystem exported by a fileserver may be mounted multiple times on the client, I would like to be able to authoritatively identify whether two mountpoints are in fact the same filesystem. That is, if the remote server exports /home, and the local client has:

        # mount
        fileserver:/home/l/lars on /home/lars type nfs (rw...)
        fileserver:/home/b/bob on /home/bob type nfs (rw...)

    I am looking for a way to identify that both /home/lars and /home/bob are in fact the same filesystem. In theory this is what the fsid member of the statvfs structure is for, but in all cases, for both local and remote filesystems, I am finding that the value of this member is 0. Is this some sort of client-side issue? Or do most modern NFS servers simply decline to provide a useful fsid? The end goal of all of this is to robustly interpret the output of the quota command for NFS filesystems. For example, given the example above, running quota as myself may return something like:

        Disk quotas for user lars (uid 6580):
             Filesystem   blocks     quota     limit  grace   files       quota       limit  grace
        otherserver:/vol/home0/a/alice
                              12  52428800  52428800             4  4294967295  4294967295
        fileserver:/home/l/lars
                         9353032   9728000  10240000        124018           0           0

    ...the problem here being that there exists a quota for me on otherserver which is visible in the results of the quota command, even though my home directory is actually on a different device. My plan was to look up the fsid for each mountpoint listed in the quota output and check whether it matched the fsid associated with my home directory. It looks like this won't work, so... any suggestions?
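
    For what it's worth, a hedged userspace check: GNU stat exposes the statfs fsid directly, which saves writing a C wrapper while testing whether the server returns anything useful:

        stat -f -c '%i  %n' /home/lars /home/bob    # %i prints the filesystem ID in hex

    If that prints 0 for both, the kernel client really is zeroing the field, and falling back on parsing the fileserver name out of /proc/mounts may be the only robust option.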

  • Remote NX login to Ubuntu, Gnome can't mount CD/DVD drive

    - by T.J. Crowder
    Even though I'm sitting next to it, I log into my Ubuntu 10.04 LTS system via NX Free Edition from another system at the moment (this is temporary, not worth buying a KVM for). Curiously, when I do that, GNOME's auto-mounting fails for CD/DVD media (I haven't tried other kinds) with a "Not Authorized" error, for instance when I put the Ubuntu 10.04 LTS installation CD in. This does not happen if I log in locally (not via NX) with the same user account. When using NX, I can mount the media if I invoke mount directly:

        tjc@midnight:~$ sudo mkdir /media/dvd
        tjc@midnight:~$ sudo mount -r -t iso9660 /dev/sr0 /media/dvd
        tjc@midnight:~$ ls /media/dvd
        autorun.inf  casper  dists  install  isolinux  md5sum.txt  pics  pool  preseed  README.diskdefines  ubuntu  wubi.exe

    ...which, along with the "Not Authorized" error, suggests some kind of permissions problem to me (doh). What I find odd is that the same user is involved in both cases (local and via NX). I'm new to Ubuntu on the desktop (I've used it and other distros on servers for years), so I'm afraid I don't know how this auto-mounting happens. I think it's handled by the gvfs package and its daemon, but that's about as far as I've gotten (and perhaps I've taken a left turn even getting that far). Although I can work around it with mount, does anyone know how I might get auto-mounting to work?
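
    A hedged explanation: an NX login is not a "local, active" console session as far as PolicyKit is concerned, and udisks only grants passive mounting rights to console sessions, hence "Not Authorized" for the same user. On 10.04, a local-authority override (the action name is my assumption; "pkaction" lists the real ones) placed in /etc/polkit-1/localauthority/50-local.d/allow-nx-mount.pkla would look like:

        [Allow NX sessions to mount removable media]
        Identity=unix-user:tjc
        Action=org.freedesktop.udisks.filesystem-mount
        ResultAny=yes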

  • HTPC aka "Media Center": cheap and *silent*?

    - by Unknown
    It may be me, or the place I live (Italy), but it seems pretty hard to find a build, a prebuilt nettop, or a laptop that fits the need. I need something silent, able to play back all H.264 full-HD content well (and not losing the HW acceleration because of softsubs...), silent, not ugly, silent, and (possibly) cheap. I'm going the Linux route, so I'm leaning towards a CPU-based or NVIDIA-integrated solution (I don't think ATI hardware-accelerated playback - or the Intel "HD" acceleration - is usable yet). The options:

    - Ion nettop: it's either the Acer Revo (but here it's incredibly pricey, and it's hard to find the dual-core version) or the ASRock Ion 330, which in the current version is rated "silent" at 26 dB. 26! That sounds pretty noisy to me, and the previous version was even worse. Was this product really aimed at the HTPC market? The Dell Zino, I think, is ATI-based, unfortunately.
    - Laptop: correct me if I'm wrong, but sub-600 €/$ units are quite loud under full load (because of the tiny fans). ULV laptops are quite similar: the tiniest fans mean high-pitched noise, and the CPU still lacks the power for non-hardware-accelerated video decoding.
    - Handmade build: little money can be saved with underpowered CPUs, and a low-to-midrange CPU would help in the case of non-hardware-accelerated content; but the cases are quite pricey, the PSU one has to get ranges between 100-150 €/$ minimum to keep the noise down, and a low-to-mid build, all included, sums up to over 650 €/$ for a still-ugly-looking unit, without the Blu-ray drive.

    Please help. What do you advise on this? ;) Am I ignoring laptops too much, maybe? Are low-priced Acers that noisy/high-pitched under full load?

  • SIP and NAT routers?

    - by OverTheRainbow
    Hello. SIP was not built with NAT routers in mind, and I'd like to get to the bottom of this issue to check what needs to be done on all devices so it works with NAT routers, and to understand in what contexts it just can't be used and I should look at more NAT-friendly alternatives like IAX. A picture being worth a thousand words, here's the layout I need to use: http://img62.imageshack.us/img62/4077/sipandnatrouters.jpg

    - The PBX server is located on the private LAN behind a NAT router connected to the Internet (I know it'd be easier if it were located in the public network, but this router doesn't support DMZs, so the server has to be on the private network).
    - A couple of (soft|hard)phones are located on the same LAN and connected to the PBX server, along with a PSTN gateway (Linksys 3102 or a Digium PCI card).
    - Remote users using (soft|hard)phones are located somewhere on the Net with dynamic IPs and are also behind NAT routers.
    - I may or may not have control over the local NAT router where the PBX server is located, but I have no control over the remote NAT routers, either because the users don't have the computer knowledge to map ports or because the routers are off-limits (e.g. web cafés, hotel LANs, etc.).

    Is it possible to configure the PBX server, the (soft|hard)phones, and the PSTN gateway so that all conversations work fine, no matter the endpoints (POTS caller/local phone, POTS caller/remote phone, local phones, remote phone/local phone)? In which cases should I expect problems, and are there solutions? FWIW, I'm leaning toward using FreeSWITCH, but I could end up using Asterisk if there are technical advantages to it in this context. Thank you for any info.
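
    For the server side, a hedged sketch of what the fix typically looks like, shown with Asterisk's chan_sip options since sip.conf is the widely documented example (FreeSWITCH has equivalent profile parameters; the addresses are placeholders): the PBX must advertise the router's public address and treat remote peers as NATted, and the local router must forward the SIP and RTP ports to it.

        ; sip.conf [general] - assumes a static public IP on the office router
        externip=203.0.113.10
        localnet=192.168.1.0/255.255.255.0
        nat=yes          ; ignore the addresses remote phones advertise; reply to where packets came from
        qualify=yes      ; periodic keepalives hold the remote routers' NAT pinholes open

    With that in place, remote phones behind unmodified NAT routers usually work, because the PBX answers to the source address/port of their packets rather than to what the SIP headers claim.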

  • Outlook users connected to Exchange can send email from other email accounts

    - by Sherriffwoody
    We have found an issue on our systems whereby an Outlook user (2007 or 2010) connected to our Exchange 2007 server can send emails as other users, using the following steps: within Outlook, click New Email, then select the From button to show a list of the accounts Outlook contains - but it also shows the option Other Email Address. This brings up a small dialog box with another button which, when selected, allows the user to pick an email address from their contacts or from Active Directory. In most cases the user can select any address in Active Directory and send an email as if it were coming from that selected address. It seems not everyone has this ability, and I'm guessing it is something to do with settings in Exchange or AD (version 6), or a group policy that could be implemented to stop users being able to do this. We have no idea what allows this, and I have failed to find anything using Dr Google. No one has set up delegates within Outlook, but it does seem to be something similar. Does anyone know how to lock this down? Thanks in advance.

  • Should I upgrade to Symantec Endpoint Protection? [closed]

    - by Alex C.
    I'm the IT manager at an animal shelter in Upstate New York. We have a Windows network with about 50 desktops running Windows XP Pro. We used to use CA eTrust Antivirus, but that product didn't work too well (too many infections got through). About six months ago, we switched to using Symantec Antivirus Corporate Edition ver. 10.1.8.8000. If anything, the Symantec product is even worse. The last six weeks in particular have been very bad -- we've had about seven or eight PCs get hit with those malware infections that masquerade as antivirus software. In most of those cases, Symantec didn't even flag the malware at all. So... what gives with the Symantec Antivirus? As far as I can tell, it's installed correctly and downloading updated definitions nightly. I can upgrade to Symantec Endpoint Protection for $220 (we get non-profit pricing), but I don't want to do it if it's not going to be significantly better. Any advice? Should I switch to something else entirely? Thanks!

  • Windows Server 2008 Task Scheduler: task started (Task=100) but task did not complete (Task=102) when the result code is 2

    - by MacGyver
    Can someone give me a use case for setting up a Windows Server 2008 Task Scheduler task (we'll call this "test") that completes (action completed is Task=201) with an error (ResultCode=2)? Below is the event trigger code for another task (called "notification") that sends out an email based on the event history of the "test" task. I've got use cases for a task that opens a program successfully and for one that fails to find the program; I'm just trying to think of how I can test a scenario where it finds the program, but something fails with warnings or errors.

        /* Failed - task started but had errors (result code of 2) */
        <QueryList>
          <Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
            <Select Path="Microsoft-Windows-TaskScheduler/Operational">
              *[System[Provider[@Name='Microsoft-Windows-TaskScheduler']
                and (Level=0 or Level=1 or Level=2 or Level=3 or Level=4 or Level=5)
                and (Task = 201)]]
              and *[EventData[Data[@Name='TaskName']='\Tasks\test']]
              and *[EventData[Data[@Name='ResultCode']='2']]
            </Select>
          </Query>
        </QueryList>
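
    For a concrete test, a hedged sketch: point the "test" task's action at a command that deliberately exits with code 2, then run it; the resulting 201 event should carry ResultCode 2 for the notification query above to match:

        schtasks /Create /TN "\Tasks\test" /TR "cmd.exe /c exit 2" /SC ONCE /ST 23:59 /F
        schtasks /Run /TN "\Tasks\test"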
