Search Results

Search found 18841 results on 754 pages for 'path finding'.


  • Recovering drivers from previous installation?

    - by Walkerneo
    Yesterday I bought a new computer and I've been setting it up since this morning. The hard drive came with Windows 7 Home Premium (x64) installed, but I decided to install Windows 7 Ultimate (x64) instead - this is what I'm currently using. Unfortunately, some drivers from the original installation are ones I need in this installation. I only noticed because the computer doesn't have the ethernet drivers, so I can only connect to the internet via wireless. Other drivers are missing as well, but I'm not yet seeing the effects. I still have the Windows.old folder with the previous installation, which has the drivers. Is there any way to copy the necessary ones over? I tried the options for updating driver software through Device Manager with the path set to the System32\drivers folder of the old installation, but it didn't find anything.
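    Device Manager searches for .inf driver packages, and System32\drivers holds only the compiled .sys binaries, which is likely why the search came up empty. A sketch of two approaches, assuming a default Windows.old layout (the FileRepository package folder in the pnputil line is a made-up example; use the real folder name):

        rem Point "Update Driver Software" at the old driver store and tick "Include subfolders":
        rem   C:\Windows.old\Windows\System32\DriverStore\FileRepository
        rem Or stage a specific package into the new installation (Windows 7 pnputil syntax):
        pnputil -i -a "C:\Windows.old\Windows\System32\DriverStore\FileRepository\e1k62x64.inf_amd64_neutral_0123456789abcdef\e1k62x64.inf"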

    Read the article

  • How can I tell what user account is being used by a service to access a network share on a Windows 2008 server?

    - by Mike B
    I've got a third-party app/service running on a Windows 2003 SP2 server that is trying to fetch something from a network share on a Windows 2008 box. Both boxes are members of an AD domain. For some reason, the app is complaining about having insufficient permissions to read/write to the share. The app itself doesn't have any special options for acting on the authority of another user account; it just asks for a UNC path. The service is running with a "log on as" setting of Local System account. I'd like to confirm what account it's using when trying to communicate with the network share. Conversely, I'd also like more details on whether and why it's being rejected by the Windows 2008 network share. Are there server-side logs on 2008 that could tell me exactly why a connection attempt to a share was rejected?
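    One way to get those server-side details, assuming auditing is acceptable on the 2008 box: enable File Share auditing so failed share access lands in the Security event log, where the event records the account name the client actually presented (a service running as Local System normally authenticates across the network as the computer account, DOMAIN\HOSTNAME$). A sketch:

        rem On the Windows 2008 file server, from an elevated prompt:
        auditpol /set /subcategory:"File Share" /success:enable /failure:enable
        rem Reproduce the failure, then look for File Share audit events in the
        rem Security log; the "Account Name" field shows who the service connected as.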

    Read the article

  • Email Deliverability on Yahoo is very poor. Any suggestions please?

    - by xarejay28x
    All other ISPs (Google, AOL, Hotmail) are fantastic, hitting 98-100% in the inbox. Yahoo is very random, and lately our deliverability has dropped drastically. All IPs are sender-certified by Return Path, and supposedly that automatically whitelists our IPs and allows us to send as many emails as we want (from what my boss says). Should I bother with applying to Yahoo's bulk sender form? I run every email campaign through: SpamAssassin (excellent scores), test accounts (to test deliverability), and old-school HTML format. I'm running out of ideas, I'm starting to be in the hot seat, and I am very fearful for my job position. If you can offer any wise words I will be very grateful. Thank you in advance.
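    Before filling in Yahoo's bulk sender form, it is worth double-checking the DNS side of the sending domain, since Yahoo weighs domain authentication (SPF and DomainKeys/DKIM) heavily. A quick sketch, with example.com and selector1 standing in for the real domain and DKIM selector:

        # SPF record for the sending domain
        dig +short TXT example.com
        # DKIM public key (selector1 is a placeholder for your actual selector)
        dig +short TXT selector1._domainkey.example.com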

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 31 (sys.dm_server_services)

    - by Tamarick Hill
    The last DMV for this month-long blog session is the sys.dm_server_services DMV. This DMV returns information about your SQL Server, Full-Text, and SQL Server Agent related services. To further illustrate the information this DMV contains, let's run it against our Training instance that we have been using for this blog series.

        SELECT * FROM sys.dm_server_services

    The first column returned by this DMV is the actual service name. The next columns are the startup_type and startup_type_desc columns, which display your chosen method for how a particular service should be started. The next columns, status and status_desc, display the current status for each of your services on the instance. The process_id column represents the server process id. The last_startup_time column gives you the last time that a particular service was started. The service_account column provides you with the name of the account that is used to control the service. The filename column gives you the full path to the executable for the service. Lastly we have the is_clustered column and the cluster_nodename column, which indicate whether or not a particular service is clustered and part of a resource cluster group, and if so, the cluster node that the service is installed on. This is a good DMV to provide you with a quick snapshot view of the current SQL Server services you have on your instance. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/hh204542.aspx Follow me on Twitter @PrimeTimeDBA
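    For readers following along, a more explicit version of the query that names the columns discussed above (the DMV was introduced in SQL Server 2008 R2 SP1):

        SELECT servicename, startup_type_desc, status_desc,
               process_id, last_startup_time, service_account,
               filename, is_clustered, cluster_nodename
        FROM sys.dm_server_services;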

    Read the article

  • Ubuntu.sh on Android Phone

    - by pjtatlow
    So today I noticed something weird on my phone. I used a terminal emulator to see what I could do with it, and noticed that there is a file called ubuntu.sh. I tried to run it and got all sorts of permission denied errors, and then I decided to root my phone. But now I'm nervous to run it; does anyone know what it does or why it is there? Edit: I forgot to mention that I have an AT&T Motorola Atrix 4G running Android 2.3.6. Also, when I use the app SSHDroid to go into my phone from my Ubuntu machine, I'm greeted with this: "The programs included with the Ubuntu system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright. Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. To access official Ubuntu documentation, please visit: http://help.ubuntu.com/" Also, here are the contents of ubuntu.sh:

        #!/bin/sh
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib

        # make sure no left-over pidfiles, etc.
        #######################################
        rm -fr /var/run/*
        rm -fr /var/lock/*
        chmod 666 /system/usr/keychars/*
        rm -f /tmp/tab*
        mkdir -p /home/adas/Desktop
        chmod 755 /home/adas/Desktop
        chown -R adas.adas /home/adas/Desktop
        [ -x /usr/bin/firefox-install-profile ] && /usr/bin/firefox-install-profile
        [ -x /usr/local/bin/check-citrix-certs.sh ] && /usr/local/bin/check-citrix-certs.sh
        [ -x /usr/bin/migrate-webapps ] && /usr/bin/migrate-webapps

        # boot scripts
        ##############
        /etc/init.d/rc S

        # lock down /var for CTS
        ########################
        chown root.adas /var/tmp
        chown root.adas /var/lock
        chmod 775 /var/tmp
        chmod 775 /var/lock
        chmod 666 /dev/socket/dbus
        chmod 666 /dev/null

        # runlevel 2 scripts
        ####################
        /etc/init.d/rc 2

        cp /sdcard/*.lic /data/
        chmod 666 /data/*.lic

    This is really strange, any ideas?

    Read the article

  • Understanding HTTP Cookies in Indy 10 for Delphi XE2

    - by Jerry Dodge
    I have been working with Indy 10 HTTP servers/clients lately in Delphi XE2, and I need to make sure I'm understanding session management correctly. In the server, I have a "bucket" of sessions, which is a list of objects that each represent a unique session. I don't use a username and password to authenticate users; rather, I use a unique API key which is issued to a client and has an expiration. When a client wishes to connect to the server, it first logs in by calling the "login" command, which is a path like this: http://localhost:1234/login?APIKey=abcdefghij. The server checks this API key against the database, and if it's valid, it creates a new session in the bucket, issues a new cookie (a unique string), and sets the response cookies with Success=Y and Cookie=abcdefghij. This is where I have the question. Assuming the client end has its own method of cookie management, the client will receive this login response back from the server and automatically save the cookies as necessary. Any future request from the client to the server shall automatically send along these cookies, and the client side doesn't necessarily have to worry about setting these cookies when sending requests to the server. Right? PS - I'm asking this question here on programmers.stackexchange.com because I didn't see it fit to ask on stackoverflow.com. If anyone thinks this is appropriate enough for stackoverflow.com, please let me know.

    Read the article

  • Introducing Oracle System Assistant

    - by B.Koch
    by Josh Rosen One of the challenges with today's servers is getting the server up and running and understanding what all of the steps are once you plug the server in for the first time. So many different pieces come into play: installing drivers, updating firmware, configuring RAID, and provisioning the operating system. All of these steps must be done before you can even start using the server. Finding the latest firmware and drivers, making sure you have the right versions, and knowing that all the different software and firmware components work together properly can be a real challenge. If not done correctly, such as separately downloading disk or controller firmware that doesn't match the existing OS drivers, you could experience bugs, performance problems, and incompatibilities. Gone are the days of having to locate the tools and drivers media that shipped with the server, only to find out that newer versions of software and firmware are available on the web. Oracle has solved these challenges in the new X3-2 family of servers by introducing Oracle System Assistant. Oracle System Assistant is an innovative tool that is built in to every new x86 server. It provides step-by-step assistance with configuring the server, updating firmware and drivers, and provisioning the operating system. Once you have completed all of the steps in the Oracle System Assistant tool, the server is ready to use. Oracle System Assistant was designed to be easy and straightforward. Starting it is as simple as pressing F9 when the server is booting. You'll need a keyboard, monitor, and mouse, or you can use the remote console feature of Oracle ILOM (Integrated Lights Out Manager) to access a virtual KVM on the server from any machine. From there Oracle System Assistant will walk you through each of the steps necessary to set up your server. After configuring the network settings for Oracle System Assistant, the next step is to check for any new software or firmware for the server. Oracle System Assistant connects back to Oracle using your My Oracle Support account and downloads any updates that were made available to you for this specific server. This is where you really start to see the innovation that went into Oracle System Assistant. Firmware for Oracle ILOM and BIOS, operating system drivers, and other system firmware (including for option cards and disk drives) come as a single bundle, downloaded as a single unit, that has been engineered and tested to work together by Oracle. Oracle System Assistant figures out the right combination for your server, so you don't have to. Now that the server has the latest firmware, Oracle System Assistant will next walk you through configuring the hardware. From Oracle System Assistant, you can configure many Oracle ILOM settings, including the network settings and initial user accounts. This ensures that ILOM is accessible and ready to use. Oracle System Assistant is where all parts of the server come together. In addition to communicating with Oracle ILOM and interacting with BIOS, Oracle System Assistant understands and can configure the storage subsystem. Before installing the operating system, Oracle System Assistant can detect the storage configuration and configure RAID for all disks in the system. At this point, the server is ready to be provisioned with the host operating system. You can use Oracle System Assistant to provision a supported OS, including Oracle Linux, Oracle VM, RHEL, SuSE Linux, and Windows.
And by using Oracle System Assistant, you can be sure that the proper OS drivers are installed for each of the installed hardware components. With Oracle System Assistant, initial setup of the server has never been easier. If we can innovate around problems and find solutions to make our servers easier to manage, this reduces IT costs and makes managing servers simpler. I think with Oracle System Assistant we have done just that. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Configure SQL Server to Allow Remote Connections

    - by Ben Griswold
    Okay. This post isn’t about configuring SQL to allow remote connections, but wait, I still may be able to help you out. "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server)" I love this exception. It summarizes the issue and leads you down a path to solving the problem.  I do wish the bit about allowing remote connections had been left out of the message, though. I can’t think of a time when having remote connections disabled caused me grief.  Heck, I can’t ever remember how to enable remote connections unless I Google for the answer. Anyway, 9 out of 10 times, SQL Server simply isn’t running.  That’s why the exception occurs.  The next time this exception pops up, open up the services console and make sure SQL Server is started.  And if that’s not the problem, only then start digging into the other possible reasons for the failure.
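    A quick way to do that check from a command prompt; the service names below are the defaults (a named instance shows up as MSSQL$INSTANCENAME instead):

        rem Is the default instance running?
        sc query MSSQLSERVER
        rem Start it if it is stopped:
        net start MSSQLSERVER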

    Read the article

  • copSSH: how to restrict a user from going back from their main root

    - by minus4
    I have installed SFTP on a Windows server using copSSH, and all is good and it works well; however, you can go back from the main root. For example, when I use C:\copSSH\home\{username}, as that user I can go back into copSSH and into those directories too. And I have a user set up to actually be C:\inetpub\wwwroot, but that user can go into the system and everything. I have this set as my path: /cygdrive/c/inetpub/wwwroot. It would be ideal if the user could only go forward from the start directory, rather than out and about. There is no write ability, but there is read and download. Thanks
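    copSSH is OpenSSH under the hood, so one option is to jail the account with ChrootDirectory in copSSH's sshd_config. A sketch, assuming OpenSSH 4.9 or newer and a user named webuser (a hypothetical name); note that OpenSSH requires the chroot path and every directory above it to be root-owned and not group-writable:

        # in copSSH's sshd_config
        Match User webuser
            ChrootDirectory /cygdrive/c/inetpub/wwwroot
            ForceCommand internal-sftp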

    Read the article

  • Merit and demerits for various Linux fiberchannel multipath options

    - by wzzrd
    On our Linux servers, we currently use HP's qla2xxx drivers, because they have multipathing (active/passive) built in. There are, however, various other options, like Red Hat's device-mapper-multipath with the stock qla2xxx drivers (multibus and failover) and things like SecurePath and PowerPath (both of which can do trunking, iirc). Can someone tell me what the merits and demerits of the various options are (if I can ask such a question), besides the obvious fact that the {Secure,Power}Path options cost vast amounts of money? I'm mainly interested in the freely available options, like HP's qla2xxx vs. Red Hat's multipathd and possible other open source solutions, but I would like to hear good reasons to go for the commercial solutions too. UPDATE: I'll be benchmarking various options the coming few days (the average of 10 runs of iozone for each option, the options being native qla2xxx failover, native qla2xxx multibus, and HP qla2xxx failover). I'll post a summary of results here for those interested.
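    For reference when weighing the free options: device-mapper-multipath is driven by /etc/multipath.conf, and a minimal active/passive setup comparable to the HP driver's failover mode looks roughly like this (the device name in the blacklist is an assumption for a typical system):

        # /etc/multipath.conf - minimal dm-multipath sketch
        defaults {
            path_grouping_policy    failover    # active/passive, like the HP failover mode
            user_friendly_names     yes         # /dev/mapper/mpath0 instead of WWID names
        }
        blacklist {
            devnode "^sda$"                     # keep the local boot disk out of multipath
        }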

    Read the article

  • Apache not running from rc.d on FreeBSD

    - by Oksana Molotova
    I'm using FreeBSD 8.3 and Apache 2.2. I didn't install Apache from ports; instead I compiled it from source because I wanted to move the binary and configuration to a different path (I'm centralizing all of the major production daemons and their configurations in a single place). In any case, I based the /usr/local/etc/rc.d/apache22 file on one from a different server where it was installed from ports; I only modified the binary and config paths within. I can manually execute it with /usr/local/etc/rc.d/apache22 start; however, even with apache22_enable="YES" in /etc/rc.conf it fails to start at boot. All permissions and ownership are identical to the other server where it works. What am I missing, and is there a way to debug this kind of thing?
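    Two things worth checking, as a sketch: at boot, rc(8) only considers scripts that carry rcorder headers (# PROVIDE:, # REQUIRE:) and a matching rcvar, and tracing the script shows exactly where the rc.conf gating gives up:

        # confirm the script exposes its rc.conf knob and sees it enabled
        /usr/local/etc/rc.d/apache22 rcvar
        # trace the startup logic to see where it bails out
        sh -x /usr/local/etc/rc.d/apache22 start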

    Read the article

  • How to efficiently permanently redirect 150,000 images?

    - by Fabio Spampinato
    For SEO purposes I need to rename around 150,000 images, then I'd like to permanently redirect requests for the previous URL locations to the new locations. The current URL for every image is something like: website.com/something/unique_id/filename.jpg and I want to redirect them to: website.com/something/unique_id/new_filename.jpg. I can only think of 2 options: 1) Create an enormous list of redirects to include in my nginx conf file. 2) Redirect those requests to something like "website.com/new_location/unique_id" that will redirect the request again to the new path. Are there other, better options? Should I avoid multiple 301 redirects? Will crawlers downgrade my rankings because of multiple redirects?
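    Option 1 does not have to mean 150,000 rewrite rules: nginx's map directive can pull the old-to-new pairs from a generated file and answer each request with a single 301, which keeps lookups hash-based and avoids chained redirects. A sketch, with the map file path and pair format as assumptions:

        # http context in nginx.conf
        map $uri $new_uri {
            # image_redirects.map holds one pair per line, e.g.:
            #   /something/123/old_name.jpg  /something/123/new_name.jpg;
            include /etc/nginx/image_redirects.map;
        }
        server {
            listen 80;
            if ($new_uri) {
                return 301 $new_uri;
            }
        }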

    Read the article

  • Running Mac OS X 10.6, my user's home directory is wrong

    - by Erik Miller
    Somehow my home directory on my Mac machine has been changed and I'm not sure how to go about changing it back; I'm more of a Linux guy, and the Mac has some other mechanism for storing that information. Basically, when I log into the machine normally and start a terminal window, I start in the /Users/erik_miller directory, which is my home directory, but when I run something like cd ~ the machine tries to change to "/Users/erik_miller." - yes, the same path with a period on the end. I can change my $HOME environment variable for the session, but the next time I start the machine it reverts. So, I think if I can find where that information is stored I can just change it there and hopefully all will be well.
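    On 10.6 that information lives in Directory Services, so a sketch of checking and fixing it with dscl (the user name is taken from the question):

        # read the stored home directory for the account
        dscl . -read /Users/erik_miller NFSHomeDirectory
        # change it: old value first, then the corrected one
        sudo dscl . -change /Users/erik_miller NFSHomeDirectory "/Users/erik_miller." "/Users/erik_miller"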

    Read the article

  • /planes and /clubs or /wiki/planes and /wiki/clubs

    - by Jelmer
    I am currently working on a nice application about which I can't share all the details, but it will have some sort of a wiki part. In this wiki, you will be able to change the planes as well as the clubs; maybe in the future it will be possible to change the countries and manufacturers as well, but I have to think about this and check how good an idea it is. But you will understand that it has to be extensible! That is really important. The options I see: 1) Use the planes controller with an edit page, and the same for the clubs. 2) Route the planes and clubs controllers to the wiki controller, so we have 1 nice "path" to edit this stuff. I want to have it called wiki, that is for sure, because that is what it is, but I am storing the planes and clubs data in its own table in my database. I think that is kind of obvious, since it has to be maintainable. Right now you could edit a plane via the URL: example.com/wiki/planes/edit/Duo_Discus.html Do you think that is better than example.com/planes/edit/Duo_Discus.html, since it is easy for the user to understand that he is working in the wiki instead of in the planes? Or do you think this will break the user experience?

    Read the article

  • BizTalk: mapping with Xslt

    - by Leonid Ganeline
    BizTalk Map Editor (Mapper) is a good editor, especially in the latest 2010 version of BizTalk. But sometimes it still cannot do a task easily. That is when it is time for Xslt code; it is time to remember that maps are executed by the Xslt engine. Right-click the Mapper grid (the field between the source and target schemas) and choose Properties / Custom XSLT Path. Enter there the name of the file with the Xslt code. Only this code will be executed; forget the picture in the Mapper, all those links and functoids. Let's look at a real-life example. There are two source Addresses. One is on the top level and the second is inside the Member_Address record with MaxOccurs=*. The target address is placed inside the Locator record with MaxOccurs=*. The requirement is to map all source addresses to the one target address structure. [The source and result Xml appear as screenshots in the original post.] Try to do this mapping with the Mapper and you will spend a good amount of time, and the resulting map will be tricky. If we use Xslt code, the mapping is simple and unambiguous. Simple, elegant.
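    For the record, a sketch of what that custom Xslt could look like; the element names here are assumptions, since the original screenshots are not reproduced:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- match the source root, whatever its name -->
          <xsl:template match="/*">
            <Members>
              <!-- one Locator per source address, top-level and repeating alike -->
              <xsl:for-each select="Address | Member_Address/Address">
                <Locator>
                  <Address><xsl:value-of select="."/></Address>
                </Locator>
              </xsl:for-each>
            </Members>
          </xsl:template>
        </xsl:stylesheet>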

    Read the article

  • Shockwave Flash crashes with Chromium and Firefox

    - by Stephan
    Since updating to Ubuntu 13.10, Shockwave Flash does not work in Chromium or Firefox. Both show a "Shockwave Flash has crashed" dialog.

    Chromium 29.0.1547.65: After loading a page with a Flash video, I get this warning on the console twice: NVIDIA: could not open the device file /dev/nvidia0 (Operation not permitted). When I try to play the video, it crashes and I receive these distorted error messages:

        (exe:14868): Gdk-WARNING **: XID collision, trouble ahead
        [xcb] Extra reply data still left in queue
        [xcb] This is most likely caused by a broken X extension library
        [xcb] Aborting, sorry about that.
        owser --type=plugin --plugin-path=/usr/lib/flashplugin-installer/libflashplayer.so --lang=de --channel=14560.18.20766867: ../../src/xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.

    Firefox 25.0: With Firefox, I get these errors:

        ###!!! ABORT: Request 154.24: BadValue (integer parameter out of range for operation); 3 requests ago: file /build/buildd/firefox-25.0+build3/toolkit/xre/nsX11ErrorHandler.cpp, line 157
        WARNING: pipe error (110): Connection reset by peer: file /build/buildd/firefox-25.0+build3/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 437
        ###!!! [Parent][RPCChannel] Error: Channel error: cannot send/recv

    What I tried so far: reinstalling flashplugin-installer and changing the permissions of /dev/nvidia0. It seems that Flash Aid is no longer available. GPU acceleration is working fine, e.g. for Portal. Does anyone know how to fix this?

    Read the article

  • Cannot set MemOpMode or ProcVirtualization with syscfg.exe

    - by EGr
    When attempting to set MemOpMode or ProcVirtualization with syscfg.exe, I get the following error:

        C:\Users\EGr>syscfg --MemOpMode=AdvEccMode
        System Services or CSIOR disabled

    In the past, I have been told that this can be resolved by forcefully reinstalling the Dell Lifecycle Controller firmware and then trying again; however, I cannot find my notes on how to do that. Has anyone ever run into this issue, and does anyone know how I can fix it? If it is possible to forcefully reinstall the firmware, how would I do that? I've tried running the installer, but it fails after running for ~2 min. I believe there is a way to fix this as well, by adding/modifying a registry value at HKLM\System\Control\CurrentControlSet\Services\IPMIDRV (that might not be the correct path, but I know it is IPMIDRV). Is this a common issue? What is the actual cause of it?
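    CSIOR stands for Collect System Inventory On Restart, a Lifecycle Controller setting that syscfg depends on. It can be re-enabled from the Lifecycle Controller screen (F10 at boot), or remotely on iDRAC generations that support the racadm get/set attribute syntax; the attribute path below is an assumption for newer iDRACs:

        rem check the current value
        racadm get LifecycleController.LCAttributes.CollectSystemInventoryOnRestart
        rem enable it, then reboot so the inventory is collected
        racadm set LifecycleController.LCAttributes.CollectSystemInventoryOnRestart Enabled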

    Read the article

  • My sendmail sends spam and I can't identify which script sends it

    - by Andrew
    I've noticed one of my servers is sending mass spam. The messages are like the one below (sending from: [email protected]). I've deleted USER_ACCOUNT, but I'd like to know how I can identify the script (probably a hacked PHP script) that sends the mass mail, considering this server hosts numerous websites.

        I0/83/968855 Mreturntosender: cannot select queue for postmaster: Broken pipe Fbn $_Unknown UID 1008@localhost ${daemon_flags}c u SUSER_ACCOUNT [email protected] H?P?Return-Path: <?g> H??Received: (from Unknown UID 1008@localhost) by benedictus.MYDOMAIN.COM (8.14.3/8.14.3/Submit) id q5H8Bx9A066412; Sun, 17 Jun 2012 11:11:59 +0300 (EEST) (envelope-from USER_ACCOUNT) H?D?Date: Sun, 17 Jun 2012 11:11:59 +0300 (EEST) H?M?Message-Id: <[email protected]> H??From: Tiffany June <[email protected]> H??To: "Fernando" <[email protected]> H??Subject: Tiffany June ADDED YOU to her Private Wish List H??MIME-Version: 1.0 H??Content-Type: multipart/related; boundary="=_8b944d33596415b2dd4371ef94e08aee
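    The queue file already gives one lead: the message was submitted by UID 1008, which can be mapped to a site account. And if PHP's mail() is the suspect, PHP 5.3+ can be told to tag and log the originating script. A sketch of both checks (the log path is an assumption):

        # which local account is UID 1008?
        getent passwd 1008
        # in php.ini (PHP >= 5.3): log every mail() call with the calling script
        #   mail.add_x_header = On
        #   mail.log = /var/log/phpmail.log
        tail -f /var/log/phpmail.log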

    Read the article

  • Keyboard locking up in Visual Studio 2010, Part 2

    - by Jim Wang
    Last week I posted about looking into the keyboard locking up issue in Visual Studio.  So far it looks like not a lot of people have replied to provide concrete repro steps, which confirms my suspicion that this is somewhat of a random issue. So at this point, I have a couple of choices.  I can either wait for somebody in the community to provide a repro of the problem that I can reliably run into, or I can do the work myself. I’m going to do both, so while I’m waiting for more possible bug reports, I’m going to write a tool that models the behavior of a typical Visual Studio user and use that to hopefully isolate the problem. I’ve chosen to go with this path since given the information in the bug reports, it seems people hit the issue with many different configurations in many different scenarios.  This means that me sitting down without any solid repro steps is likely not going to be a good use of time.  Instead, I’m going to go with a model-based testing approach where I will define a series of actions that a user in VS can do, and then proceed to run my model.  I’ll let you guys know how this works out for isolating bugs :) I’m using an internal tool for the model engine and AutoIt for the UI automation (I want something lightweight for a one-off).  One of the challenges will be getting feedback: AutoIt is great at driving, but not so great at understanding what success and failure means.

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are: Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is, what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fills out my constraints?
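    A sketch of the whitelist approach (the second option), in plain Java with no Jersey dependency; the exact substitution rules here are illustrative assumptions, not a library API:

        import java.text.Normalizer;

        public class SafeFileNames {

            // Normalize accents away, keep letters, digits, dots and dashes,
            // and collapse every other run of characters to a single '_'.
            public static String toFileName(String title) {
                String normalized = Normalizer.normalize(title, Normalizer.Form.NFKD)
                        .replaceAll("\\p{M}+", "");           // strip combining accent marks
                String safe = normalized
                        .replaceAll("[^A-Za-z0-9.-]+", "_")   // replace unsafe runs
                        .replaceAll("^_+|_+$", "");           // trim leading/trailing '_'
                return safe.isEmpty() ? "untitled" : safe;
            }

            public static void main(String[] args) {
                // prints: Unsafe_a_blog_post_title
                System.out.println(toFileName("Ünsafe: a blog/post title?"));
            }
        }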

    Read the article

  • Install headers-more-nginx-module into an existing nginx

    - by Hunt
    I have installed nginx, but now I want to install the headers-more-nginx-module into the existing installation of nginx. Can someone tell me how to do it? I have found the following commands, but they are for a new installation of nginx together with headers-more-nginx-module:

        wget 'http://nginx.org/download/nginx-1.2.4.tar.gz'
        tar -xzvf nginx-1.2.4.tar.gz
        cd nginx-1.2.4/
        # Here we assume you would install your nginx under /opt/nginx/.
        ./configure --prefix=/opt/nginx \
          --add-module=/path/to/headers-more-nginx-module
        make
        make install

    Currently, I guess my nginx is under where the nginx.conf is placed: etc/nginx
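    These nginx versions have no loadable-module mechanism, so the module has to be compiled in. The usual approach, sketched below, is to rebuild the same nginx version with the original configure arguments plus --add-module and then swap the binary; the <existing arguments> placeholder must be replaced with the real output of nginx -V:

        # print the version and configure arguments of the installed binary
        nginx -V
        # rebuild a matching source tree with the same arguments plus the module
        ./configure <existing arguments> --add-module=/path/to/headers-more-nginx-module
        make
        # back up the old binary, copy the new objs/nginx over it, and restart nginx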

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status. I've checked my regular expression against the directory paths with RegexBuddy just to be sure it was correct. The problem doesn't appear to be regex-related, although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking access to the Railo admin site.
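    One detail worth noting, as an aside: requests handled by ProxyPassMatch are never mapped to the filesystem, so <Directory> sections do not apply to the proxied .cfm/.cfc requests at all; matching on the URL instead sidesteps that. A sketch under that assumption, in the same Apache 2.2 Order/Deny style as the configuration above:

        # Deny everything under /Railo/ except /Railo/Public/ at the URL level,
        # so it also covers requests that mod_proxy hands to Tomcat/Railo.
        <LocationMatch "^/Railo/(?!Public/)">
            Order Deny,Allow
            Deny from all
            # Allow from 203.0.113.10   # add trusted client addresses here
        </LocationMatch>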

    Read the article

  • Windows 7 NFS with Linux server

    - by Vitaly
    Hi. I have an Ubuntu server and want to access its web folder (/var/www). What I have done: installed nfs-kernel-server, nfs-common and portmap (as in the FAQ). Set up /etc/exports:

        /var/www 192.168.1.0/255.255.255.0(rw,no_root_squash,async,subtree_check)

    Then: sudo exportfs -ra Then: sudo /etc/init.d/nfs-kernel-server restart I checked whether it all works on the same machine: sudo mount 192.168.1.101:/var/www /mnt/test Then I accessed /mnt/test and saw that all the data was present and everything was ok. Next, I tried to connect this folder to Windows 7 using the NFS client. First, I checked that the Linux path was exported successfully: showmount -e 192.168.1.101 which returned /var/www 192.168.1.0/255.255.255.0 All ok, on to mounting: mount -o anon 192.168.1.101:/var/www z: The console said that all succeeded... but I can't access drive Z (the drive exists in the system and points to the right folder). When I try to access drive Z, Explorer just hangs and then says that the timeout expired. Help me please.
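    A hang like that is often a permissions mismatch: with mount -o anon, Windows' Client for NFS presents UID/GID -2 by default, which has no rights on /var/www. The anonymous identity can be mapped to the export's owner via the registry; the values below are an assumption (use the UID/GID that actually own /var/www, e.g. 33 for www-data on Ubuntu):

        reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousUid /t REG_DWORD /d 33
        reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousGid /t REG_DWORD /d 33
        rem restart the "Client for NFS" service (or reboot), then mount again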

    Read the article

  • How to fix the "Failed to initialize Windows Azure storage emulator" error

    - by ybbest
    When you press F5 to start debugging an Azure project, you might get an exception complaining that the storage emulator failed to initialize. If you go to the Output window, you will see the detailed error message: Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Development Storage: the SQL Server instance 'localhost\SQLExpress' could not be found. Please configure the SQL Server instance for Development Storage using the 'DSInit' utility in the Windows Azure SDK. This is because, by default, Azure uses SQLExpress to start Development Storage. To fix this, open a command prompt and navigate to C:\Program Files\Windows Azure SDK\v1.4\bin\devstore (depending on your Azure version, the file path is slightly different). Next, run DSInit /sqlInstance:. (the "." means SQL Server's default instance; if you use a named instance, replace "." with the instance name). After a short while, you should see a window showing that the configuration succeeded. You can download a batch file here. References: http://msdn.microsoft.com/en-us/library/gg433132.aspx
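    The two commands from the post in one place, for convenience (the SDK path matches v1.4; adjust for your version):

        cd /d "C:\Program Files\Windows Azure SDK\v1.4\bin\devstore"
        rem "." = default instance; use DSInit /sqlInstance:SQLEXPRESS for a named instance
        DSInit /sqlInstance:.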

    Read the article

  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity.  We ran into an SSL certificate issue when we were setting up the servers. Before running the steps to Deploy an Edge Server, we successfully imported our SSL certificate that we use for external access on all of the new servers.  After successfully completing the first three Deploy Edge Server steps on one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server.  After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, no certificates were listed.   The first thing we did was use the Certificates mmc snap-in to review the SSL certificate information.  We noticed in the General tab that Windows did not have enough information to verify the certificate, and in the Certification Path tab that the issuer of the certificate could not be found.     While troubleshooting, we learned that we could not access the URL for the certificate's CRL to download the CRL file, due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file, and when we reran Step 4 to assign an existing certificate, the certificate was listed.
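    A quick way to reproduce that check from the command line on an Edge Server: certutil walks the certificate chain and fetches each CRL/AIA URL, so blocked CRL downloads show up immediately. A sketch (export the certificate to a .cer file first; the file name is a placeholder):

        rem verify chain building and CRL retrieval for the imported certificate
        certutil -urlfetch -verify edge_external.cer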

    Read the article
